MCPA MuleSoft Certified Platform Architect Level 1 – Implementing Effective APIs Part 2
Hi. In this lecture, let us discuss Anypoint VPCs and understand them. Anypoint Platform allows the configuration of Anypoint VPCs for a particular Anypoint Platform organization, or for a business group within that organization; we can do it at both levels. Mule applications deployed to CloudHub can then be deployed to one of these Anypoint VPCs instead of the shared worker cloud, which we discussed in the previous lecture.
An Anypoint VPC is backed by an AWS VPC on the AWS cloud platform, and it is generally provisioned in the requested AWS region. It is then exclusively assigned to its owning Anypoint Platform organization. That means only authorized users from this Anypoint Platform organization may deploy Mule applications to this Anypoint VPC. The result of any such deployment is, as always, one or more CloudHub workers, but this time in this particular private AWS VPC in that region, okay? Not in the shared VPC among all the other multi-tenant workers.
An Anypoint VPC is created and managed by Anypoint Platform users through the Anypoint Platform web UI, or it can be managed through the Anypoint Platform APIs or the Anypoint CLI, which is a command-line interface. By doing this, implicitly, they operate on the AWS VPC underlying the Anypoint VPC, which is assigned exclusively to the managing Anypoint Platform organization. But AWS-level access to this AWS VPC is not provided; you have to remember this, okay? The administrators can create, delete, and again recreate their Anypoint VPC, but direct access to AWS to alter the underlying AWS VPC is not provided. So that AWS VPC can be used only for Anypoint Platform purposes and cannot be used for any other AWS activities, like creating your own EC2 instances, etc.
Similarly, an existing AWS VPC cannot be reused as an Anypoint VPC. Customers may have their own AWS VPC, perhaps because they have an AWS account for a different purpose, and they may think, okay, can we use that VPC as our MuleSoft Anypoint VPC as well? No, it is not possible. A separate Anypoint VPC can be created only from Anypoint Platform; an existing AWS VPC cannot be customized and reused.
However, if they want to link their AWS VPC to this Anypoint VPC, that can be done via VPC peering, which we will discuss later. An Anypoint VPC generally builds on the characteristics of the CloudHub shared worker cloud, okay? Similar to that, but it has some extra options and services. So let's see the benefits. In general, an Anypoint VPC is similar to a virtual network in a given AWS region. The private IP address range and the region must be defined when creating the VPC and cannot be changed; once done, it is done. So you have to do the CIDR notation calculations and anticipate the number of IPs required in your organization, et cetera.
Plan for, say, five years ahead: anticipate how many applications you will be deploying to the cloud in the next five years, and come up with an IP range size accordingly, okay? Do not just go with a small range, because once done, it is done and cannot be changed. It doesn't mean we cannot recreate; we can always delete and recreate, but that involves a lot of work and testing and a lot of unnecessary disturbance. Every CloudHub worker receives a private IP address from the address range of its VPC.
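As a rough way to sanity-check this kind of CIDR planning, Python's standard `ipaddress` module can show how many addresses a candidate range provides. The /22 block and the sizing numbers below are purely illustrative assumptions, not a MuleSoft recommendation:

```python
import ipaddress

# Candidate private range for the Anypoint VPC (illustrative choice only).
vpc = ipaddress.ip_network("10.0.0.0/22")

total_addresses = vpc.num_addresses  # 2^(32-22) = 1024
print(f"{vpc} provides {total_addresses} addresses")

# Each CloudHub worker consumes one private IP from this range, so a
# crude estimate is apps x workers-per-app, with headroom for growth.
apps, workers_per_app, growth_factor = 60, 2, 3
needed = apps * workers_per_app * growth_factor
print(f"Estimated need: {needed} IPs -> range {'OK' if needed <= total_addresses else 'too small'}")
```

Since the range cannot be changed after creation, erring toward a larger block than the estimate suggests is the safer choice.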
So once the VPC is created with a given address range, whenever we deploy an application and a CloudHub worker is created, a private IP address is allocated from this range of IP addresses in that VPC. It's not like the CloudHub shared worker cloud, where a dynamic address is assigned that is unknown to us. Every CloudHub worker receives a public IP address as well, but unfortunately that is not under the control of the Anypoint VPC administrators; it is generated by the MuleSoft software. The administrator of an Anypoint VPC has full control over the firewall rules of that particular VPC. Okay, so just like we discussed in the previous lectures in this section, the default firewall rules that come once the VPC is created are for ports 8081 and 8082, allowing traffic from anywhere, meaning over the internet (so these are the public access ports), and also rules for ports 8091 and 8092, HTTP and HTTPS respectively, allowing communication within the VPC.
Okay, so these are the default rules that come out of the box, but the admin has full control to add or modify the firewall rules in that VPC. The DNS lookups of CloudHub workers in an Anypoint VPC can use customer-supplied DNS servers for particular DNS names. Okay? So if customers have their own DNS domains, they can use their own DNS servers to perform the lookups, assuming the DNS server IP addresses are reachable from the Anypoint VPC, because to do the lookups, those server IP addresses must be reachable from the Anypoint VPC. Otherwise it cannot be done.
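To make the default rules concrete, here is a small Python sketch of the four rules as just described. The dictionary fields are invented for this sketch; they are not the actual Anypoint Platform API schema:

```python
# Illustrative model of the default Anypoint VPC firewall rules described
# above; field names are made up for this sketch, not the real API schema.
DEFAULT_RULES = [
    {"port": 8081, "protocol": "HTTP",  "source": "0.0.0.0/0", "scope": "public"},
    {"port": 8082, "protocol": "HTTPS", "source": "0.0.0.0/0", "scope": "public"},
    {"port": 8091, "protocol": "HTTP",  "source": "local-vpc", "scope": "vpc-internal"},
    {"port": 8092, "protocol": "HTTPS", "source": "local-vpc", "scope": "vpc-internal"},
]

def ports_open_to(scope):
    """Return the ports reachable for a given scope under the default rules."""
    return sorted(r["port"] for r in DEFAULT_RULES if r["scope"] == scope)

print(ports_open_to("public"))        # [8081, 8082]
print(ports_open_to("vpc-internal"))  # [8091, 8092]
```

The key takeaway the sketch encodes: 8081/8082 are open to the internet by default, while 8091/8092 are only reachable from within the VPC.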
If the DNS server is on-premises, we will discuss how a connection can be established from the Anypoint VPC to your on-premises network in the upcoming slides. Anypoint VPCs and Anypoint Platform environments can be in a many-to-many relationship, meaning VPCs can be associated at the environment level. Often the production environment is assigned to a production VPC, and all other environments are assigned to something like a sandbox VPC or a testing VPC, etc. The production environment can also comprise additional Anypoint VPCs in other regions, for example for DR or high availability. Okay.
CloudHub dedicated load balancers can be instantiated within an Anypoint VPC; we will discuss the load balancers, the SLB and the DLB, in the next lectures. An Anypoint VPC can be connected to an on-premises network using IPsec tunneling or AWS Direct Connect, and an Anypoint VPC can also be privately connected to a customer's AWS VPC using VPC peering. So if a customer already has an AWS account with an AWS setup and a VPC, like we discussed before, that same VPC unfortunately cannot be used as the MuleSoft Anypoint VPC to deploy the workers onto. But we can create VPC peering between the customer's AWS VPC and this Anypoint VPC so that there can be communication between them. Similarly, for your on-premises network you can do IPsec tunneling or AWS Direct Connect to establish the connection; it is effectively a VPN.
What you are seeing in front of you: the figure depicts that two Anypoint VPCs were created in a CloudHub region. The production VPC is associated with the production environment and hosts two CloudHub dedicated load balancers, okay, maybe one for the public APIs and the other for VPC-internal communication only, i.e., private APIs, while the sandbox VPC is associated with the development and staging environments and hosts four CloudHub dedicated load balancers, maybe separating public and VPC-private APIs for the two environments, staging and development. So two for dev and two for staging. Several API implementations from all tiers of API-led connectivity have been deployed to all environments, using two CloudHub workers each.
So you can see there are two Experience APIs, two Process APIs, and two System APIs, each with worker one and worker two, so they are highly available. The CloudHub workers for a given Mule application are assigned to different availability zones; in AWS, if we deploy to different availability zones fronted by a load balancer, we get high availability. The CloudHub shared load balancer (SLB) for the region is available for the public APIs, but it may go unused given the configured CloudHub dedicated load balancers. Because there is already a DLB as well, even though there is an SLB, mostly it won't be used, because the DLB has more features. Now let us see how we can connect an Anypoint VPC to other networks. Because Anypoint VPCs are for the exclusive use of the owning Anypoint Platform organization, they can be connected to other types of networks the organization controls, okay? In particular, an Anypoint VPC can be privately connected to an on-premises network using IPsec tunneling or AWS Direct Connect, like we described a moment back.
When we do this, it is basically an IPsec VPN, and it can use more than one availability zone of the AWS VPC, okay? That means it is not necessarily a single point of failure; it can use multiple AZs, availability zones, so that it is not a single point of failure. The connection can also be backed by a backup VPN. So one way is to connect to the on-premises network, and another way is to privately connect to an AWS VPC using VPC peering, again like we discussed a moment back. But you have to remember one thing: it is not possible to do VPC peering between an AWS VPC and an Anypoint VPC if that other AWS VPC itself originated from some other Anypoint VPC.
Okay? So we can't outsmart the MuleSoft software. We cannot say, okay, there is another Anypoint VPC, and behind the scenes it has an AWS VPC, so let us try to link or do VPC peering between our Anypoint VPC and their AWS VPC, which is the backing VPC underneath that Anypoint VPC. So indirectly, what this tells us is that Anypoint VPCs cannot be peered with each other; two MuleSoft Anypoint VPCs cannot be paired with each other. Okay? Both VPCs are typically in the same region so that VPC peering is easy. It is not strictly necessary, but for performance and security reasons, VPC peering is suggested when both the Anypoint VPC and the AWS VPC are in the same region.
And the third point with respect to VPC peering: if we make sure both VPCs are in the same region, the peering helps avoid traffic going over the public internet for communication between the Anypoint VPC and the AWS VPC, okay? Because all traffic stays within the AWS infrastructure, which results in higher bandwidth, better reliability, lower latency, and better security as well. Both types of connections, whether it is a private connection to your on-premises network or a private connection to your AWS VPC, are private, and hence the private IP addresses of the Anypoint VPC come into play. Okay? But you have to remember that these private addresses are always dynamically assigned.
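The peering constraints just described can be summed up in a small illustrative check. The `Vpc` type, its fields, and the VPC names below are invented for this sketch; this is not an Anypoint Platform API:

```python
from dataclasses import dataclass

@dataclass
class Vpc:
    """Hypothetical VPC descriptor, invented for this illustration."""
    name: str
    region: str
    is_anypoint_backed: bool  # True if this AWS VPC underlies an Anypoint VPC

def can_peer(anypoint_vpc: Vpc, aws_vpc: Vpc):
    """Apply the peering rules from the lecture to two candidate VPCs."""
    if aws_vpc.is_anypoint_backed:
        # Two Anypoint VPCs can never be peered, even indirectly.
        return False, "Anypoint VPCs cannot be peered with each other"
    if anypoint_vpc.region != aws_vpc.region:
        return True, "allowed, but same-region peering is recommended"
    return True, "allowed"

ours = Vpc("prod-anypoint-vpc", "us-east-1", True)
print(can_peer(ours, Vpc("customer-vpc", "us-east-1", False)))   # (True, 'allowed')
print(can_peer(ours, Vpc("other-anypoint", "us-east-1", True)))  # (False, ...)
```

The same-region case is only a recommendation in this sketch, matching the lecture's point that it is suggested rather than required.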
So you can see the representation of the IPsec tunneling and the VPC peering here. The first diagram shows a private network connection via IPsec tunnel between an on-premises network, on the right side, and the AWS VPC underlying an Anypoint VPC, on the left side. In the second picture you can see a private network connection via VPC peering between an AWS VPC, on the right side, and the AWS VPC underlying an Anypoint VPC, on the left side, which implies that the same organization controls both VPCs. Okay? So let us move on to the next lecture. Happy learning.
In this lecture, let us understand the CloudHub load balancers. One form of the CloudHub load balancer service is the CloudHub shared load balancer. Every Mule application deployed to CloudHub receives a DNS entry pointing to the CloudHub shared load balancer. Okay? The CloudHub shared load balancer in a region is shared by all Mule applications in that region, and it builds on top of the AWS Elastic Load Balancer. Okay? If you are aware of AWS or have some acquaintance with the technology, AWS has this ELB concept, the Elastic Load Balancer, which MuleSoft uses to provision the CloudHub shared load balancer. The HTTP and HTTPS requests that API clients send to the CloudHub shared load balancer on port 80 or 443 respectively are forwarded by the shared load balancer to one of the Mule application's CloudHub workers.
Okay? The distribution is approximately round-robin, and the requests reach the Mule application on ports 8081 and 8082 respectively. If a request comes in on port 80 via HTTP, it goes to 8081; if it comes in on 443 via HTTPS, it goes to 8082. We discussed this before many times, with the firewall rules. The CloudHub shared load balancer therefore has to maintain the protocol state, whether it's HTTP or HTTPS, to correctly divert the traffic to the right application port per the firewall rules. So by exposing only an HTTP endpoint on port 8081 or an HTTPS endpoint on port 8082, the Mule application determines the protocol available to its API clients.
Okay? So it has to be communicated: if it is HTTPS, port 8082 is exposed, or else 8081, and the firewall rules should be set accordingly. Only Mule applications exposing HTTP or HTTPS endpoints on 8081 or 8082 can be used with the CloudHub shared load balancer, okay? There is no other way; no other port can be used if you want to make use of the shared load balancer. Meaning, if you are on the shared worker cloud and using the SLB, you cannot opt for any custom port in your API implementation. You cannot choose port 8085 or 8585, something different, and expect to hit that Mule worker application via the SLB or the direct worker URL, okay? It is not possible.
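The fixed SLB port mapping described above can be sketched as a simple lookup. This is only an illustration of the forwarding behavior, not MuleSoft code:

```python
# Illustrative sketch of the SLB's protocol-to-worker-port mapping
# described above; not actual MuleSoft code.
SLB_PORT_MAP = {
    ("http", 80): 8081,    # HTTP on the SLB -> worker port 8081
    ("https", 443): 8082,  # HTTPS on the SLB -> worker port 8082
}

def worker_port(protocol: str, slb_port: int) -> int:
    try:
        return SLB_PORT_MAP[(protocol.lower(), slb_port)]
    except KeyError:
        # The SLB only forwards to 8081/8082; custom worker ports
        # such as 8085 are simply unreachable through it.
        raise ValueError(f"SLB does not forward {protocol} on port {slb_port}")

print(worker_port("https", 443))  # 8082
```

There are exactly two entries in the map, which is the whole point: the SLB offers no configuration hooks to add more.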
It is compulsory: like we discussed for the SLB, the default firewall rules are always 8081 for HTTP and 8082 for HTTPS, and you must use one of these ports, based on the protocol, either HTTP or HTTPS, when sending a request through the shared load balancer, okay? Also, the shared load balancer terminates the TLS connections and uses its own server certificate, so we cannot import our custom certificates into the SLB; this is also something we discussed before. Let us now discuss the CloudHub dedicated load balancers. The other form of the CloudHub load balancer service is the CloudHub dedicated load balancer, which is available only to Mule applications deployed to an Anypoint VPC.
Okay, for the applications that are deployed to an Anypoint VPC, the CloudHub dedicated load balancer can be used. But at the same time, the CloudHub shared load balancer is also available, so they get both the shared load balancer and the dedicated load balancer, unless the organization decides to disable the shared load balancer. One or more CloudHub dedicated load balancers can be associated with an Anypoint VPC; it's not necessary that there be only one DLB, there can be many. Each CloudHub dedicated load balancer receives a private IP address from the address range of the VPC (we discussed this already, right? a private address is always assigned from the range used when creating the Anypoint VPC) as well as one public IP address, which is not in the control of the VPC administrators. Just like for the SLB, for the DLB as well, the HTTP and HTTPS requests that hit ports 80 and 443 respectively are forwarded by the CloudHub dedicated load balancer to the Mule applications, but via ports 8091 and 8092 respectively, like we discussed before.
Because the CloudHub dedicated load balancer sits within the Anypoint VPC, the traffic between it and the CloudHub workers is internal to the VPC and can therefore use the customary internal server ports 8091 and 8092. Okay? It stays completely within the VPC. Flexible mapping rules can be defined for DLBs as well, for manipulating the request URLs and selecting the CloudHub Mule applications to whose workers the requests are forwarded. It's not necessary that, if your worker URL is something long, the same has to be carried forward; we can manipulate it and give a short URL path or something in the base URL on the DLB.
Then, based on the resource in the API, the request can be forwarded to the correct worker by defining proper mapping rules. The upstream protocol for the VPC-internal communication between the CloudHub dedicated load balancer and the CloudHub workers can be configured to be HTTPS or HTTP. Okay, so it is up to your setup.
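As a rough illustration of what such a mapping rule does, here is a sketch of prefix-based URL rewriting in the spirit of DLB mapping rules. The rule format, app names, and paths are all invented for this example; they do not reflect the DLB's actual rule syntax:

```python
# Illustrative sketch of DLB-style mapping rules: a short public path is
# rewritten and routed to a named CloudHub app. Rule format and app names
# are hypothetical, invented for this example.
MAPPING_RULES = [
    # (public URL prefix, target app, rewritten base path on the worker)
    ("/orders/", "orders-exp-api-v1", "/api/v1/"),
    ("/",        "portal-app",        "/"),
]

def route(request_path: str):
    """Match the first rule whose prefix fits, then rewrite the path."""
    for prefix, app, upstream_base in MAPPING_RULES:
        if request_path.startswith(prefix):
            # Keep the remainder of the path after the matched prefix.
            return app, upstream_base + request_path[len(prefix):]
    return None

print(route("/orders/123"))  # ('orders-exp-api-v1', '/api/v1/123')
```

Rule order matters here, just as a catch-all rule would: the most specific prefix is listed first so the generic `/` rule only catches what nothing else matched.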
So you can decide that, okay, the clients will hit the CloudHub DLB with HTTPS, and then you terminate TLS there and proceed with just HTTP for the internal communication from the DLB to your workers. Or you can keep HTTPS all the way. This is unlike the shared load balancer, where the scheme is maintained all the way until the request hits the workers. The IP addresses of the API clients permitted to reach a CloudHub dedicated load balancer can also be whitelisted, okay? So we have a feature where we can create, say, two DLBs.
One DLB for the internal organization, where only the internal teams can use the DLB URL to access the APIs, and one for the public, which can be accessed by anyone over the internet. In such scenarios we can whitelist a DLB with the particular IPs of API clients: if your organization has an IP range, then we can whitelist one of the DLBs with that IP range, so that only API clients within your organization can hit it (that is the internal DLB), while the public one can be accessed without whitelisting.
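The whitelisting idea boils down to a CIDR membership check, which Python's standard `ipaddress` module can illustrate. The CIDR range and client addresses below are made-up documentation-range examples:

```python
import ipaddress

# Illustrative whitelist for an "internal" DLB; the org's CIDR range and
# the client addresses are made-up examples (TEST-NET documentation ranges).
INTERNAL_WHITELIST = [ipaddress.ip_network("203.0.113.0/24")]

def is_allowed(client_ip: str) -> bool:
    """Return True if the client IP falls inside any whitelisted range."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in INTERNAL_WHITELIST)

print(is_allowed("203.0.113.42"))  # True  (inside the org's range)
print(is_allowed("198.51.100.7"))  # False (outside, rejected)
```

The public DLB in the two-DLB setup would simply have no such restriction configured.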
Okay, the CloudHub dedicated load balancers also perform TLS termination, just like the SLB. But the benefit here, the extra feature we get, is that CloudHub dedicated load balancers must compulsorily be configured with server-side certificates, a public/private key pair, for the HTTPS endpoints that are exposed via the DLB. Okay? If you are exposing HTTPS via a DLB, then you compulsorily have to import your server-side certificates. Optionally, you can import client-side certificates as well, if you want to enforce TLS mutual authentication. If a client wants to do two-way TLS mutual authentication, then the client-side certificates can be added to the CloudHub dedicated load balancer so that it performs TLS mutual authentication.
What you're seeing in front of you in the picture is an API implementation, a Mule application (app), which is deployed to an Anypoint VPC under the management of the US Anypoint Platform control plane. Okay, so this particular app exposes HTTP and HTTPS endpoints on ports 8081 and 8082 for the public, as well as on ports 8091 and 8092 for VPC-internal traffic. Okay, these blue arrows depict the default access routes for API clients inside and outside this VPC, both directly hitting the CloudHub workers.
And you can see as well that they are hitting the workers via the CloudHub shared load balancer and also via the CloudHub dedicated load balancer. Okay, but the DLB actually needs mapping rules; that's why, within the DLB icon, you see small boxes and arrows, which represent the mapping rules, though we are not showing the mapping rules explicitly in this diagram. Okay, that's about the load balancers on CloudHub. Let us move on to the next lecture in the course. Happy learning.