MCPA MuleSoft Certified Platform Architect Level 1 – Implementing Effective APIs

  1. API Implementations

Hi. Welcome to the new section in this course. In this section, we will focus heavily on the implementation of effective APIs. The objectives of this particular section are to understand auto-discovery of API implementations implemented as Mule applications, to see how API connectors are helpful, to understand CloudHub, to see how we can choose an object store in a particular CloudHub setting, and then to apply strategies that help API clients guard against failures in API invocations.

We also cover the role of CQRS, which means Command Query Responsibility Segregation, and the separation of commands and queries in API-led connectivity. We then look at the role of event sourcing. Let us first understand API implementations: how we can develop API implementations for Mule runtimes and deploy them to CloudHub. API implementations can be of two types. They can be developed as Mule applications for the Mule runtime, in Anypoint Studio or in the flow designer on the Anypoint Platform, and deployed to an Anypoint Platform runtime plane such as CloudHub.

Along with this, they can be managed with the API management and analytics that are provided on the Anypoint Platform control plane. We have already seen the control plane aspects like API Manager and analytics in the previous lectures, right? The other type are developed for non-Mule runtimes, such as Node.js, Spring Boot, et cetera, and then deployed to the matching runtimes outside Anypoint Platform, on whatever technology they are built on. However, although they are not deployed onto the Anypoint Platform runtime plane like CloudHub, they can still leverage the Anypoint Platform control plane.

That is, they can use the API management, analytics, et cetera, provided by the Anypoint Platform control plane, but indirectly they will still rely on the Anypoint Platform runtime plane. How? Because for non-Mule runtimes, the applications use API proxies to enforce the policies and so on. The API proxies, like we discussed in the previous sections and lectures, are nothing but a component that runs on the Mule runtime, which needs the Anypoint Platform runtime plane. Okay? Now let us go to the next part, which is auto-discovery of the API implementations. We have runtime plane components where the Mule runtimes run, and we have control plane components that help to manage and analyze the APIs.

How do these two get linked up? How does each API implementation get linked up to a particular API instance on the control plane from the runtime plane? You already know the answer to this. We have seen some demonstrations in the previous lectures with live code written and deployed on the runtime and then managed in API Manager, where in the code that we wrote in Anypoint Studio, if you recall, we provisioned the API name and API version in the Anypoint auto-discovery configuration in Studio. Correct? So that is what we are going to discuss.

So the API implementations that are developed as Mule applications and deployed to a Mule runtime should always be configured to participate in auto-discovery. This can be done in the global configuration elements of the project, where we add the configuration called auto-discovery and provide the API instance version and name so that it gets auto-registered, like we did in our previous demonstration. If we do not have the API name and version in hand at the time of implementation, then we can leave placeholders in the configuration and substitute them via runtime properties in Runtime Manager.
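As a sketch, a Mule 4 global configuration for auto-discovery looks roughly like the following. In Mule 4 the API instance is identified by a numeric API ID (in Mule 3 it was the API name and version as described above), and the `${api.id}` placeholder and flow name here are assumptions for illustration; the placeholder would be substituted via a runtime property in Runtime Manager:

```xml
<!-- Illustrative global element in the Mule application (Mule 4 syntax) -->
<!-- The api.id property is a placeholder resolved from Runtime Manager properties -->
<api-gateway:autodiscovery
    apiId="${api.id}"
    flowRef="main-flow"
    doc:name="API Autodiscovery" />
```

With this element present, the application registers itself with API Manager at startup and starts receiving the policies configured for that API instance.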

Okay? So when the API implementation starts up, it automatically registers with Anypoint API Manager as an implementation of a given API instance, which means it supplies the API version and API name for a given environment. Then it receives all the API policies configured in Anypoint API Manager for that API instance. If you remember, we said the actual policies are never bundled with the API implementation; they are separate. Right? We noted that at runtime, when the request hits the API implementation from the API client, stored metadata is used to download the policies from API Manager.

That metadata is nothing but this auto-discovery. Once auto-discovery registers the implementation, it continuously polls, receives all the API policies configured in API Manager for the instance, and downloads them to the API implementation. One thing you have to know: by default, the Mule runtime refuses API invocations until all API policies have been applied. This is called the gatekeeper feature of the Mule runtime. Okay? You have to remember this because it is an important term both for your projects and for your certification exam. So whenever you hear the term gatekeeper in the MuleSoft Anypoint Platform, it means this: the gatekeeper's duty is to ensure that if auto-discovery is configured and there are policies applied for that API instance in API Manager, then unless all of the downloaded policies are properly applied on your Mule runtime, the requests or API invocations from the client will be refused.
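If you ever need to relax this behavior, for example in a local test environment, the Mule runtime exposes a system property controlling the gatekeeper. As a hedged sketch, assuming a standalone runtime's wrapper.conf (the additional-property index 99 is a placeholder for an unused slot):

```properties
# Illustrative sketch: gatekeeper mode for the Mule runtime (wrapper.conf)
# "disabled" means invocations are not blocked while policies are pending.
# Not recommended in production, where the default blocking behavior protects the API.
wrapper.java.additional.99=-Danypoint.platform.gatekeeper=disabled
```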

So this suggests that auto-discovery is a form of auto-registration of an API implementation with API Manager. This is how the bridge gets established between the runtime plane and the control plane, between the API implementation on the runtime and the API instance in API Manager. Okay? What else you need to understand is API connectors, and how API connectors help in the implementation of these APIs. The connectors are pre-built processors that are available for Mule applications. Most of them are pre-built, but should you need a custom component, you can still create custom-developed components for your projects. But MuleSoft already comes with an enormous number of connectors, so most probably your implementation can be achieved with one of those.

Anypoint Platform has over 120 Anypoint connectors, many of which are bundled with Anypoint Studio already, while others are available in Anypoint Exchange and the Maven repository of MuleSoft. So whatever is not available in Studio already can still be downloaded from Anypoint Exchange. We can go to the palette in Anypoint Studio and choose to download or add a connector. Then Studio prompts us to log into Exchange. Once we log into Exchange, we can choose the connector we want that is on Exchange but not in Studio, and it downloads and becomes available for us. The connectors come in different categories, such as Premium, Select, MuleSoft Certified, and Community. The Community and MuleSoft Certified ones are mostly available and can be downloaded right away, whereas Premium and Select connectors may require additional licensing or entitlements.
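Connectors pulled from Exchange ultimately become Maven dependencies of the Mule project. As an illustrative sketch, the Database connector would appear in the project's pom.xml roughly as below; the version number is an assumption, so check Exchange for the current release:

```xml
<!-- Illustrative pom.xml dependency for the Database connector (version is an example) -->
<dependency>
    <groupId>org.mule.connectors</groupId>
    <artifactId>mule-db-connector</artifactId>
    <version>1.13.0</version>
    <classifier>mule-plugin</classifier>
</dependency>
```

The `mule-plugin` classifier is what marks the artifact as a connector the Mule runtime can load.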

So now you can brainstorm and see which connectors could come in handy for our business scenario, which is sales orders, and write down a list of them. Or, if you are doing a project in parallel with this course, you can try to implement one of the connectors. Off the top of our heads, we can think of a database connector, for example, to store some state in external databases; a Web Service Consumer connector for SOAP, because we may call some external APIs; and some ERP connectors, because we have an ERP in our scenario.
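To make this concrete, here is a minimal sketch of what using the Database connector for the sales order scenario could look like. The connection properties, table, column names, and flow name are all assumptions for illustration, not part of the course material:

```xml
<!-- Sketch only: store an incoming sales order via the Database connector (Mule 4) -->
<db:config name="Orders_DB_Config">
  <db:my-sql-connection host="${db.host}" port="${db.port}"
      user="${db.user}" password="${db.password}" database="orders" />
</db:config>

<flow name="store-sales-order-flow">
  <db:insert config-ref="Orders_DB_Config">
    <db:sql><![CDATA[
      INSERT INTO sales_orders (order_id, payload) VALUES (:id, :body)
    ]]></db:sql>
    <db:input-parameters><![CDATA[#[{
      id:   payload.orderId,
      body: write(payload, 'application/json')
    }]]]></db:input-parameters>
  </db:insert>
</flow>
```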

If it is SAP, there is an SAP connector out of the box. These are some of the out-of-the-box connectors we can use. The existence of Anypoint connectors is one more reason why an organization elects to implement API implementations for the Mule runtime using Anypoint Studio. Okay? Developing on a non-Mule runtime and using only the control plane aspect of the platform does not help as much. Because there are many out-of-the-box connectors for a lot of enterprise systems readily available, it is advisable to go with Mule runtime-based API implementations to make things easy both support-wise and implementation-wise. All right, let us move to the next lecture, which is about the CloudHub technology architecture. Happy learning.

  1. CloudHub Technology Architecture

In this lecture let us discuss the fundamentals of the CloudHub technology architecture. CloudHub provides a scalable, performant, and highly available MuleSoft-hosted option for executing Mule applications, such as those representing API implementations, on Mule runtime instances. For the purpose of this discussion, the term Mule application is used in preference to the term API implementation, because so far in the course and in the previous lectures we have been using the term API implementation to discuss the implementation or functionality of an API. But just for this particular section or lecture, let us use the term Mule application, because CloudHub fundamentally is not concerned with the kinds of services provided by the Mule applications it deals with. Okay? That is why let's use Mule applications.

So CloudHub itself and the Mule applications deployed to CloudHub execute on AWS infrastructure. Okay? This is the basic fundamental we have to understand as of today. As we speak in this course, CloudHub itself and its Mule applications are deployed to AWS infrastructure. CloudHub workers are nothing but AWS EC2 instances of various sizes, okay? Running on Linux, and the Mule runtime runs on a JVM in those EC2 instances on the Linux OS. You can see a table below showing how each worker core we see on CloudHub is related to which EC2 instance type and which EC2 configuration on AWS. So the CloudHub worker sizing as mapped against the EC2 instance types, as we speak today, can be seen in the table. There can be some changes to this in the future.

But as of today, this is the mapping between the cores and the EC2 instances, okay? Now, the maximum number of CloudHub workers for a single Mule application is currently eight. This is fundamental number three: the CloudHub workers can be scaled up to a maximum of eight. The size and number of CloudHub workers can be automatically scaled to match actual load. The CloudHub workers also execute CloudHub system services at the OS (Linux) and Mule runtime level, which are required for the platform capabilities powered by CloudHub. So for what does CloudHub run these other services on the OS apart from the Mule runtime?

One is monitoring: the monitoring of the Mule applications in terms of CPU, memory usage, et cetera, and the number of messages and the errors that occurred, which allows Anypoint Platform to provide analytics and alerts based on these metrics. The other thing it needs is auto-restarting: it has to automatically restart failed CloudHub workers, including a failed Mule runtime, on another functional EC2 instance, okay? Then load balancing, only to healthy CloudHub workers. Then provisioning of new CloudHub workers to increase or reduce capacity. And persistence, like using object stores for message payloads or other data across the CloudHub workers of a Mule application.
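For the persistence capability just mentioned, Mule applications on CloudHub typically use Object Store v2 through the Object Store connector; a persistent store defined in the application is backed by CloudHub's platform object store and shared across that application's workers. A minimal sketch, where the store name, flow name, and key are assumptions for illustration:

```xml
<!-- Sketch: persisting state across CloudHub workers with the Object Store connector -->
<os:object-store name="salesOrderStatusStore" persistent="true" />

<flow name="record-order-status-flow">
  <!-- On CloudHub, a persistent store like this maps to Object Store v2 -->
  <os:store key="#[payload.orderId]" objectStore="salesOrderStatusStore">
    <os:value>#[payload.status]</os:value>
  </os:store>
</flow>
```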

And lastly, also the DNS entries for the CloudHub workers and the CloudHub load balancer, which we will see in detail in the later lectures of this particular section. Okay? So for monitoring, auto-restarting, load balancing, all these activities we discussed, the CloudHub workers run some separate tasks apart from executing the Mule runtime on the OS. Okay? What you are seeing now in front of you is the anatomy of a CloudHub worker used by the API implementation of our Create Sales Order Process API, and the capabilities bestowed on the API implementation by the CloudHub worker, also making use of the various AWS platform services.

If you imagine our Create Sales Order API depicted in a picture in relation to the AWS platform services, this is how it would look in the CloudHub workers. Going forward, every Mule application is assigned a DNS name that resolves to the CloudHub load balancer. Okay? Every Mule application also receives two other well-known DNS names, which resolve to the public and private IP addresses of all CloudHub workers of the Mule application. The private IP addresses are only routable within the AWS VPC in which the CloudHub worker resides. So only within the AWS VPC can the private addresses be accessed.
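As an illustration, for a Mule application named myapp (a hypothetical name; regional variants of these hostnames also exist), the three DNS names follow roughly these patterns:

```text
myapp.cloudhub.io                       -> CloudHub load balancer (entry point for API clients)
mule-worker-myapp.cloudhub.io           -> public IP addresses of all CloudHub workers
mule-worker-internal-myapp.cloudhub.io  -> private IP addresses (routable only inside the AWS VPC)
```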

CloudHub workers of a Mule application can also be assigned static public IP addresses, while private IP addresses are always dynamic. Why does this matter? There might be a real-time scenario where, when we are exposing a particular API, the consumers request the IP address of the provider to whitelist on their side, so that they allow outbound traffic only to that particular IP address. Okay? Or, if our Mule application is a consumer of some other external API, those API providers may request the IP address of our Mule application to whitelist, in order to allow our Mule application to call their API. Then also we have to provide it. Because by default these are dynamic IP addresses, MuleSoft has provided a way to assign static IP addresses so that we can share the same static IP with the external providers.

The DNS entries are maintained by the Route 53 service on the AWS platform. Okay? So all the DNS entries live in Route 53 on AWS. If you look at the picture now, this figure is a very simple depiction of the general high-level CloudHub technology architecture as it serves API clients invoking APIs exposed by API implementations implemented as Mule applications running on CloudHub. The association of client certificates with the CloudHub load balancer requires a CloudHub dedicated load balancer and is not supported on the CloudHub shared load balancer; we already discussed in the previous lectures that the shared load balancer cannot be used for associating custom client certificates.

We need a DLB. All right, before concluding this particular lecture, let us understand one more fundamental, which is the CloudHub Shared Worker Cloud. Okay? What is the Shared Worker Cloud? Every Mule application deployed to CloudHub gives rise to one or more CloudHub workers in an AWS VPC. Correct? Because every application you deploy, like we discussed in the previous slide, results in a new CloudHub worker, which is an EC2 instance in an AWS VPC in a particular AWS region. By default, this is a multitenant AWS VPC shared by all Mule applications from all Anypoint Platform users deployed to that particular AWS region. Okay? It is not dedicated AWS infrastructure for a particular client. Correct. It's a multitenant one. So this is what the term CloudHub Shared Worker Cloud means, okay? You may have your own Anypoint VPC like we discussed in the deployment models before; that VPC is a private VPC, but the platform itself has an overall parent AWS VPC, which is a multitenant worker cloud.

Okay? Your workers are in a multitenant environment. So what CloudHub does is distribute the CloudHub workers of a Mule application over the AWS Availability Zones available in the AWS VPC that the Mule application is being deployed to. Okay? Thereby it implicitly provides higher availability. Now for the firewall rules: in AWS terminology these are Security Groups. If you are acquainted with AWS technology, AWS has this feature called Security Groups, which is a form of stateful firewall. Okay? The firewall rules of the CloudHub Shared Worker Cloud are fixed. How are they fixed? TCP/IP traffic from anywhere to ports 8081 and 8082 is allowed: 8081 is dedicated to HTTP and 8082 is dedicated to HTTPS, and these are public open ports. Okay? So any TCP/IP traffic coming from anywhere to these ports is linked to HTTP and HTTPS on the CloudHub worker. And TCP/IP traffic from within the AWS VPC to ports 8091 and 8092 on each CloudHub worker is also fixed, allowed only within the AWS VPC.

Again, 8091 is tied to HTTP and 8092 is tied to HTTPS. The picture you are seeing in front of you shows the firewall rules for the CloudHub Shared Worker Cloud, which are also the default rules of an Anypoint VPC. Okay? So the default rules come with these, like we discussed: 8081 and 8082 from the open Internet, and within the VPC, 8091 and 8092, for HTTP and HTTPS respectively. All right, now let us move on to the next lecture and discuss the Anypoint VPCs. Happy learning.
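Because of these fixed firewall rules, an HTTP Listener in a Mule application deployed to CloudHub should bind to the reserved `${http.port}` (or `${https.port}`) property, which CloudHub resolves to 8081 (or 8082) for public traffic, rather than hard-coding a port. A sketch, with the config name being an arbitrary example:

```xml
<!-- Sketch: listener configuration for CloudHub; ${http.port} resolves to 8081 on the worker -->
<http:listener-config name="HTTP_Listener_config">
  <http:listener-connection host="0.0.0.0" port="${http.port}" />
</http:listener-config>
```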
