MCPA MuleSoft Certified Platform Architect Level 1 – Implementing Effective APIs Part 7

  1. Static Fallback Results

The last option is using a static fallback result. After all the previously discussed options for recovering from a failed API invocation have themselves failed, the only thing left that can be done, if applicable for your API, is to send a prepared static result back to your API client as a response. So what kind of API is generally fit for this last strategy? Suppose your API is a GET API that retrieves countries, states, currencies, or some product catalog.

Okay, these can be static, because the list of countries, states, or currencies is well known and rarely changes. So the response can be prepared as a static result in your code and returned. All right? This way, as a very last fallback option, you can at least satisfy your API client, or keep accepting orders, by using the static result.

This may cause some discrepancy in certain edge-case scenarios, but at least you can weigh the trade-off: perhaps 90% of your orders will be processed properly and only 10% may fail. That is still a profit for the company compared to rejecting 100% of the orders during the downtime of the API, right? So how can we do this? For API clients which are implemented as Mule applications executing on a Mule runtime, there are a couple of options for loading the static result.

One option is to use a properties file, which we generally use anyway, to configure something like a comma-separated list of countries in a property. Then we can read that in a Transform Message (DataWeave) component and send it as the result. Or you can use a caching mechanism: not caching the actual transaction, but caching the static result itself as a prepared response at application startup, so that we don't parse the properties file every time; we just read the response from the cache and send it back.
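As a language-neutral sketch of the idea (in a real project this would be a Mule flow with a Transform Message component and the Cache scope; the file contents, property key `fallback.countries`, and class names here are invented for illustration), loading the comma-separated fallback list once at startup and serving it when the backend fails could look like:

```python
# Sketch: load a static fallback list from properties-file text once at
# startup and cache it, so we don't re-parse the file on every failure.
# The property key "fallback.countries" is illustrative, not from the course.

def parse_properties(text):
    """Parse simple key=value lines (a minimal .properties reader)."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            props[key.strip()] = value.strip()
    return props

class CountriesApi:
    def __init__(self, properties_text):
        # Cached at application startup: the prepared static response.
        props = parse_properties(properties_text)
        csv = props.get("fallback.countries", "")
        self._static_fallback = [c.strip() for c in csv.split(",") if c.strip()]

    def get_countries(self, backend_call):
        """Try the live backend; on any failure return the cached static list."""
        try:
            return backend_call()
        except Exception:
            return self._static_fallback
```

The point of caching at startup is exactly what the lecture describes: the file is parsed once, and every failed request afterwards is served from the in-memory copy.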

This is the last thing we can do to protect the API from failure and keep it fault tolerant. Okay, so this is the order: first timeouts and retries, then the circuit breaker, then the fallback API (if possible, call the fallback API in parallel). If that is also not working, go to the last cached result of that particular API. And if that is also not possible, go to the static fallback result: you set the static result in the code and just send it back. Okay? You just have to see which of these are applicable and implement them in this order to make your API as robust as possible. Okay? Happy learning.
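The ordering above can be sketched as a chain of strategies tried in sequence. This is a simplified illustration (timeouts and the circuit breaker are folded into the retry loop for brevity, and all names are invented); in a real Mule app each step would typically be its own flow or Try scope:

```python
def call_with_fallbacks(primary, fallback_api=None, cached_result=None,
                        static_result=None, retries=2):
    """Try the recovery strategies in the order discussed in the lecture:
    1. primary invocation with retries
    2. fallback API
    3. last cached result
    4. static fallback result
    """
    for _ in range(retries + 1):
        try:
            return primary()
        except Exception:
            continue  # retry the primary invocation
    if fallback_api is not None:
        try:
            return fallback_api()
        except Exception:
            pass  # fallback API is also down; keep going
    if cached_result is not None:
        return cached_result
    if static_result is not None:
        return static_result
    raise RuntimeError("all fallback strategies exhausted")
```

Each later strategy only runs if everything before it failed, which is exactly the "order" the lecture asks you to apply.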

  1. CQRS and Event Sourcing

Hi, this is our last lecture in this section. In this lecture, let us understand what CQRS is, which stands for Command Query Responsibility Segregation, and how it helps in our integration projects, especially in API-driven, architecture-based projects, as well as in event sourcing projects. Event-driven architectures also support this particular CQRS pattern, and it comes in very handy. Before going to that, let us understand the traditional way we used to implement projects a little while back. What you are seeing on this slide is a typical UI, where the UI interacts with the back end, obviously via some middleware or application, to update some details as well as to read details from a particular data source.

How things used to be done (and, to be frank, some products even today implement it the same way) is to have a single application, module, or interface, whatever you'd like to call it, which interacts with that particular data source. Okay? The same connection pool is used to establish connections to the back end database or system, which could be an ERP or some other system. Using those connections, they do the writes, meaning updating or creating things, and using the same connections and the same application or module, they do the reads as well, querying details from that data source. This works well for small applications, but if the application is bigger or the transaction volumes are high, it becomes very problematic, with issues like performance degradation and lock contention on the tables or data store where we are writing and reading the data.

So, this is the old style: the UI interacts, the application accepts the request and writes it into the DB, and whenever a read request comes in, it queries the DB and returns the result. Okay? Usually the same connection pool is used, and a single monolithic application interacts with the back end system or data source. In this model, if there is a huge demand on the data or a heavy load to be handled, the whole application must be scaled: clustered horizontally, or scaled up by increasing resources like CPU or memory. And it all happens synchronously, so a lot of disadvantages kick in.

Okay? Now let us see how CQRS handles this. CQRS stands for, again, Command Query Responsibility Segregation. It is a pattern or a style, not a technology or an architecture. The style separates the responsibilities of the command side, meaning actions to be performed, from the query side, meaning reading data. How does it do it? It suggests that the command model, the create-or-update kind of actions, be implemented as a separate module, interface, or application, and that the query part, the reads, be implemented in a separate interface or application with its own connection pool and its own management. To be frank, it even recommends separate servers altogether if required: one application running on one server for the query model and one for the command model. The advantage we get from this is that we can optimize each model as per the requirement, meaning the reads and writes can be optimized and scaled independently.

Say in your project or business scenario your queries are more frequent or more demanding than the writes, meaning the creates and updates; then with the CQRS approach you can invest your money and time to scale the query model alone, either vertically or horizontally, and keep minimal resources for the command model. It could also be the other way around: if you have more demand on the command model than the query model, you can do so. Similarly, another advantage is that tomorrow you can easily separate the data sources for the command model and the query model. Meaning, if you don't want them to share the same live DB or live back end system for both commands and queries, and you want to eliminate, say, row locks or table locks, then you can have two data sources that get synced from time to time.
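A minimal sketch of the separation (class names and the dictionary-as-store are invented for illustration; in practice these would be separate Mule applications, each with its own connection pool, scaled independently):

```python
class CommandModel:
    """Write side: owns the connection to the active data store."""
    def __init__(self, active_store):
        self.active_store = active_store

    def create_order(self, order_id, payload):
        self.active_store[order_id] = payload


class QueryModel:
    """Read side: reads from its own store, which may start out as the
    same live store and later be swapped for a read replica."""
    def __init__(self, read_store):
        self.read_store = read_store

    def get_order(self, order_id):
        return self.read_store.get(order_id)


# Starting point: both models point at the same live store...
shared_store = {}
commands = CommandModel(shared_store)
queries = QueryModel(shared_store)
# ...and tomorrow QueryModel can be constructed with a replica store
# without touching the command side at all.
```

The design choice being illustrated is that the two classes share no code path, so each can later get its own data source, its own tuning, and its own scaling.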

They are synchronized in near real time, so that the command model talks to the DB that accepts the new records, the active, real-time one, and the near real time replica DB serves as the database for the query model to read from, so that there is no conflict between the writes and the reads, and it will be faster as well. That read replica DB can even have heavier indexing or more resources on the data source side. Correct? So this way you can optimize them independently. To be frank, you can have a completely different query model altogether, because it has its own connection and its own configuration; you can even point it at some third party tool. It does not have to be an actual ERP replica or anything like that.

Correct. So these are the advantages you get: independent scalability for each of them, and independence of persistence mechanisms, meaning, like I said, one can be the live ERP that the command model interacts with, and the other can be a completely different query store; you can have, say, a Neo4j database or some other search database or NoSQL database, right? And commands are typically implemented by queuing and executing asynchronously, which makes your UI or front end user interactions much faster.

Because now you have independent models for command and query, and they are synchronized in near real time, you can accept the command from the UI, just queue it, and process it asynchronously, so that you throttle your requests and process them nicely without overloading the back end system. And queries can be optimized for the exact needs of the API clients. Like I said, because the query side is altogether a different model, it can read from different back end data sources, and we can optimize it the way the API clients want, right? So this is the advantage you get.
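Queuing commands and processing them asynchronously can be sketched like this (a plain in-memory `queue.Queue` and a worker thread stand in for Anypoint MQ or a JMS queue plus a Mule flow; the function names are invented):

```python
import queue
import threading

command_queue = queue.Queue()
write_store = {}  # stands in for the back end the command model writes to

def command_worker():
    """Drains queued commands and applies them to the back end at its own
    pace, throttling the load instead of writing synchronously per request."""
    while True:
        cmd = command_queue.get()
        if cmd is None:  # shutdown signal
            break
        order_id, payload = cmd
        write_store[order_id] = payload
        command_queue.task_done()

def accept_create_order(order_id, payload):
    """Front end handler: just enqueue the command and return immediately,
    which is why the user interaction feels fast."""
    command_queue.put((order_id, payload))
    return {"status": "accepted", "orderId": order_id}

worker = threading.Thread(target=command_worker, daemon=True)
worker.start()
```

The caller gets an "accepted" acknowledgement right away; the actual write lands in the store whenever the worker catches up, which is the eventual-consistency trade-off discussed below.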

I don't want to call it a disadvantage, but there are a couple of things to remember here. One thing is that the data between the read and write models is only near real time: the read side will be slightly out of date compared to the active data source, which is where the writes happen. Okay? Now, CQRS is again a design choice, just like the fault tolerance approaches we discussed, where you have to choose what fits and depends on your design decisions. So the thing you have to remember before choosing this design is that with this model, you have eventual consistency between the reads and writes, meaning they are not in real time sync: the writes are real time, whereas the reads are near real time.

They will reach eventual consistency. Okay? And the second thing is that you may have to implement your API separately for reads and writes, meaning commands and queries, right? Like I discussed, to make them independently scalable, or to have different implementations, or to point to different data sources; for whatever reason, you may have to separate them. All this is to be remembered when you choose the CQRS style. Okay, let us now see how the CQRS style fits our Create Sales Order API, which we have been discussing across the course. This is a typical model, so how can you implement CQRS? What you can do is have the Retrieve Sales Order, which is the read part, the query model, and the Create Sales Order, which is the command model, as separate system APIs.

Okay? In general as well, if you remember when we discussed system API implementation and what the right way to do it is, we recommended having each functionality as a separate API, a separate system API. Correct. That way, Retrieve is a separate API and Create is a separate API. But if for whatever reason your organization has decided to have a single system API with Create and Retrieve as separate resources in the same API, then CQRS won't fit that particular API.

For this specific Create Sales Order and Retrieve Sales Order, you may have to split them into two separate APIs if you want the CQRS behavior. Okay? This way, the interactions with the ERP happen in separate Mule apps or Mule APIs. And anyway, in the process layer, the Sales Order API can keep them as separate resources; we may not split them there, because the actual abstraction happens in the layer below, right? The bottom layer, the system layer.

So there itself we have made provision to separate the resources into separate APIs. The process API can be a single API whose specification has Create Sales Order and Retrieve Sales Order as separate resources. So, not an issue there. Same with the experience API as well. And the end users communicate using one single API specification. All right, now.

So tomorrow, if you want to scale the Retrieve Sales Order query model for high availability, even ultra high availability, by having more workers, like five workers, and vertically scale it with 0.2 vCores instead of 0.1 vCore, you can do so, and leave the Create Sales Order app with minimal resources. Right? This is one way. And this CQRS setup can actually evolve and mature over time as per your needs. Here, what you are seeing is both of them interacting with the same ERP; maybe you start with this model.

Now, tomorrow, let's say you slowly move to multiple ERPs, which is nothing but one for the queries, an exact read replica of the active ERP that the command model interacts with. Okay? So all the Create Sales Order commands go to the active ERP. And say your ERP has a feature to maintain a read replica ERP, with real-time synchronization out of the box as part of the ERP product's capability. Then the data will be synced to the read replica ERP, and your query model, Retrieve Sales Order, will interact with that separately. Everything else stays the same.

The only thing is that you would most probably have to change the connection details inside your Retrieve Sales Order and Create Sales Order APIs, okay, to point to the respective ERPs. Now, let us say your ERP does not have that capability, and why limit it to ERPs; it could be some other system or a home-built system as well, right? It may not have the capability to do read replica synchronization automatically. That is where you can leverage the concept called event sourcing. Okay? Event sourcing may be a new term for you, or you may have already heard it. This terminology has so far mostly been used in the database world, okay? Because the out-of-the-box near real time synchronization we have been talking about, the read replica between databases, is nothing but event sourcing in the DB world.

So what happens is, as and when updates or new records come in, the database internally publishes events saying, hey, I have a new create, I have a new update on an existing record, this is the record ID, and so on. Right? Then, using those events, the database vendors have built this near real time synchronization by capturing the events and syncing them onto the copy databases. Okay? Now people are bringing the same concept up into the application layer as well, so that applications can also follow it and implement their own way of synchronization.

So you can leverage the same idea for your homegrown legacy ERP, legacy database, or whatever system it is. In such places, if they do not have out-of-the-box near real time synchronization like a read replica copy, then you can again use MuleSoft to write a small adapter, call it a sync adapter, which listens to those events. As the commands come in from the process layer API to the Create Sales Order system API, then, using event sourcing, each command can be published onto Anypoint MQ or another messaging system, and your sync adapter app can listen to it and write it to the third party data source, which will act as your read or query model database, okay? That way, the adapter keeps them in near real time sync, giving you eventual consistency, and your query model points to that third party data store where the synchronization is happening.

This way, event sourcing helps you achieve eventual consistency and synchronize your multiple data sources using the sync adapter. Okay? It's a custom-written one. It is not an API or anything; it's just a typical Mule app you write to do the sync by listening to a queue, which can be Anypoint MQ or a third party JMS queue, okay, and putting the data into the other data source. All right? Nothing else changes with this. If you want to scale your applications separately, say you want to scale the Retrieve one to five workers and 0.2 vCores, it can be done, while leaving the Create one with 0.1 vCore and two workers, right? So this level of flexibility comes with CQRS, and demands can be met as per the need, et cetera. Okay? So this is what CQRS and event sourcing are about. Happy learning.
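The sync adapter idea can be sketched end to end: the command side writes to the active store and publishes an event onto a queue, and the adapter consumes the events and applies them to the read store. The event shape, store names, and function names below are invented for illustration; in a real deployment the queue would be Anypoint MQ or JMS and each function a separate Mule app:

```python
import queue

event_queue = queue.Queue()   # stands in for Anypoint MQ / a JMS queue
active_store = {}             # live system the command model writes to
read_store = {}               # replica / third party store the query model reads

def create_sales_order(order_id, payload):
    """Command model: write to the active store, then publish an event."""
    active_store[order_id] = payload
    event_queue.put({"type": "OrderCreated", "id": order_id, "data": payload})

def sync_adapter_drain():
    """Sync adapter: consume pending events and apply them to the read
    store, keeping it eventually consistent with the active store."""
    while not event_queue.empty():
        event = event_queue.get()
        if event["type"] == "OrderCreated":
            read_store[event["id"]] = event["data"]

def retrieve_sales_order(order_id):
    """Query model: reads only from the (near real time) read store."""
    return read_store.get(order_id)
```

Notice that a read issued before the adapter runs sees stale data; once the adapter drains the queue, the two stores agree. That lag is exactly the eventual consistency the lecture warns you to account for.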
