Amazon AWS DevOps Engineer Professional – Configuration Management and Infrastructure Part 6
So let’s look at one last reference file for our EB extensions, and this one is about commands. So let’s paste this into our .ebextensions directory. And here we go. I’ll set the highlighting to YAML so we can read it better. Okay, so it’s called container-commands config number three. And this is the way to specify commands for your EC2 instances to run as part of your Elastic Beanstalk deployments. So you have two types of commands you need to know about: commands and container commands, and knowing the difference is very important going into the exam. So let’s look at commands first. The documentation link is here if you want it, but I’ve extracted the most important part. You can use the commands key to execute commands on the EC2 instance. The commands run before the application and web server are set up and before the application version file is extracted. So this is something that runs before everything else, and it could be used to set up some files, install some packages, or whatever you want.
Okay, the important thing to remember is that the commands run before the application and web server are set up and the application version file is extracted. So here we’ve created a command. I named my command “create a hello file,” and it creates a hello-world file; this could be whatever you want. The command itself is “touch helloworld.txt,” so this will create a text file, and cwd is the working directory where I want my command to be executed, which I’ve set to the home directory. So here we can set up as many commands as we want, in YAML format. There are more options as well; you can look at them in the documentation, but they’re not necessary. So these are commands, and they run before everything is set up. Now we have container commands, and container commands are different.
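As a sketch, a commands section in an .ebextensions config file could look like this. The file name, command names, and paths here are illustrative assumptions, not necessarily the exact ones from the lecture:

```yaml
# .ebextensions/01-commands.config (hypothetical file name)
# These run BEFORE the application and web server are set up
# and before the application version archive is extracted.
commands:
  create_hello_file:
    command: touch helloworld.txt
    cwd: /home/ec2-user          # working directory for the command
  install_jq:
    command: yum install -y jq   # example: install a package early
    ignoreErrors: true           # optional: don't fail the deploy on error
```

Each key under `commands` is a command name of your choosing; `command`, `cwd`, and `ignoreErrors` are among the supported sub-keys.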
So you can use the container_commands key to execute commands that affect your application source code. So this is to modify the source code itself, and container commands run after the application and the web server have been set up and the application version archive has been extracted, but before the application version is deployed. The way Elastic Beanstalk works is that when you deploy a new application version, it goes into a staging directory where it’s extracted. Here you can run the container commands, and after the container commands have run, the staging directory will be copied or moved into the production directory, where the application can then run. So these container commands can be run to modify your application files at deployment time; perhaps you wanted to include some information about environment variables, and so on. So here I’ve created one container command.
It’s called “modify index.html,” and it will append the text “modified content” to the index.html file. So that means my index.html file will have some text added at deployment time that says “modified content.” So that makes sense, right? That’s the first one. Then there’s another kind of container command you need to remember. This one is for database migrations, and the command is simply an echo of “do database migration.” Assume this is a more complex database migration command, possibly interacting with your RDS database. But here, you do not want every one of your EC2 instances to run the database migration, because if you deploy your application to ten instances and they all run your database migration, that doesn’t make sense: a database migration needs to be run only once. And so you can use something called the leader_only parameter to only run the command on a single instance. For this, you just set leader_only to true. So, as you prepare for the exam, keep this in mind: if you need to do something that only needs to be run once as part of your deployment, then leader_only on your container commands is what you need. leader_only exists only for container commands; it does not exist for commands themselves. So now, why don’t we go right ahead and run this? By the way, if you need a reference to understand the distinction between container commands and commands, check out this answer on Stack Overflow. I think it’s pretty well explained. I hope I explained it correctly myself, but if you need to read some more, this is a very popular answer on Stack Overflow that explains the difference between commands and container commands. So, okay, let’s go and try this out. I’m going to log out of this instance; here we go. And we’ll run eb deploy to deploy our new application. So here we go: our application is being zipped up and uploaded to Elastic Beanstalk.
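A hedged sketch of what these two container commands could look like in YAML; the file name and command names are illustrative, and the echo is a stand-in for a real migration:

```yaml
# .ebextensions/02-container-commands.config (hypothetical file name)
# These run AFTER the app archive is extracted into the staging
# directory, but BEFORE the application version is deployed.
container_commands:
  modify_index_html:
    # relative paths resolve inside the staging directory,
    # so this edits the application's own index.html
    command: echo " modified content" >> index.html
  database_migration:
    command: echo "do database migration"  # placeholder for a real migration
    leader_only: true    # run on a single (leader) instance only
```

Note that leader_only is valid only under container_commands, which is exactly the exam point made above.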
So let’s wait a little bit, and then we’ll SSH into our instance to see if the commands have been executed. The environment update is now complete, and I’m going to SSH back into my instance. So let me look at the files here. There is now a helloworld.txt file: that was my command that has been executed, the one that said “touch helloworld.txt.” And the container commands should have modified my index.html file, so let’s check it out by going onto the internet and refreshing this page. And now it says “Hello, world!” with “modified content.” So it was able to modify my index.html file at deployment time using the container commands. And finally, for the database migration, well, we won’t see anything because we just did an echo. But you can trust me that it was only run on one EC2 instance because of the leader_only: true setting in this container command. Okay, so that’s it. Going into the exam, you must remember the distinction between container commands and commands. Also, keep in mind that container commands have a leader_only setting that allows you to run a command on a single instance out of perhaps ten. All right, that’s it. I will see you in the next lecture.
So here’s a quick primer on some Elastic Beanstalk features you should be familiar with. Let’s go take a look at them. They’re just small features, but it’s good that I mention them, so my conscience will be clear. So if we go to Hello World and create a new environment, we have the option to choose a web server or worker environment. I’ll be going into worker environments later in this course, so we’ll just focus on web server environments. Click on “Select,” and you can select the domain name and a preconfigured platform.
So we’ll just choose PHP as we did before. We’ll choose a sample application, and then I’ll configure more options. This is just to show you the settings. So, in this case, we have a configuration preset that is low cost and free tier eligible. For the capacity, the environment type is single instance, and it will be assigned an Elastic IP address. So this is one way, and as soon as we switch to high availability, we have a load balancer and an Auto Scaling group attached to our Elastic Beanstalk environment, which is better for production-ready environments. So low cost is good for dev, while high availability is good for production. And that’s the only thing I wanted to show you here, so I’ll go ahead and cancel this. Next, I will show you the application versions. So anytime we upload a new version and deploy it, it gets created as an application version. And as you can see here, we’ve done five of these deploys, so we have five application versions being created, and we can see that the latest application version is currently deployed to development. And so this is great.
So as we start uploading more and more versions, you can imagine that over time there will be hundreds of them, and it turns out that there is a limit in Beanstalk: you can only have a thousand application versions. So at some point, you need to get rid of these application versions, and for this, you have something called an application version lifecycle. By clicking on Settings here, I have the application version lifecycle settings, and you can create a lifecycle policy that limits the number of application versions to, say, 200 versions, or limits them by age and says, okay, after 180 days, get rid of them. And this just deregisters them; you also have the option of keeping the source bundle in S3 or deleting the source bundle in S3 as well.
So it’s a good idea to retain the source bundle in S3 if you want to delete the application version from Beanstalk but still be able to re-upload it at some point and roll back to it if you need to. And finally, you need to create a service role, or at least use an existing service role, to perform these application version lifecycle settings and delete these application versions. So here I’ll just enable a lifecycle policy and say I only want 200 application versions. And don’t worry, it will never delete an application version that is currently deployed to an environment. So we do this to keep the number of application versions under control. But the reason we like to have many application versions in here is to be able to deploy them back to our environments, which effectively gives us a rollback. Okay, next, for this environment, if I go and click on it and click on Actions, I have the option to clone the environment. So cloning the environment means taking all the settings and applying them to a new environment, which is a really easy and quick way to create a new environment. We can also clone environments using the EB CLI, and we’ll be seeing this later on.
So I wanted to show you the clone option, but I’m not going to use it. We can also terminate the environment, and everything will go away; in the EB CLI, this is the same as doing eb terminate. Or we can rebuild the environment, and rebuilding the environment will delete everything and recreate it for you. As stated here, if you have any attached Amazon RDS database instances, they will be deleted and then recreated. So this brings us back to the question of whether we want resources like RDS, DynamoDB, or SNS topics to be internal or external to our application environment. So I’m not going to click on “Rebuild,” but you get an idea of what it would do. And finally, the last thing I want to show you is managed updates. It is possible for Elastic Beanstalk to apply patches to your environment, and for this, you need to enable managed updates.
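As a sketch, managed platform updates can also be enabled from an .ebextensions option_settings block rather than the console; the weekly window value here is an illustrative assumption:

```yaml
# Hypothetical .ebextensions config for managed platform updates
option_settings:
  aws:elasticbeanstalk:managedactions:
    ManagedActionsEnabled: true
    PreferredStartTime: "Wed:02:00"   # weekly maintenance window (UTC)
  aws:elasticbeanstalk:managedactions:platformupdate:
    UpdateLevel: minor                # minor and patch versions only
    InstanceRefreshEnabled: true      # allow instance replacement
```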
And so you can enable managed platform updates in the configuration tab here, under Managed Updates. Here we go. And we could enable them by saying, all right, let’s set up a weekly update window that begins at 2:00 a.m. on Wednesday, and we want minor and patch version updates only. And if enabled, we can also do instance replacements. So this is a way to keep our Elastic Beanstalk platform up to date, and this is why Beanstalk is such a great managed service: it will do updates for us on a weekly basis if we want, in a rolling manner, and they will be managed updates. I’m not going to enable it either, but I’m just showing you that it exists, and that there is a way to keep your instances patched using managed updates in Elastic Beanstalk. So that’s it for this lecture. Just a little bit of randomness here and there, but now that you’ve seen these features, you’re expected to know them. I will see you in the next lecture.
Now, here is a very popular topic on the exam: the exam will ask you about deployment modes for Elastic Beanstalk and which deployment mode is better for which situation. So I want you to understand each and every option for Elastic Beanstalk deployments, because that’s really key to answering these questions the right way. We’ve seen the single instance deployment, and it’s great for development because you get one easy-to-manage instance with one Elastic IP and one security group, maybe talking to your database. All of this is in one AZ, so it’s very easy to reason about, and the DNS name maps straight to the Elastic IP. There’s a second setup, which is high availability with a load balancer, and it’s great for production-type deployments. In this case, it’s a bit more complicated, but we’ve seen this architecture before.
We have an Auto Scaling group, or ASG, and it will span multiple Availability Zones; in each AZ, we’re going to get one or several EC2 instances, each with their own security group, and they may talk to an RDS database, maybe set up in multiple AZs as well, such as one master and one standby database. Okay, so all this is pretty familiar. And then the Elastic Load Balancer will talk directly to the ASG and connect to all the EC2 instances, and that ELB will expose a DNS name, which will be wrapped by the Elastic Beanstalk DNS name. So this is what we’ve seen in dev and production, and obviously you can customize this a little bit, but what happens when you want to update these deployments? There are four or five different types of deployments, and you must be familiar with all of them. The first one is called “all at once,” where you deploy your whole application in one go. Now don’t worry, I have diagrams describing all of these in depth; I just want to give you a quick overview. So with “all at once,” it’s the fastest kind of deployment, but instances won’t be available to serve traffic for a bit, so you’ll get downtime. Then you have a rolling update.
With a rolling update, it updates a few instances at a time, also called a “bucket,” and then moves on to the next bucket once the first bucket is healthy and updated. You get a slight twist on this called “rolling with additional batches,” and this is like rolling, but you spin up new instances to cover the batch being updated, such that your application is always available and always operating at full capacity. And finally, you get immutable deployments, where you spin up new instances in a new ASG, deploy the updated version to those instances, and when everything is healthy, replace the entire ASG. So this is a little bit high-level, and you probably don’t have a full picture of what this means yet, which is why I want to take my time and really show you with graphs and diagrams how these work. So let’s talk about “all at once.” Here are our four EC2 instances, all of which run version one, which is blue, of our application.
Then we are going to do an all-at-once deployment. So we want to deploy V2, and what happens? First, Elastic Beanstalk will simply stop the application on all our EC2 instances. I’ve drawn them as gray, as in “they don’t run anything.” And then they will be running the new V2, because Elastic Beanstalk deploys V2 to these instances. So what do we notice? Well, it’s very quick; it’s the fastest deployment. However, the application experiences downtime because, as seen in the middle, the instances are all gray and thus unable to serve any traffic. It’s ideal for quick iterations and development environments where you want to deploy your code quickly and don’t care about downtime. And finally, with this setup, there is no additional cost. Now let’s talk about rolling. The application will basically be running below capacity, and we can set how much below capacity we want to run; that’s called the bucket size. So, let’s take a look. We have four instances running V1, and the bucket size will be two for this example. So the application on the first two instances is stopped; the instances themselves are not terminated.
Since the application on those instances is stopped, they’re gray, but we still have the other two instances running V1. As you can see, we’re running at about half capacity. Then these first two instances will be updated, so they will be running V2, and then we roll on to the next bucket, or the next batch, and that’s why it’s called rolling. As you can see, the bottom two instances now have their V1 application stopped (gray) and then updated to V2. And so at the end, all the EC2 instances have been updated to run the V2 application code. As you can see, the application runs both versions concurrently at some point during the deployment, and there is no additional cost: you still have the same number of EC2 instances running in your infrastructure. And if you set a very small bucket size and you have hundreds and hundreds of instances, it may be a very long deployment. Right now in this example, we have a bucket size of two and four instances, but we could have a bucket size of two and 100 instances; it would just take a very long time to upgrade everything.
Now there’s an additional mode called “rolling with additional batches.” In this case, the application is never running under capacity. Before, at one point, we were only running two of the four instances, so that was below capacity; in this mode, we run at capacity, and we can also set the bucket size. Our application will still run both versions simultaneously, but at a small additional cost: the additional batch, which we’ll see in a second, will be removed at the end of the deployment. And again, the deployment is going to be long. It’s honestly a good option for production. So let’s have a look. We have four V1 EC2 instances, and the first thing Elastic Beanstalk does is launch new EC2 instances with the V2 version. So now, from four instances, Elastic Beanstalk has automatically created six instances for us. There are two more, and you can see that the extra two are already running the newer version. Now we take the first batch, so the first bucket of two, and the application on them gets stopped, and then gets updated to V2.
Then the process repeats, just like in rolling: the application running V1 gets stopped, and then the application is updated to V2. And so at the end, you can see we have six EC2 instances running V2, and the additional batch gets terminated and taken away. So what do we notice? Well, we can see that we are always running at capacity: the lowest number of EC2 instances running the application at any time is four. So sometimes we are running over capacity, obviously.
And this is why there is a small additional cost. It’s very small, but there is an additional cost, and sometimes the exam asks you whether there is an additional cost with these deployment modes. Then we have immutable deployments, and these deployments also have zero downtime. But this time, the new code is going to be deployed to new instances; before, it was deployed on the existing instances, but now it’s deployed on new instances. And where do these instances come from? They come from a temporary ASG. So there’s a higher cost: you double the capacity because you get a whole new ASG, and it’s the longest kind of deployment. However, as a bonus, you get a very quick rollback in case of failures, because to mitigate a failure, Elastic Beanstalk will just terminate the new ASG.
So it’s a great choice for prod if you’re willing to take on a little bit more cost. So here’s the idea: we have a current ASG with the V1 application running on three instances, and then a new temporary ASG is created. At first, Beanstalk will launch just one instance in it, to make sure that one works. And if it works and passes the health checks, it’s going to launch all the remaining ones. So now we have three new instances, and when Beanstalk is happy, it’s going to merge the temporary ASG into the current ASG: it moves all the temporary ASG instances to the current ASG. So now, in the current ASG, we have six instances, okay? And when all of this is done and the temporary ASG is empty, the current ASG will terminate all the V1 instances while the V2 instances are still there.
And then finally, the temporary ASG will just be removed. Finally, there’s something you may hear about in the exam or in the white papers: it’s called blue/green. It’s not a direct feature of Elastic Beanstalk, but I’ll try to give you my best version of it. It basically gives you zero downtime, helps with the release process, allows for more testing, et cetera. The idea is that you deploy a new “stage” environment, so it’s just another Elastic Beanstalk environment, and you deploy your new V2 there. Before, all the deployment strategies were within the same environment; here, we create a new environment. As a result, the new environment, the stage or “green” one, can be validated independently on our own time and rolled back if problems arise. And then we can use something like Route 53, for example, to split the traffic between both environments. So we can set up weighted policies and redirect a little bit of traffic to the stage environment so we can test everything.
And then, when we’re happy, using the Elastic Beanstalk console, you can swap URLs when done with the test environment. So this is not a very direct feature, and it’s actually quite manual to do; it’s not embedded in Elastic Beanstalk. Some documentation will say there’s blue/green, some will say there isn’t, but overall it’s very manual. So, just one diagram; I’m trying to keep it simple. In the blue environment, we have all the V1 instances, and then we deploy a green environment with all the V2 instances, okay? And they’re both working perfectly fine at the same time. And then in Route 53, we’re going to set up a weighted policy to send 90% of the traffic to blue. So we keep most of the traffic going to the instances we know work, and send maybe only 10% of the traffic to the green environment, just to test it out and make sure it’s working and the users aren’t having any problems. And so the web traffic basically gets split 90/10, but the weights are whatever you want. Then, once you’re satisfied with your testing, when you’ve measured everything you want on your V2 environment and you think you’ve nailed it, you basically shut down the blue environment and swap the URL to make green the primary environment. So that’s it for blue/green. It’s pretty involved and, I think, pretty manual, but that’s the way it is. Now, the AWS documentation is sometimes really good, and we get a little summary. So this is the link.
If you look into it, it’s really, really good, actually; you should read the page. There’s a table which is quite nice and summarizes all the deployment options: all at once, rolling, rolling with additional batches, and immutable, which we’ve just covered in depth, as well as blue/green. It tells you what happens if there’s a failed deployment, what the deployment time is, whether there’s zero downtime or not, whether there’s a DNS change, what the rollback process is, and where the code gets deployed. So this table should make a ton of sense to you if my diagrams made sense to you, right? By now you should really understand all the differences between the deployment methods. They’re very important, and the exam asks you a lot of questions about which is better depending on the use case and the requirements. So I hope that was helpful. You are now an Elastic Beanstalk deployment expert. I’ll see you in the next lecture.
Okay, so now let’s put these rolling updates into practice. Let’s go into Configuration, scroll down to Rolling updates and deployments, and click Modify. So you have two kinds of rolling updates and deployments: application deployments and configuration updates, and these are very different. Let’s talk about application deployments first; this is what you saw in the theory lecture you just had. The deployment policy can be all at once, rolling, rolling with additional batch, or immutable. “All at once” means all the instances will be updated at once; this is the quickest, but there will be some downtime.
You can have rolling with a percentage; that means that only, say, 30% of the fleet will be taken down, updated, and restored at any given time, and so on. Or we can have rolling in terms of a fixed number, so only one or two instances at a time will be down. So this is great, but we reduce our capacity, though without incurring any additional cost. With rolling with an additional batch, we have the same kind of settings, percentage or fixed, but we create new instances to perform the updates on. So we’ll have the entire fleet available the whole time, and capacity will not be affected, but we’ll incur some additional cost because we’ll be launching new EC2 instances. And finally, with immutable, we are creating new instances altogether and adding them to our Auto Scaling group. So this is a bit more involved.
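For reference, the same application deployment settings can also be set through option_settings rather than the console; this is a sketch assuming a 30% rolling batch, and the values are illustrative:

```yaml
# Hypothetical .ebextensions config for the deployment policy
option_settings:
  aws:elasticbeanstalk:command:
    # AllAtOnce | Rolling | RollingWithAdditionalBatch | Immutable
    DeploymentPolicy: Rolling
    BatchSizeType: Percentage   # or Fixed
    BatchSize: 30               # 30% of the fleet per batch
```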
There are no in-place updates, so it’s immutable, hence the name. And this will obviously be a more costly option because we’ll have to create new instances for it, but it can be a really good way of doing deployments. In all cases, this will still use the same load balancer and the same Auto Scaling group; in that regard, these updates really don’t change our infrastructure. Now for configuration updates: this is about changes to virtual machine settings and the VPC configuration. So whenever we change our VMs or our VPC settings, Beanstalk will need to replace our instances, okay? And it needs to know how it can replace our instances without downtime. So we can do a roll based on health, a roll based on time, or an immutable update to create new instances altogether. So this is a way for us to choose, but just so you know, there are similarities in how the deployment is done when you update your virtual machine settings or your VPC configuration.
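Configuration (rolling) updates for VM and VPC changes live in a different namespace; here is a minimal sketch, with illustrative batch values:

```yaml
# Hypothetical .ebextensions config for rolling configuration updates
option_settings:
  aws:autoscaling:updatepolicy:rollingupdate:
    RollingUpdateEnabled: true
    RollingUpdateType: Health   # or Time, or Immutable
    MaxBatchSize: 1             # replace one instance at a time
    MinInstancesInService: 1    # keep at least one instance serving
```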
Okay? So immutable will create new instances, while rolling will take an instance down, reconfigure it, and then put it back up. Finally, you can update some deployment preferences and ignore health checks if you want to; that, I think, is pretty risky, but why not. The healthy threshold can be set to OK, Warning, Degraded, or Severe, and the command timeout is how long to allow an instance to complete deployment commands. Okay, so this is good. Now, why don’t we go ahead and just use an immutable deployment policy, just for fun? We’ll apply it, and we’ll be done. So now our configuration changes have been saved, and what I’m going to do is go into my index.html and call this Hello World Version 3, okay? And I’m going to do an eb deploy. So let’s log out of this and do an eb deploy and see what happens. So we are creating a new application version, but the environment is not ready yet because we’re still updating the environment settings, so we need to wait. Let’s wait a minute. Okay, so our environment is ready, so let’s go and do an eb deploy.
This time, this should work. So we’ll do eb deploy, and after the upload is finished, the environment update starts. So we are doing an immutable type of instance update. If we go to EC2, we should start seeing an instance being created. So let’s go to Auto Scaling groups and launch configurations. In the Auto Scaling groups, here we go, we have the one that we had before, and in the launch configurations, a new one has just been created. So this is the new launch configuration that has been created. We’ll wait a little bit, and hopefully, yes, we have a second Auto Scaling group that has been created. This is our temporary Auto Scaling group, okay, and it will create one instance. So this instance is pending. If I go to my EC2 instances, we can see that this instance is being created. Remember, it is part of a new Auto Scaling group, but when it is deployed, Beanstalk is going to move this instance into my other Auto Scaling group. So let’s wait a little bit to see what happens. And if we go back into the environment and refresh, as we can see, it says “created a temporary auto-scaling group” here.
So this is definitely a temporary Auto Scaling group. Let’s go ahead and see what happens. This instance has been created, and now the deployment should happen, so I’ll just pause the video and come back when something important happens. So here we go: application updates are being performed, and one instance has been completed. Let’s go ahead and wait a little bit. And here we are: if I go to my environment, the instance has been deployed, and now it’s been added to the load balancer, which is waiting for the instance to pass the health checks. So if I refresh this page right now, I see V3. So it progressed from V2 to V3, which means that my load balancer has registered my new target; it should then wait a little bit and deregister my other instance. So back into my Auto Scaling groups: as we can see, the instance is in service, it’s healthy, and so on. I’ll have to wait a little longer. What we should see is that our Auto Scaling group deregisters and then terminates the other instance, and at the end, we should only have one Auto Scaling group and only one instance.
So it takes a little bit of time; I’ll pause the video now. As we can see from the log, the instance was moved from the temporary Auto Scaling group to the permanent Auto Scaling group. So if I go into the ASGs here now, we should see that, yes, the instance has been moved from the temporary Auto Scaling group into the permanent Auto Scaling group. So this is great: this has worked, as evidenced by Hello World V3. So everything is fine, and now we wait for the post-deployment configuration to finish. All in all, this immutable deployment has worked, and we have created a whole new instance to deploy our application onto; there were no in-place updates happening to our instances. So immutable takes a lot of time, but it is probably the safest, and also the most expensive, type of deployment. I’m happy to have shown this to you, and I definitely recommend you give the other types of updates a go. We’ve tried immutable, but you can try rolling updates, or rolling with an additional batch, just to see how things work on your own and witness them. All in all, I hope you liked this lecture, and I will see you in the next lecture.
Okay, so there’s one type of deployment we haven’t seen yet, which is the blue/green deployment in Elastic Beanstalk. So let’s take a look at how it works. We have a development environment and a URL for it, and I’m going to open it right now. It says “Hello World V3, modified content.” So what if we wanted to duplicate this environment and create another one? We’ll call it a test environment. Okay, so we’ll modify this index.html; I’ll call it Hello World V4. And then I need to create a new environment, so I can use this command, and I’ll run it.
So I’ll set up the test environment, and the configuration will be a previously saved production configuration. So it’s going to create an environment reproduced from a saved configuration, which is a great thing to see as well. So here we go: the environment is being created, and it’s starting. Let’s go in here and refresh this page. As you can see, the test environment is now being created, and it’s going to be configured using the saved configuration, which is quite nice. So I’ll wait a little bit until all of this is done, and then we can continue with this lecture. Okay, so my environment has now been created, and if I go to my test URL, I can see that it says Hello World V4. My development environment says Hello World V3, and my test environment says Hello World V4. So, for example, imagine that the development environment is our current environment that people are actively using, and the test environment is one we want to test. And we’re testing this environment, and we’re very happy with the Hello World V4 interface.
So we’d like to be able to swap these two environments so that people start using the test environment. One way we could do this in Elastic Beanstalk is to use the swap environment URLs feature. Using this feature, we’re going to swap one environment’s URL for the other’s. The way it works is that it modifies the Route 53 DNS configuration, which may take a few minutes, and while the old TTL is still active, people may see both versions for a short period of time. So it’s not a uniform change: there is a DNS change, and it may take some time to propagate for some people. So I click on Swap, and here we go, the swap is happening. Now what I should do is refresh this page, and now it says Hello World V4. And if I refresh this other page, it still says V4. But in a few seconds, hopefully, when I refresh it, it’s going to say Hello World V3. The reason we’re still seeing V4 here is the DNS change, so let’s wait a little bit until the DNS change takes effect. And a good way to bypass the TTL, because right now it still shows what it did before, is to open an incognito tab and type the right URL, not this one.
So I’ll type in the test URL here, and hopefully this will work. Let’s try it out. Here we go, paste. And here we go: now it says “Hello World V3, modified content.” So to bypass the TTL, we just open an incognito window, and this will resolve the new CNAME, and we get the Hello World V3 message. So the CNAME swap feature worked. And if you click on the dev environment, for example, you can see that it says “completed CNAME swap for Dev and Test,” and if you go to the test environment, it should say the exact same thing. Okay, so that’s a way of doing swaps between these two environments, and it’s a really handy feature. Another thing we could do instead is create a Route 53 record that points to both environments, maybe using a weighted record. We could send a small amount of traffic from development to testing, perhaps 10% of the total, and gradually increase the amount of traffic sent to testing so that, at the end, we have 0% on development and 100% on testing. When we’re ready, we simply switch all the traffic from dev to test. So that is another way of doing a smoother, blue/green type of deployment. So that’s it for this lecture. I hope you liked it, and I will see you in the next lecture.
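To make the weighted-record idea concrete, here is a hedged CloudFormation sketch of a 90/10 Route 53 split between two environments; the hosted zone, record name, and environment CNAMEs are all assumptions, not values from the lecture:

```yaml
# Hypothetical CloudFormation fragment: weighted CNAMEs for blue/green
Resources:
  BlueRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneName: example.com.       # assumed hosted zone
      Name: app.example.com.             # assumed record name
      Type: CNAME
      TTL: "60"                          # short TTL for a faster cutover
      SetIdentifier: blue
      Weight: 90                         # 90% of traffic to blue (current)
      ResourceRecords:
        - blue-env.us-east-1.elasticbeanstalk.com   # assumed EB CNAME
  GreenRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneName: example.com.
      Name: app.example.com.
      Type: CNAME
      TTL: "60"
      SetIdentifier: green
      Weight: 10                         # 10% of traffic to green (test)
      ResourceRecords:
        - green-env.us-east-1.elasticbeanstalk.com  # assumed EB CNAME
```

Gradually shifting the weights toward green, and finally to 0/100, gives the smooth cutover described above.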