Training Video Course

AWS DevOps Engineer Professional: AWS DevOps Engineer - Professional (DOP-C01)

PDFs and exam guides are not so efficient, right? Prepare for your Amazon examination with our training course. The AWS DevOps Engineer Professional course contains a complete batch of videos that will provide you with profound and thorough knowledge related to the Amazon certification exam. Pass the Amazon AWS DevOps Engineer Professional test with flying colors.

Rating: 4.53
Students: 122
Duration: 20:29:00 h
$16.49 $14.99

Curriculum for AWS DevOps Engineer Professional Certification Video Course

1. CICD Overview (5:00)
2. CodeCommit - Overview (2:00)
3. CodeCommit - First Repo & HTTPS config (5:00)
4. CodeCommit - clone, add, commit, push (3:00)

1. CloudFormation Overview (7:00)
2. CloudFormation Create Stack Hands On (9:00)
3. CloudFormation Update and Delete Stack (8:00)

1. CloudTrail - Overview (9:00)
2. CloudTrail - Log Integrity (4:00)
3. CloudTrail - Cross Account Logging (3:00)

Amazon AWS DevOps Engineer Professional Exam Dumps, Practice Test Questions

100% Latest & Updated Amazon AWS DevOps Engineer Professional Practice Test Questions, Exam Dumps & Verified Answers!
30 Days Free Updates, Instant Download!

Amazon AWS DevOps Engineer Professional Premium File
$43.99 $39.99

  • Premium File: 208 Questions & Answers. Last update: Feb 18, 2025
  • Latest Questions
  • 100% Accurate Answers
  • Fast Exam Updates
Amazon AWS DevOps Engineer Professional Training Course
$16.49 $14.99

  • Training Course: 207 Video Lectures
  • Latest Questions
  • 100% Accurate Answers
  • Fast Exam Updates
Amazon AWS DevOps Engineer Professional Study Guide
$16.49 $14.99

  • Study Guide: 476 Pages
  • Latest Questions
  • 100% Accurate Answers
  • Fast Exam Updates

Free AWS DevOps Engineer Professional Exam Questions & AWS DevOps Engineer Professional Dumps

File Name (Size, Votes)

amazon.braindumps.aws devops engineer professional.v2023-04-03.by.eliska.334q.vce (2.22 MB, 1)
amazon.pass4sureexam.aws devops engineer professional.v2021-10-05.by.blake.328q.vce (1.24 MB, 1)
amazon.examlabs.aws devops engineer professional.v2021-04-16.by.aaron.321q.vce (1.6 MB, 2)
amazon.actualtests.aws devops engineer professional.v2021-03-03.by.lincoln.236q.vce (901.51 KB, 2)

Amazon AWS DevOps Engineer Professional Training Course

Want verified and proven knowledge for AWS DevOps Engineer - Professional (DOP-C01)? It's easy with ExamSnap's AWS DevOps Engineer - Professional (DOP-C01) certification video training course by your side, which, along with our Amazon AWS DevOps Engineer Professional exam dumps and practice test questions, provides a complete solution to pass your exam.

SDLC Automation (Domain 1)

32. CodePipeline - Manual Approval Steps

Okay, so we have our pipeline: we do source, we test it, we deploy it to a development environment, and maybe after a while we want to deploy it to the production environment. So why don't we edit our pipeline and add a stage at the very end? The stage is called "Deploy to Prod," and we'll add this stage right here. In Deploy to Prod, we obviously want to deploy to Prod, so the action provider will be AWS CodeDeploy. We'll use the input artifacts from the test results, and the application name will be DeployDemo; we'll accept that.

The deployment group will be the production instances. Excellent. We click on Done. And if we left it this way, after a successful deployment in Dev, the deployment to Prod would take place immediately. Maybe this is the behaviour we want, but maybe we want to be a little bit more careful, and we actually want to have someone do user acceptance testing in the development environment, to ensure that the changes are okay and approved before we deploy to production. So, as such, we'll add an action group just before, and I'll call this one Manual Approval; the action provider will be Manual Approval. Here we can configure the approval request: we can specify an SNS topic, for example, if we wanted to send a notification that manual approval is needed, and there's a URL for review.

For example, you might want to provide a URL for reviewing the application, so maybe you want to say: "Okay, look at this URL and check that the application is to your liking; if so, go ahead and approve it." When you're ready, please approve Deploy to Prod, and click on Done. Okay, so now we have two action groups in this stage, and they are sequential: here we have the Manual Approval, and when it's done, we go to Deploy to Prod with AWS CodeDeploy. Earlier we had actions that were parallel, happening at the same time, but here they're sequential, and so, as such, the manual approval has to happen first before we use AWS CodeDeploy to deploy to Prod. So let's click on Done, click on Save and Save, and let's test this new pipeline. I'll click on Release change, and now the pipeline will go through the entire process, so I'll wait until we get to the stage where I need to manually approve things. Okay, so my deployment happened and the upload to S3 succeeded, and now we are in the Deploy to Prod stage, waiting for a manual approval to happen before the production instances get the deployment. So if we have a look, this is my Dev instance, and it says Congratulations V7.

And if we go to our EC2 instances and find a production server, for example this one, for now it says Congratulations V5. So let's review the manual approval: why don't I just say, "Yes, the changes look good," and approve? Now that it's been approved, the Deploy to Prod action should occur, and CodeDeploy can begin. So, let's see... wait a second. Here we go. The AWS CodeDeploy deploy-to-Prod action will happen in our deployment group, and the deployment group should be my production instances. As of now, one of the three instances has been updated; remember, we decided to update them one at a time, so that will take a little bit of time. But if I refresh this page, I now see Congratulations V7. The application was deployed using AWS CodeDeploy. So everything worked just fine, and the deployment has succeeded. That is really cool. I think it's a really nice way of chaining up deployments and seeing how they work. So let's get back to CodePipeline. We have just observed how manual approvals work and how they're very important in making sure that we deploy to Prod only when we want to. This is part of continuous delivery. All right, that's it. I will see you in the next lecture.
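Before moving on, here is roughly what the stage we just built looks like in a pipeline's JSON definition. This is a sketch, not the exact pipeline from the video: the topic ARN, review URL, and input artifact name are made-up placeholders, while DeployDemo and the production deployment group mirror what was configured above.

    {
      "name": "DeployToProd",
      "actions": [
        {
          "name": "ManualApproval",
          "actionTypeId": { "category": "Approval", "owner": "AWS", "provider": "Manual", "version": "1" },
          "runOrder": 1,
          "configuration": {
            "NotificationArn": "arn:aws:sns:us-east-1:123456789012:approval-topic",
            "ExternalEntityLink": "http://my-dev-instance.example.com",
            "CustomData": "Please review the dev environment before approving."
          }
        },
        {
          "name": "DeployToProd",
          "actionTypeId": { "category": "Deploy", "owner": "AWS", "provider": "CodeDeploy", "version": "1" },
          "runOrder": 2,
          "inputArtifacts": [{ "name": "TestOutput" }],
          "configuration": {
            "ApplicationName": "DeployDemo",
            "DeploymentGroupName": "production-instances"
          }
        }
      ]
    }

Because the approval action has the lower runOrder, CodeDeploy only runs once the approval is granted.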

33. CodePipeline - CloudWatch Events Integration

Okay, now you're going to get used to it, but we still have to look at the integration between CodePipeline and CloudWatch Events. So let's go back to CloudWatch Events. The first thing that I want you to remember is that CodePipeline did create a rule for CodeCommit as a CloudWatch Event: when a change occurred in the CodeCommit source repository, our demo pipeline would be triggered automatically. We could also create a rule ourselves. For example, we could create a schedule and say every five days, or every one day, for example once a day; then we add a target, and the target would be CodePipeline, and we would have our pipeline being invoked every day. Why not? So this is a way to have the pipeline be invoked on a schedule. Okay, but more importantly, we could create rules and look at the service CodePipeline itself to see the kinds of events that can happen in it. We could select all of the event types, which would result in a very simple event pattern, but we could also select other types of events.

So the first one is the CodePipeline execution state change, and we could choose any state, or say okay, we want canceled, failed, resumed, started, succeeded, and superseded. If we selected, for example, failed, we could say: okay, when there is a failure in the pipeline, maybe I want to invoke a Lambda function, and that Lambda function will send a notification into a Slack channel. That could be a way. Or, with any state, we could say that anything that happens in this pipeline should be sent to Slack. We can also have a stage execution state change (again, with the different states for each stage) or the action execution state change, which has its own states as well. And finally, we could have all the events go into a Lambda function. So with CodePipeline, we're able to say: okay, if something happens or goes wrong at the pipeline, stage, or action level, then do something about it; choose a target and do whatever you want with it. It could be an SNS topic for email notifications. It could be a Lambda function if you want to integrate with a third party. It could be a Kinesis stream if you want to build a real-time dashboard, and so on.

Or it could be a Firehose delivery stream if you wanted to have a history of all the things that happen in CodePipeline delivered to S3, for example. Who knows; you have lots of different opportunities to combine CloudWatch Events sources and targets. But what I want you to remember is that the more we add in CodePipeline, the more detailed this event pattern becomes, and the exam will expect you to understand how to read this kind of event pattern. So have a look at it. It's very simple; it's just a JSON document. Here it's saying the source is CodePipeline, the detail-type is the pipeline execution state change, and the detail must be that the state is FAILED. It's fairly simple, but still good to know. And always take a look at what a sample event may look like. Although this one doesn't have a sample event for the current event pattern, maybe for "any state" it will have one. Here we go: this is what a sample event could look like. It's also really interesting to look at the information that is contained within a sample event, because this information is what you could use in your Lambda function, for example. Okay, so that's it for this lecture. I hope you liked it, and I will see you in the next lecture.
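As a reference sketch, the failed-pipeline event pattern described above looks something like this:

    {
      "source": ["aws.codepipeline"],
      "detail-type": ["CodePipeline Pipeline Execution State Change"],
      "detail": {
        "state": ["FAILED"]
      }
    }

Whatever target you attach to the rule (SNS, Lambda, Kinesis, Firehose) then receives the matching events.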

34. CodePipeline - Stage Actions, Sequential & Parallel

Okay, so in CodePipeline, we've seen that we can have a deployment stage, and within each stage, we have actions and action groups. So if we go and edit a pipeline and look at this, this stage has only one action group, and these two actions are on the same level: they are parallel actions, so they will happen at the very same time. And in the stage at the bottom, we have sequential actions, and sequential means that the manual approval needs to happen first.

And then, when the manual approval is done, we will deploy to Prod. So you can start thinking about the fact that we can have a lot of different combinations of parallel and sequential actions. I'll just create a dummy stage here to show you. So let me add an action group, and I'll choose a manual approval, and then click on Done. Okay, so I can have many different manual approvals in parallel if I want to; we just need to change the action name, so I'll use two A's, and so on. I can also have many different approvals done sequentially; I'll use three A's this time. Here we go. This is just a notation; it doesn't have to be manual approvals, but they're quick for me to create. Here we have a lot of flexibility as to the sequentiality and parallelism between the different action groups. So, as we can see here, this action group occurs first, and there is one action in it. Then, sequentially, this new action group will happen second.

And these two actions will happen in parallel. And then finally, when both of them are done, sequentially, we'll get to this last one. So we have the option to mix and match sequential and parallel actions. This is fine from the console, but you also need to know what it looks like in code, and in code, it's called run order. If you define your pipeline as code, there will be a run order for each of these actions within the stage. The run order is an integer, and the default value is one. A higher run order means that the action will happen later, and if two actions have the same run order, they will happen in parallel. So if we go back to the pipeline and look at these actions, this one may have a run order of one.

So it's the first to happen. This one and this one will have a run order of two, because they happen at the very same time and they happen after the first one. And this one may have a run order of three. So this is how you define, in code, that these actions are sequential while those actions are parallel. This is something you need to remember before going to the exam: you can have sequential and parallel actions, and the way to define them is to use the runOrder parameter. Okay, that's it for this lecture. Very short, but I wanted to show you this. I'll go ahead and delete this dummy stage, since I don't need it; I just wanted to show you how that worked. And I'll cancel my changes. Overall, that's it. I will see you in the next lecture.
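As a sketch, the dummy stage built above could be expressed like this in a pipeline definition (the action names are illustrative):

    "actions": [
      { "name": "Approval-A",
        "actionTypeId": { "category": "Approval", "owner": "AWS", "provider": "Manual", "version": "1" },
        "runOrder": 1 },
      { "name": "Approval-AA",
        "actionTypeId": { "category": "Approval", "owner": "AWS", "provider": "Manual", "version": "1" },
        "runOrder": 2 },
      { "name": "Approval-AB",
        "actionTypeId": { "category": "Approval", "owner": "AWS", "provider": "Manual", "version": "1" },
        "runOrder": 2 },
      { "name": "Approval-AAA",
        "actionTypeId": { "category": "Approval", "owner": "AWS", "provider": "Manual", "version": "1" },
        "runOrder": 3 }
    ]

Approval-A runs first (runOrder 1), Approval-AA and Approval-AB run in parallel (both runOrder 2), and Approval-AAA runs last (runOrder 3).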

35. CodePipeline - All Integrations

So now we need to talk about all the use cases of what you can do with CodePipeline. And as you can see, there are a lot of different services we can use. There is a documentation page called Best Practices and Use Cases, and I do recommend you read it. It gives you some examples of how to use CodePipeline: for example, CodePipeline with S3, CodeCommit, and CodeDeploy; with third-party action providers such as GitHub and Jenkins; with CodeBuild to build and test code; and with ECS, Elastic Beanstalk, AWS Lambda, and CloudFormation.

So there are a lot of different use cases for CodePipeline, and I do recommend you just read through them and understand how they work and what they're trying to do at a very high level. If we go back to CodePipeline now and edit our pipeline, I just want to show you all the various actions we can have in there and talk about them at a high level, so that we understand exactly what we could do with CodePipeline across all possible use cases. The first one is manual approval; we've seen this. This is when we want to have some kind of review before we deploy into production. CodeBuild is when we build or test our code: when we test our code, we don't produce any artifacts, but when we build our code, we can produce artifacts and pass them on to the next stage. CodeBuild and CodeDeploy we have seen throughout this entire section, and Jenkins is a way to integrate with a third-party build service. Now, for the deployment options, we have CloudFormation: it is possible to deploy an entire CloudFormation template using CodePipeline.
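As a sketch (stack name, template path, and role ARN are placeholders), such a CloudFormation deploy action could look like this in the pipeline definition:

    {
      "name": "DeployInfra",
      "actionTypeId": { "category": "Deploy", "owner": "AWS", "provider": "CloudFormation", "version": "1" },
      "inputArtifacts": [{ "name": "SourceOutput" }],
      "configuration": {
        "ActionMode": "CREATE_UPDATE",
        "StackName": "my-app-stack",
        "TemplatePath": "SourceOutput::template.yaml",
        "Capabilities": "CAPABILITY_IAM",
        "RoleArn": "arn:aws:iam::123456789012:role/cfn-deploy-role"
      }
    }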

Because CloudFormation is infrastructure as code, it is quite common for people to store that code in CodeCommit, update the CloudFormation template in CodeCommit, possibly validate it using CodeBuild, and finally deploy it using CodePipeline and the CloudFormation integration; it is definitely a very common pattern. CodeDeploy we've seen at great length. Then there's Elastic Beanstalk, if we want to deploy our application onto an Elastic Beanstalk target and have automated rolling upgrades, for example. There's Service Catalog: at the end of the day, a Service Catalog product is a CloudFormation template, which we will discuss later in this course. There are ECS and Alexa Skills: if we wanted to deploy Docker containers to ECS, we can easily do so with blue/green deployments, and for blue/green deployments, we'll use CodeDeploy on the back end. Or we can deploy to Amazon S3, and we've seen this in this course, where we can, for example, upload a zip archive artifact into an S3 bucket in our account or in another region. We could also invoke an AWS Lambda function, and I'll be talking about this in greater detail in the next lecture. For sources, we can obtain code from CodeCommit, ECR for Docker containers, S3 for zip archives, and GitHub for code.

So for code, we would use CodeCommit, S3, and GitHub. When it comes to containers, maybe we'll use ECR if we want to have a continuous delivery pipeline that has ECR as the source and ECS as the deployment target; that's definitely something that's possible. By the way, if you do specify multiple sources, every time the pipeline is run, each source will be refreshed and the code will be pulled again. Now for testing: we can test with CodeBuild, as we've seen in this section. We can use Device Farm, where we test applications on many actual physical devices; this is the AWS Device Farm. There is also Jenkins, plus BlazeMeter for load testing and so on, Ghost Inspector for UI testing, and Runscope for API monitoring.

These are more external services, but they allow you to run different kinds of testing, such as load testing, UI testing, monitoring, and so on. So you have a lot of possibilities with CodePipeline, obviously, and these integrations are definitely very handy. One thing I want you to notice is that there's Lambda, and with Lambda we can run anything we want; I'll be talking about it in the very next lecture. So now we have seen all of the possible CodePipeline options. I recommend that you go through the use cases (I have the link in the resources) to understand how to use CodePipeline in different scenarios, such as with ECS for continuous delivery of container-based applications to the cloud. And I will see you in the next lecture.

36. CodePipeline - Custom Action Jobs with AWS Lambda

So there's a way to integrate CodePipeline with AWS Lambda. AWS Lambda will allow us to perform any kind of custom action that we want, and this will let us extend our pipeline to do anything we want in general. Here is an example from the documentation: it's called "Invoke an AWS Lambda function in a pipeline in CodePipeline," and it gives you some use cases for Lambda functions in CodePipeline. For example: to roll out changes to your environment by applying or updating AWS CloudFormation templates; to create resources on demand in one stage of a pipeline using CloudFormation and then delete them in another stage (this is really helpful if you want to do some load testing); or, if we deploy to Elastic Beanstalk and want a zero-downtime deploy, we could use a Lambda function to swap the CNAME values for us. We could also deploy to Amazon ECS Docker instances, or back up resources before building or deploying by creating an AMI snapshot, or finally, add integration with third-party products to your pipeline, such as posting messages to an IRC client or a Slack channel. So those are a lot of examples, and we'll just follow the tutorial in there. There are two tutorials: a simple Lambda function that checks a URL, and a more complex function that does something with CloudFormation.

Overall, we'll just follow the very simple tutorial here, the one that checks a URL. So we have to create a Lambda function. Let's go to the Lambda service and create a function for our pipeline. Okay? So, in Lambda, I will go ahead and create a function, and I will call it lambda-code-pipeline. The runtime will be Node.js 10.x. Permissions-wise, we'll create a new role with the basic Lambda permissions, and we'll edit that role later on. Okay? We create the function, and we're done. Now the next thing we have to do is to adjust the execution role, because the policy needs to be a bit more than the basic Lambda execution role. We need to have access to the logs, obviously, but we also need access to two actions on CodePipeline: PutJobSuccessResult and PutJobFailureResult. The reason we need these two actions is that we need to be able to tell CodePipeline, after our Lambda function has run, whether the job itself was a success or a failure. So let's leave Lambda and go to IAM, and let's go to Roles. We'll refresh the roles, and there must be a role that was created for our Lambda function: the lambda-code-pipeline role, here we go. And we'll create a new policy. So we'll attach a policy; we'll add an inline policy.
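The inline policy from the tutorial looks roughly like this. A sketch: these two CodePipeline actions do not support resource-level permissions, hence the wildcard resource.

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "codepipeline:PutJobSuccessResult",
            "codepipeline:PutJobFailureResult"
          ],
          "Resource": "*"
        }
      ]
    }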

The policy will be JSON, so we copy this entire document right here, review the policy, and name it something like "access to CodePipeline custom actions, to report success or failure." Here we go; it's pretty clear about what it does. Okay, so now our Lambda function has the basic execution role and also has access to the CodePipeline custom-action APIs. Back in our Lambda function, we could add a trigger to it, but no, we'll do this later from CodePipeline; we'll just edit the code. The code is very simple; it is in the tutorial, so we'll copy this entire thing and look at it in a second. So here it is. This is our function, and it uses the AWS SDK to make the calls to CodePipeline. We retrieve the job ID, and we retrieve a URL from that job's data; the URL is a user parameter that we passed in. This function will test the URL and make sure that it is working and returning a 200 before reporting the result back to CodePipeline, saying: yes, things worked. So there's a putJobSuccess function, and this putJobSuccess function calls the PutJobSuccessResult API call; you need to remember this one. And the putJobFailure function calls the PutJobFailureResult API call. Then we look at the URL and make sure that it does have http in it. And finally, there's a helper function to do a GET on the page.

And at the very, very bottom, we check whether the status code was 200. If it was, and the returned page contains the word Congratulations, then we put a job success: test passed. Otherwise, we put a job failure. So, a fairly complicated function, but a really well-rounded one, and we're going to test it in a second. So this has been saved. Now let's go to CodePipeline; we're going to edit our pipeline, and we're going to have a test right after the deploy. We'll have a Test Web Pages stage; this will be a new stage here. And we'll add an action group; I'll call it LambdaTestEC2, and the action provider will be Lambda, obviously. So here it is: Lambda. The region is good, the input artifacts are fine, the function name is lambda-code-pipeline, and the user parameters need to pass in a URL. So if we go into EC2 and find one of the EC2 instances that we deployed to (is this one a development instance? yes, it is), we can paste its URL right here. The idea is: test this URL over http and make sure that the page has the word Congratulations. And the output artifacts: LambdaTest1. Click on Done, and here we go.

We have our first Lambda test, but we could definitely have many of those. So I'll add a LambdaTest2, and the action provider will be, yet again, Lambda; we can have the same input artifacts; the function name will be lambda-code-pipeline; and for the user parameter I'll pass google.com. This is obviously going to fail, because google.com does not have the word Congratulations in it. And the output artifacts I'll call LambdaTest2. Click on Done, and here we go: we are testing two web pages with Lambda, but we could obviously test a lot more. And so this is our pipeline. We save this (Save and Save), and now we're going to test whether or not it works. Let's just run this pipeline once: we click on Release change and wait for everything to happen. So the source, then the test, the deploy, and then Test Web Pages. Let's wait a second. And so the pipeline has run, and we got to the Test Web Pages stage. Here, the first Lambda test succeeded; we can look at its details and have a look at all the logs for it. And the other one failed, and we can look at the details as well and understand that, okay, we were actually expecting to find the word Congratulations.

And it wasn't there: the check was false instead of true. We can follow the link to the execution details and have a look at the CloudWatch Logs again to see what went wrong; it says, okay, we wanted a 200 but we did not get a 200, or something like this. So, overall, Lambda is very powerful, because it allows us to write so many different types of tests, integrations, and so on. All that you have to remember, though, is that the Lambda function needs to report back to CodePipeline using the PutJobSuccessResult API or the PutJobFailureResult API, and it's worth looking at these APIs to see how they work. With the PutJobSuccessResult API, you can also provide something called a continuation token: a token generated by a job worker, which is a way to identify the job when reporting whether it succeeded or failed. So this was a really cool way of integrating a Lambda function with CodePipeline; remember the name of the PutJobSuccessResult API and the continuation token that goes with it. All right, that's it for this lecture. I will see you in the next lecture.
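To summarize the pattern, here is a minimal Node.js sketch in the spirit of the tutorial's function. It is simplified (it skips the tutorial's URL validation and page helper), but the two result calls are the real CodePipeline APIs to remember.

    // Sketch only: based on the "invoke a Lambda function in a pipeline"
    // tutorial, simplified for readability.
    const AWS = require('aws-sdk');
    const http = require('http');
    const https = require('https');

    const codepipeline = new AWS.CodePipeline();

    exports.handler = (event, context) => {
        // CodePipeline passes the job in the event; we need the job ID
        // to report success or failure back to the pipeline.
        const job = event['CodePipeline.job'];
        const jobId = job.id;
        // The URL to test arrives through the action's user parameters.
        const url = job.data.actionConfiguration.configuration.UserParameters;

        const putSuccess = (message) =>
            codepipeline.putJobSuccessResult({ jobId }, () => context.succeed(message));

        const putFailure = (message) =>
            codepipeline.putJobFailureResult({
                jobId,
                failureDetails: {
                    message: JSON.stringify(message),
                    type: 'JobFailed',
                    externalExecutionId: context.awsRequestId,
                },
            }, () => context.fail(message));

        // GET the page; pass only on a 200 that contains "Congratulations".
        const client = url.startsWith('https') ? https : http;
        client.get(url, (res) => {
            let body = '';
            res.on('data', (chunk) => (body += chunk));
            res.on('end', () => {
                if (res.statusCode === 200 && body.includes('Congratulations')) {
                    putSuccess('Test passed.');
                } else {
                    putFailure('Test failed: status ' + res.statusCode);
                }
            });
        }).on('error', (err) => putFailure(err.message));
    };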

Prepared by top experts, the top IT trainers ensure that when it comes to your IT exam prep, you can count on the ExamSnap AWS DevOps Engineer - Professional (DOP-C01) certification video training course, which goes in line with the corresponding Amazon AWS DevOps Engineer Professional exam dumps, study guide, and practice test questions & answers.
