CompTIA Cloud+ CV0-003 – Domain 1.0 Configuration and Deployment Part 5
When it comes to deploying resources, an area that typically has a lot of confusion tied to it is: how much memory do I really need? Once again, it’s fairly well known that having more memory is typically a better option than having too little, especially when it comes to specific types of applications. So let’s talk about memory in the cloud and some of the areas you’ll want to know about and be aware of for the Cloud+ exam. You can certainly expect a couple of questions on the exam testing and validating your knowledge of cloud memory requirements.
When it comes to sizing your VM memory, be aware that RAM is expensive, so use it judiciously. There are going to be different configurations in the machine images. For example, you could choose a small configuration with, say, only a few gigs of memory, followed by a medium configuration with slightly more memory, and so on. You’ll need to be aware of that. A very common question I get is: what if we don’t need all that CPU but we actually need more memory, or what if we would like to change the configuration? You certainly can do that with a lot of the cloud vendors; they’ll allow you to customize the amount of memory. Google Cloud is very friendly when it comes to allowing consumers to adjust their machine images, performance requirements, and memory optimization. These are some other areas you want to look at.
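To make that sizing idea concrete, here is a minimal Python sketch of picking the smallest configuration that satisfies a memory requirement. The catalog below is hypothetical; the size names and specs are illustrative, not any provider’s real offerings:

```python
# Hypothetical catalog of machine sizes: (name, vCPUs, memory in GiB).
MACHINE_SIZES = [
    ("small", 2, 4),
    ("medium", 4, 16),
    ("large", 8, 32),
]

def pick_size(required_mem_gib: float, required_vcpus: int = 1) -> str:
    """Return the first (smallest, cheapest) size meeting both requirements."""
    for name, vcpus, mem in MACHINE_SIZES:
        if mem >= required_mem_gib and vcpus >= required_vcpus:
            return name
    raise ValueError("No predefined size fits; consider a custom machine type")

print(pick_size(12))  # -> "medium": enough RAM, without paying for "large"
```

The point of the exercise: you pay for the whole configuration, so match the size to the workload rather than defaulting to the biggest image.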
For example, if you have too little memory, then what can happen? Your application may not process data the way you would expect. It may have a negative impact on the user experience. It could also result in a loss of business or a loss of transactions. So be aware of those areas. Now, what are the types of memory? When it comes to memory selection, what I have typically found is that the cloud providers have a really good understanding of what the customer requires, needs, and is looking for. So, chances are, if you’re using RAM, it will meet the majority of your workload requirements. But on the other hand, maybe RAM is too expensive and you want to use a hard disk or virtual memory approach, or maybe you want to use cache for specific types of processing.
You can do that. When it comes to memory selection, cloud providers, in my opinion, are very easy to work with. Let’s go ahead and talk about each of these. RAM is random-access memory: you can access a memory cell directly if you know the row and column that intersect at that cell. Again, you don’t need to know the technical merits of RAM. Be aware of what RAM is, but you don’t have to define it. I do want you to be aware, for example, that this is volatile memory. In other words, it’s not persistent: if you reboot that VM, the data that is in RAM is typically lost. You don’t need to worry about SRAM (static RAM) on this exam.
So let’s proceed to cache. Cache typically has multiple levels. We’re going to focus mainly on Level 1 (L1), but I want you to be aware of L2 and L3 as well. Cache is, again, a faster but smaller memory type that essentially accelerates larger memory types. Caching is typically used to help you boot up faster, process something faster, or ingest data faster. It’s much faster than simply storing the data on a hard disk, so it’s cost-effective in that sense. That’s the one thing I want you to be aware of. When it comes to other memory technologies, we have what’s called “memory bursting,” “memory ballooning,” and “memory overcommitment ratios.” On the exam, I definitely saw one of these. I can’t tell you exactly which one, but I want you to know what they are so that when you do see a question, you have a good idea of where to go with it. Memory bursting, or “burst mode,” is usually implemented by allowing a device to seize control of the bus and not permitting other devices to interrupt. I like to compare this to quality of service for my memory. This is saying, “I’m going to get this process done, I’m going to guarantee it gets done, and I’m not going to let anyone else use that available resource until I’m done.” Burst mode also automatically fetches the next memory contents before they are requested. So memory bursting is essentially burst mode, and it’s a popular feature that you may see as well. Now, memory ballooning sounds similar to bursting, but ballooning is really where you’re reclaiming memory from the virtual machine. Memory bursting is typically a feature of the hardware architecture, whereas ballooning is focused on virtual machines; it is specifically a feature on VMware ESXi as well.
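To see why a small, fast cache in front of slower storage pays off, here is a toy Python illustration. This is only an analogy: functools.lru_cache plays the role of the fast layer, and a sleep simulates a slow disk read; the timings and key names are made up:

```python
import time
from functools import lru_cache

def read_from_disk(key: str) -> str:
    time.sleep(0.1)               # simulate a slow hard-disk access
    return f"value-for-{key}"

@lru_cache(maxsize=128)           # small, fast layer in front of slow storage
def read_cached(key: str) -> str:
    return read_from_disk(key)

start = time.perf_counter()
read_cached("user:42")            # cache miss: pays the slow disk read
miss = time.perf_counter() - start

start = time.perf_counter()
read_cached("user:42")            # cache hit: served straight from memory
hit = time.perf_counter() - start

print(f"miss: {miss:.3f}s, hit: {hit:.6f}s")
```

The second read returns orders of magnitude faster, which is exactly the accelerate-the-larger-memory-type behavior described above.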
So ballooning is where the hypervisor essentially reclaims memory from the virtual machine. This is a process you can run manually or automatically. When it comes to overcommitment, most hypervisors definitely support this feature. Memory overcommitment is a hypervisor feature that allows a VM to use more memory space than the host physically has. This is essentially overprovisioning. Now, on to hard drive storage. Again, people use hard disks to store data, and people use SSDs to store data. I think we’re familiar with the difference between a hard disk and an SSD, and I’m not here to teach you the basics of each of these, per se. Again, Cloud Essentials is really where it’s at if you need to understand the differences. But let me take a minute to reinforce one point. Generally, my recommendation is that customers that want to guarantee some level of service or process specific types of data should use SSDs. On the other hand, if the data is just being stored for compliance reasons, or because it isn’t critically important and isn’t used in production SQL instances or something, then a hard disk is perfectly fine. The cost differential could be ten times for some of the cloud providers.
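To put that differential in perspective, here is a quick back-of-the-envelope calculation. The per-GiB prices are hypothetical, not any provider’s actual rates; they just reflect the roughly ten-times gap mentioned above:

```python
# Illustrative prices only (hypothetical), showing a ~10x HDD/SSD gap.
hdd_price_per_gib_month = 0.02
ssd_price_per_gib_month = 0.20

data_gib = 10_000  # say, 10 TiB of cold compliance/archive data
print(f"HDD: ${data_gib * hdd_price_per_gib_month:,.2f}/month")  # $200.00
print(f"SSD: ${data_gib * ssd_price_per_gib_month:,.2f}/month")  # $2,000.00
```

At these illustrative rates, parking cold data on SSD costs $2,000 a month instead of $200, month after month.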
So just be aware that this is a big cost to your organization if you store everything on SSD, so be cautious. I guess the term I’m looking for is prudent: there are definitely use cases that don’t warrant SSDs, but there are also use cases that can’t rely solely on hard disks. Once again, when you scale your enterprise up, the biggest costs you’re going to have are going to be your VMs and your storage. Scaling virtual machines: you’ve seen this slide several times throughout the course, and I’m going to keep repeating it because you’ll likely get tested on scaling memory or CPUs quite a bit. Okay, again, go to Amazon Web Services. In my demos, I walk you through Amazon as well as Google during the course, just to give you an idea of how a template is set up, how you can deploy a VM, et cetera. So it is important to determine and select the right machine image the first time. Fortunately, there are several options for size selection among the cloud providers. My only piece of advice for this exam is to understand the right use case for the right type of memory. Cache is typically used for use cases that require fast offloading or loading, as well as quick decisions. Use RAM when money isn’t so much of a concern but you need performance; otherwise, store it on a hard disk, again, to save costs when performance may not be a critical factor. Again, know the right use case. Here’s an exam tip: know that “memory overcommitment” is a hypervisor feature that allows a VM to use more memory space than the host actually has. Again, I remember this by calling it “overprovisioning” your memory.
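Since that exam tip comes up, here is a minimal arithmetic sketch of what an overcommitment ratio looks like. The VM sizes and host size are made-up numbers:

```python
# Memory overcommitment ratio: memory promised to VMs vs. physical host RAM.
vm_memory_gib = [8, 8, 16, 32]   # memory allocated to each VM (hypothetical)
host_memory_gib = 48             # physical RAM on the hypervisor host

ratio = sum(vm_memory_gib) / host_memory_gib
print(f"Overcommitment ratio: {ratio:.2f}:1")  # 1.33:1 -> overprovisioned
```

Anything over 1:1 means the hypervisor is betting the VMs won’t all demand their full allocation at once, which is exactly the overprovisioning idea.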
Storage provisioning. When you’re provisioning resources in the cloud, whether it’s storage or VMs, you want to understand that there are specific workflows to follow. There will be specific templates you may have to use, or machine images. Once again, just be aware of what they are. I’m not going to go into a dissertation about how you deploy in the cloud. This exam does not really need you to know that. If you want to know more about how to deploy on Google or Amazon, there are specific courses on how to do that. I do have several demos that cover how to deploy VMs and add storage. That, to some extent, covers it. But I just wanted to get you thinking.
Realize that provisioning is typically part of a good lifecycle management process, whether it’s around infrastructure lifecycle management, other agile processes, deployments, or whatever you’re looking at. Just realize that you need to manage your resources effectively, and part of that is to provision them effectively. When it comes to storage provisioning, there are two terms that you really want to know: thick provisioning versus thin provisioning. The question is, why would you want to use thick provisioning over thin, or thin over thick? This exam may put your understanding of the distinction between thick and thin to the test.
I didn’t see anything on my exam, but that doesn’t mean that you will or will not receive something about thick or thin provisioning. I want you to appreciate that thick provisioning is where you pre-allocate resources; this is typically dedicated storage space for a specific host. Thin provisioning, by contrast, is typically allocated on demand: it generally happens after the host requests the additional storage space, and it is typically provided in the form of a pool. One of the things that you want to understand is that traditional allocation is typically thick provisioning, whereas thin provisioning is more flexible and dynamic. Another area that you want to be aware of is storage encryption. Storage encryption is commonly used for both data at rest and data in flight, and it may also be part of your storage infrastructure requirements. So keep encryption requirements in mind when you’re provisioning storage, deprovisioning storage, or doing anything else that touches encrypted data. That could also be part of the compliance requirements.
You may not be able to delete specific volumes before a specific date, for example. Then there’s tokenization. Tokenization is essentially where you substitute a specific data element, typically a sensitive one, with a non-sensitive one (a token). Some of the areas that you want to know about storage security configurations when it comes to provisioning: you may need to set ACLs (access control lists), you may want to use obfuscation, and you may also want to look at SAN zoning or additional IAM requirements. Let’s just make sure you understand the term “obfuscation” here. This is the process of making something difficult to decipher or understand. It is typically done in environments that require a high level of security, so in the government sector this is quite common. And remember, provisioning hosts on demand is post-allocation. Here is an exam tip: make sure you know the difference between thick and thin provisioning for the exam. Understand why you might want to use thick versus why you might want to use thin. Once again, thick provisioning is typically used where you need dedicated space. You’re going to allocate the space before it’s really needed. It is, of course, a little bit more wasteful in that sense, but it has its uses, so be aware of that.
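To cement the thick-versus-thin distinction, here is a toy Python model, purely illustrative and not any vendor’s API. Thick provisioning reserves the full size up front; thin provisioning draws from a shared pool only as data is actually written:

```python
class StoragePool:
    def __init__(self, capacity_gib: int):
        self.capacity_gib = capacity_gib
        self.used_gib = 0

    def provision_thick(self, size_gib: int) -> None:
        # The whole allocation is reserved immediately, written to or not.
        self.used_gib += size_gib

    def write_to_thin_volume(self, data_gib: int) -> None:
        # A thin volume reserved nothing up front; space is drawn on write.
        self.used_gib += data_gib

pool = StoragePool(capacity_gib=1000)
pool.provision_thick(500)       # pool shows 500 GiB used instantly
print(pool.used_gib)            # 500
pool.write_to_thin_volume(50)   # thin volume grows only as data lands
print(pool.used_gib)            # 550
```

You can see why thick is more wasteful but predictable (dedicated space), while thin is more flexible and dynamic but can be oversubscribed if you aren’t watching the pool.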
When it comes to cloud workloads, it’s important to understand that there are different ways to review, assess, and monitor them. It is critical to understand your workloads per application. What I mean by that is that every workload is going to have different scaling requirements and different performance requirements. Again, SQL will scale differently than a Hadoop cluster, and a web server application will scale differently than a CRM application. Simply be aware that there will be patterns in the workload. Make sure that your cloud is elastic enough to accommodate those workloads. There are going to be typical workloads that can be accommodated well in the cloud, whereas other workloads may not be a good fit.
I believe I obtained this from Microsoft, but basically, you must determine whether or not these workloads are suitable for the cloud. For example, if it’s a burst workload or a steady workload, understand whether it’s going to be accommodated well in the cloud. If it’s transactional, then perhaps the workload will be more focused around data, or around transactions that are financial, whatever that is. Just be aware of which apps are appropriate and which are not. Reviewing the vendor’s best practices is one of the best practices to be aware of. Correlate your workload to the vendor’s SLAs. Justify the cost-versus-performance trade-off. Sometimes paying more doesn’t make sense.
Sometimes paying more makes a lot more sense. Scaling can be orchestrated and automated, which means it won’t be dependent on manual monitoring; let’s say you set up auto scaling. Most cloud vendors offer services that will auto scale, load balance, and address not only performance issues but also disaster recovery and business continuity. We just touched on load balancing and on workloads: identify your peak loads and scale. Here is a test tip: understand what workloads are generally acceptable for the cloud. Typical mission-critical workloads, for example, would not be a good fit for the cloud, but big data could be a great fit.
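To illustrate the auto-scaling idea, here is a simplified sketch of the threshold logic such a service might apply so that scaling isn’t dependent on manual monitoring. The thresholds, minimums, and policy are hypothetical:

```python
def scaling_decision(cpu_percent: float, instances: int,
                     scale_out_at: float = 70.0,
                     scale_in_at: float = 30.0,
                     min_instances: int = 2) -> int:
    """Return the new desired instance count for the current load."""
    if cpu_percent > scale_out_at:
        return instances + 1                 # add capacity for peak load
    if cpu_percent < scale_in_at and instances > min_instances:
        return instances - 1                 # shed idle capacity, save cost
    return instances                         # steady state: do nothing

print(scaling_decision(cpu_percent=85.0, instances=3))  # -> 4
print(scaling_decision(cpu_percent=20.0, instances=3))  # -> 2
```

Real services layer on cooldown periods, health checks, and multiple metrics, but the core decision is this kind of threshold comparison.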
When it comes to analyzing cloud workloads, let’s discuss a few areas that you want to know for this exam. Workload migrations, of course, generally require planning, and migrations are time-consuming as well. Understand that interoperability and portability could be issues, too. You want to consult the cloud vendor for best practices, and you want to validate APIs as well. Portability allows you to move a VM from one provider to another, from one VM platform to another platform. Again, you need to check how you would be able to do that. Does the vendor, the cloud provider, support it? You’re usually supported if you’re using an open format, but again, be aware of what is supported. Interoperability, for example, allows you to move from one cloud provider to another. So if you want to move a VM from GCP to Amazon, you can do that, but once again, there are going to be limitations and configuration issues. You may want to look at the migration path. When you’re migrating to a cloud provider, it takes significant planning, and you, of course, need to determine the correct migration path.
Generally, you can move from a virtualized format to the cloud. Generally, not always, but generally, that workload can be migrated fairly simply through the use of virtual machine tools in a lot of cases. Once again, we want to make sure that you are aware of some areas around migrating to the cloud. Now, what about legacy? What if you’re operating Windows 2000, oddly enough? Somehow, and I don’t know how, you could perhaps rehost that application, but it probably won’t be supported in the cloud. You need to check with the provider, and you may need to build a custom image, for example. If you’re going to look at Windows or Linux, most Linux versions are supported, so that’s generally not much of an issue, especially if you want to run a custom version. When it comes to moving to infrastructure as a service, there are basically two approaches: you could rehost the application or refactor it. Refactoring means that you’re going to make bug fixes or some adjustments to the code. With platform as a service, you can modify a hybrid mesh with APIs or go ahead and rebuild. With SaaS, you generally start over; there’s no easy way to go from legacy to SaaS. When it comes to porting, generally you can port cloud to cloud (C2C), which is basically Google to Amazon, or you can go virtual to the cloud (V2C).
So that would be like going from a VMware hybrid cloud to Amazon. You could also go P2V, which is where you go from a physical machine to a virtual machine. Generally, you don’t go physical straight into the cloud; you need to port it over to be virtualized first. There are two types of virtual machine migrations for the exam. Very simply, you want to understand that offline is where you would take a virtual machine and make sure it’s offline before you migrate it. Or you could use something like a vMotion solution, perhaps if it’s VMware, and migrate that virtual machine live; with vMotion, that would be non-disruptive to the application. When it comes to online migrations, there are typically stipulations you need to be aware of. With VMware, you need to have the same processors, and you need to use a solution like vMotion. You generally use that solution to balance workloads but also to optimize them; it’s a very nice solution that VMware has. Generally, I believe, you can migrate up to eight virtual machines at once, but you do need a dedicated link as well. Just be aware that there’s a lot of technical validation you need to address before you can do that. Offline migration: let’s say, for example, you have different processors, the guest has stopped, or you have configuration issues. Then you may need to perform an offline migration. There are three ways to port applications to the cloud; know what they are: V2C, P2V, or C2C. Be aware that there are generally two ways to port a legacy app to the cloud (this is actually more infrastructure as a service, and I probably need to state that better): refactoring or rehosting.
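Here is a rough decision helper reflecting the stipulations just described: a live migration along the lines of vMotion generally needs compatible processors, a running guest, and a dedicated link; otherwise you fall back to an offline migration. The function and its inputs are my own simplification, not a VMware API:

```python
def migration_type(same_cpu_family: bool, guest_running: bool,
                   dedicated_link: bool) -> str:
    """Pick online vs. offline migration from the basic prerequisites."""
    if same_cpu_family and guest_running and dedicated_link:
        return "online (live, non-disruptive to the application)"
    return "offline (power the VM down first)"

print(migration_type(same_cpu_family=True, guest_running=True,
                     dedicated_link=True))   # online
print(migration_type(same_cpu_family=False, guest_running=True,
                     dedicated_link=True))   # offline: processor mismatch
```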
When we decide to take our corporate infrastructure and extend it to the cloud, there are going to be specific areas we want you to be aware of, whether the extension is for cloud bursting, additional performance requirements, or additional services. Whatever the reason is, we want you to be aware of some of the areas of focus, and we’ll especially cover the ones that you may see on the exam. When it comes to extending your cloud, just be aware that you can extend your infrastructure to the cloud. Generally, what I see is that most organizations that do this have a private cloud and want to use a public cloud component. This is generally considered a hybrid cloud, especially when you add in identity management, federation, automation, and orchestration. There are numerous reasons to do this: you might not have enough resources or time to deploy that infrastructure yourself, so using the cloud may actually be a very wise decision from a schedule or cost perspective. A hybrid cloud is a cloud computing environment that uses a mix of on-premises private cloud and third-party public cloud services.
Generally, there’s going to be coordination between the two platforms. For example, with AWS, you could easily tie that in using your own federation services or services that are provided by third-party providers, or you could also use Amazon’s DNS services or IAM services, whatever you’re looking for. Cloud bursting is an application deployment model in which an application runs in a private cloud or data center and then bursts into the public cloud when demand for computing capacity spikes. This is very common in the retail model and very common in the scientific and educational communities as well; you see this quite a bit. What are some of the benefits of cloud bursting? Basically, it gives you on-demand access to the public cloud. You don’t have significant infrastructure investments, CapEx is reduced, and it’s also scalable and elastic, which is a huge benefit for a lot of organizations. When it comes to scaling your services, each provider has different elements: DNS, NTP, IAM, whatever it is. Just be aware of the scaling limitations.
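Going back to cloud bursting for a moment, here is a minimal sketch, not tied to any real provider, of the overflow decision an application or load balancer makes. The capacity number is hypothetical:

```python
# Serve from the private cloud until it is at capacity, then burst the
# overflow to the public cloud.
PRIVATE_CAPACITY = 100          # concurrent requests the private cloud handles

def route_request(active_private_requests: int) -> str:
    if active_private_requests < PRIVATE_CAPACITY:
        return "private cloud"          # normal, steady-state demand
    return "public cloud (burst)"       # spike: overflow uses public capacity

print(route_request(42))    # -> private cloud
print(route_request(100))   # -> public cloud (burst)
```

Steady-state demand stays on infrastructure you already own; you only pay the public cloud for the spike.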
Next up is federation. This is the deployment and management of multiple external and internal cloud computing services that are used to meet the business’s needs. This is also known as “single sign-on.” Generally, I see a lot of customers tie their Active Directory to third-party services and then tie that into things like AWS or Google identity and access management. Once again, you’ll need to have some way to validate that cloud resources are used in the right manner. Authentication and authorization are big deals; you need to set them up wisely. Be aware of who should access what and when. Another area to be aware of is service accounts. When it comes to federated identity protocols, there are three major protocols used for federated SSO: SAML, which is the Security Assertion Markup Language; OpenID; and OAuth. SAML is pretty widely used. OAuth is also used in the right situation, and then there’s OpenID. OpenID is going to be your Facebook login, and you don’t want to use that for your enterprise per se, but it could be used for a customer-focused part of your cloud infrastructure or cloud services you provide to your customers. We’ll talk more about these. So, once again, SAML is the Security Assertion Markup Language; it is XML-based. Now, on the exam, I want you to be aware that this is a very common way to exchange authentication and authorization between parties. Also note that there are actually three different roles: the principal, the identity provider, and the service provider. OAuth, again, is used for authorization, not so much for authentication. It also has three roles. You don’t need to know that for the test, but I just want you to be aware of what it is. OpenID is the open standard that you see on social media and that a lot of other websites use.
Again, if you want to go to the Washington Post and use your Facebook login, they allow you to do that. This is another method of authenticating through a third party. So if you want to use Google, Facebook, or whatever as a third party, you can do that. I would not recommend this for your enterprise per se, but this is great for a customer-facing cloud solution. Then there’s LDAP. This is a software protocol that essentially enables you to locate organizations, individuals, and other resources in a network; it’s a directory service. Simply put, you’re typically going to want to tie Active Directory or LDAP into the cloud service. You may also have to extend DNS, DHCP, certificate services, load balancers, firewalls, and IPS/IDS; those are just some of the services you may want to extend. OpenID is sponsored by the major social media companies as an option for authentication. Once again, I’m not a big fan of OpenID, certainly not for the enterprise, but some consumers actually like it. They prefer to use one ID, and that does give you flexibility. Once again, you’ll need to make sure you set this up appropriately, and never, ever use it for your enterprise network. Okay? For the exam, be aware that SAML, OpenID, and OAuth exist.
Capacity planning. One of the areas around cloud management that you want to be focused on is managing your resources and the capacity of those resources: what you actually require in comparison to what you’re currently using. When it comes to capacity management, you’ll need to look at a whole range of different resources. Once again, it’s not just data storage, it’s not just VMs, and it’s not just the number of users; there are a lot of other things that go into it. So take a look at the bandwidth requirements. Look at your IP schema, your licensing requirements, and any kind of API issues. One of the things you could do is compare this to baselines. Baselines serve a great purpose: they allow you to identify where you started from and where you are now.
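Here is a minimal sketch of that baseline comparison: record where you started, compare current usage, and flag growth before you hit capacity or a pricing-tier threshold. The resource names, numbers, and the 50% flagging threshold are all hypothetical:

```python
baseline = {"storage_gib": 500, "bandwidth_gib_month": 2_000, "vms": 10}
current  = {"storage_gib": 800, "bandwidth_gib_month": 3_500, "vms": 12}

for resource, base in baseline.items():
    growth = (current[resource] - base) / base * 100
    flag = "  <-- review capacity" if growth > 50 else ""
    print(f"{resource}: {base} -> {current[resource]} ({growth:+.0f}%){flag}")
```

Running this flags storage and bandwidth as growth areas while leaving VM count alone, which is exactly the kind of signal a baseline is meant to give you.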
This should help identify growth areas as well. When it comes to network capacity, be aware that exceeding it could affect the performance of your application. There are different areas to investigate: look at latency and jitter, and consider your internet provider. One thing I’ve noticed is that all it takes is for the network provider to make one small, subtle change, or to update protocols or services, and that can definitely affect your performance. The user experience is a big deal as well; users are a good indicator of how well the service is operating. The application stack could be an issue, too. You should also keep an eye on bandwidth with network monitors. When it comes to network capacity, be aware that there are bandwidth limitations that you may run into. One feature of cloud providers is that they have different pricing tiers; once you exceed a certain threshold, you’ll get charged different rates. A lot of customers may not be aware of that. When it comes to limitations, talk to your cloud provider. Make sure you understand any kind of elastic IP limit, for example with Amazon.
With Amazon, we have what’s called Elastic IPs. This is a static IPv4 address designed for cloud computing. Essentially, an Elastic IP can only be associated with one EC2 instance at a time. Limits apply to both VPC and EC2 accounts. Once again, you need to be aware of these limitations; you can run into limits if you’re not careful. NAT is a common source of confusion in the cloud, I think. Once again, NAT is an approach that’s used to help mitigate those IP management challenges that you might run into. For the exam, make sure you understand the difference between NAT and PAT; we’ll cover that as well. PAT stands for port address translation. Be aware that PAT is essentially NAT overloading, which is where you use a single outside IP address and map it to multiple inside addresses using different port numbers. The goal of this is to reduce network overhead but also to conserve IP addresses. An Elastic IP is not the same as an AWS load balancer. AWS load balancing helps you scale out your site by associating many EC2 instances at the same time under one web address, whereas an Elastic IP can only be associated with one EC2 instance at a time. You don’t need to know that for the test; I just want to throw in a little bonus for those folks who are AWS aspirants.
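One last illustration: here is a toy PAT (NAT overloading) table in Python, showing how many inside sockets share a single outside IP address and are distinguished only by the translated port number. The addresses and port range are made up (the outside address is from the documentation range):

```python
from itertools import count

OUTSIDE_IP = "203.0.113.10"          # the single shared public address
next_port = count(start=40000)       # translated source ports
pat_table: dict[tuple, tuple] = {}   # (inside_ip, inside_port) -> (outside_ip, port)

def translate(inside_ip: str, inside_port: int) -> tuple:
    """Map an inside socket to the shared outside IP plus a unique port."""
    key = (inside_ip, inside_port)
    if key not in pat_table:
        pat_table[key] = (OUTSIDE_IP, next(next_port))
    return pat_table[key]

print(translate("10.0.0.5", 51000))  # ('203.0.113.10', 40000)
print(translate("10.0.0.6", 51000))  # ('203.0.113.10', 40001) -- same outside IP
```

Both inside hosts exit through one public address, which is how PAT conserves IP addresses.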