AZ-104 Microsoft Azure Administrator Associate – Configure load balancing part 2
Now I want to show you something that is pretty cool and that you might not even know about, to do with load balancers. I could go into the Azure Portal, create a load balancer, create some VMs, add them into the backend pool, and so on. Or I can go to GitHub. The Azure user on GitHub has an Azure Quickstart Templates repository, and in that repository are hundreds of templates to choose from. As you can see, there are 970 contributors and almost 29,000 commits at this point, so there are tons and tons of templates. Since I want to show you a load balancer, I can press Ctrl+F, enter the word "load", and see that there are a handful of different load balancer templates to choose from.
So here is "load balancer standard create". Scrolling down a little, I can also see a Recovery Services workload (that's not it), an internal load balancer, two VMs behind an internal load balancer, a load balancer with IPv6. Depending on what I want to demonstrate, I can choose from many different templates. Let's go into this standard load balancer. Part of what's cool is that beyond all these examples, there's a button to Deploy to Azure. I'm already logged into the Azure Portal, so I can click Deploy to Azure. It tells you right here that I'm going to get a standard load balancer (which is more than a basic one), a network security group, a virtual network, three network interface cards, and three virtual machines, each in a different availability zone.
So this is a pretty robust load balancer example. I'm going to click Deploy to Azure, and it takes me into the ARM template deployment workflow. Here we can see that it is the "load balancer standard create" template and that it's going to create eight resources. I create a new resource group for it, going into East US 2. It asks for a project name, which is used as part of the various resource names. For location I'll put East US 2 for most of the resources, enter my standard test user ID and password, make sure I typed those correctly, and click Create. Remember, this is going to create a load balancer, a virtual network, a network security group, three network interface cards, and three virtual machines.
Now this is going to take a little while to deploy, but when it's done, I'm going to have all those resources waiting for me to play with. The point is that this GitHub repository is a pretty cool way to deploy resources without having to do all the work of creating and configuring each one individually. ARM templates also feature fairly heavily in the exam. So if you are studying for this exam, you should probably spend at least some of your time, maybe 10%, looking at ARM templates and understanding their different sections and syntax. I highly, highly recommend Azure Quickstart Templates for creating resources that you can later play with.
So we have successfully deployed the load balancer, three virtual machines, three network interface cards, a public IP, a network security group, and a virtual network using that template. We can go into the Deployments blade of the resource group and see that the template deployment succeeded and took around seven minutes to deploy all those things. A lot simpler than setting them up from scratch yourself. Now let's go into the load balancer. The purpose of this video is to look at all of the things that can go wrong with a load balancer setup and what I would do to diagnose them. I want to start off by saying that there are really only four or five settings blades within the load balancer. We can see them listed: the frontend IP configuration, backend pools, health probes, load balancing rules, and network address translation (NAT) rules.
There are also outbound rules, but it's fairly rare that people set up outbound rules for traffic. So what can go wrong with your frontend configuration? Well, this is the publicly available IP address, so if you enter that IP address into a browser, you should get some type of result. It's pretty cool that this template deploys a running web server that returns "Hello World" and the name of the server as part of the configuration; a thumbs up to the template for that. It could be NGINX or something similar. The IP address is what's connected to the world, of course. So if you're using a domain name, you have to make sure that the domain name resolves to the correct IP address, and that domain resolution is all set up.
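As a quick sanity check on the front end, you can verify from a script that a domain actually resolves to the load balancer's public IP before digging into the balancer itself. Here's a minimal sketch using only the Python standard library; the domain and IP you'd pass in are placeholders, not values from this deployment:

```python
import socket

def domain_points_at(domain, expected_ip):
    """Return True if `domain` resolves (over IPv4) to `expected_ip`."""
    try:
        infos = socket.getaddrinfo(domain, 80, family=socket.AF_INET)
    except socket.gaierror:
        return False  # the domain does not resolve at all
    resolved_ips = {info[4][0] for info in infos}
    return expected_ip in resolved_ips

# e.g. domain_points_at("www.example.com", "20.55.1.2") for a real check
print(domain_points_at("localhost", "127.0.0.1"))  # True
```

If this returns False for your domain and frontend IP, the problem is DNS, not the load balancer.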
Now, the IP address has associated rules, and those live under load balancing rules. If I look here, I can see the HTTP LB rule; that's the rule it's associated with. The next blade is backend pools. So what can go wrong with your backend pools? Well, are you pointing the load balancer at the correct servers? You might have one, two, three, or several servers in the backend pool that are all supposed to answer this traffic. You can see here that one of my virtual machines is in a stopped state; I stopped it myself. So one of the things that could be happening, of course, is that only two of your servers are doing the work and the third server is never called upon, causing an unequal distribution of traffic.
Now, the reason why VM3 never gets called (I can keep refreshing this all day; it's only ever going to be VM1 and VM2) is the health probe. The health probe is what decides which backend server is returning a normal response. If we go into the health probes blade, we can see that this is an HTTP probe. We have the choice between TCP and HTTP, and since this is a standard load balancer, we can also do HTTPS, which is not available in a basic load balancer. So this probe is going to check, using the HTTP protocol, the root document on port 80 of each of those servers. It checks every 5 seconds, and two consecutive failures basically removes that server from the distribution. So let's go back to backend pools.
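The probe behavior described above, checking every 5 seconds and removing a server after two consecutive failures, can be sketched conceptually in Python. This is an illustration of the logic, not Azure's actual implementation, and the server names are made up:

```python
def probe_backends(health_checks, failure_threshold=2):
    """Given a dict of server name -> list of recent probe results
    (True = healthy response, False = failure), return the servers that
    stay in rotation. A server is removed once it has accumulated
    `failure_threshold` consecutive failures at the end of its history."""
    in_rotation = []
    for server, results in health_checks.items():
        # Count consecutive failures at the tail of the probe history
        consecutive_failures = 0
        for ok in reversed(results):
            if ok:
                break
            consecutive_failures += 1
        if consecutive_failures < failure_threshold:
            in_rotation.append(server)
    return in_rotation

# VM3 is stopped, so its last two HTTP probes on port 80 failed
checks = {
    "vm1": [True, True, True],
    "vm2": [True, False, True],   # one blip, still healthy
    "vm3": [True, False, False],  # two consecutive failures: removed
}
print(probe_backends(checks))  # ['vm1', 'vm2']
```

This is exactly why refreshing the browser only ever shows VM1 and VM2: the probe has silently pulled VM3 out of rotation.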
This server that is deallocated has been kicked out of the pool based on the health probe. I can actually click on it and start it up. This is a DS1, a small one-vCPU machine, so it's going to take a minute to start, but once it's running, hopefully we'll see VM3 start to get responses from our calls. Now, the fourth critical part of a load balancer, of course, is the load balancing rules. We saw the rule connected to the frontend IP address: this rule basically takes the frontend address and sends all inbound traffic on port 80 to port 80 on the servers. If you're trying to do something where you're accepting traffic on, say, port 8080 on the back end and port 80 on the front end, that's closer to a network address translation setup.
But the ports don't have to be the same, and you can actually do that type of redirect in a load balancing rule. You just have to make sure that your servers are listening on that port, and that the rule points at the pool that contains those virtual machines. So make sure your load balancer rule goes to the right pool. Again, the health probe has been chosen here. Session persistence determines whether the same user gets sent to the same server every time, or whether any server can respond to that user. The idle timeout setting governs how long the client and the server keep their connection open, as opposed to it being closed and then having to be reopened. You wouldn't want a long idle timeout if you've got a great many clients, because that's a lot of open connections the server has to keep in memory, and they take up resources.
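The session persistence choice can be illustrated with a small hash-based sketch. Azure's load balancer distributes connections by hashing connection fields; with persistence enabled, fewer fields feed the hash, so the same client always lands on the same server. This is a conceptual illustration only (the hashing scheme and names here are invented for the example, not Azure's internals):

```python
import hashlib

def pick_backend(backends, src_ip, src_port, dst_ip, dst_port, protocol,
                 persistence="none"):
    """Pick a backend by hashing connection fields. With 'client_ip'
    persistence, only the source and destination IPs feed the hash, so
    a given client is pinned to one server; with 'none', the full
    5-tuple is hashed and different connections may spread out."""
    if persistence == "client_ip":
        key = f"{src_ip}|{dst_ip}"
    else:  # 'none': hash the full 5-tuple
        key = f"{src_ip}|{src_port}|{dst_ip}|{dst_port}|{protocol}"
    digest = hashlib.sha256(key.encode()).hexdigest()
    return backends[int(digest, 16) % len(backends)]

backends = ["vm1", "vm2", "vm3"]
# Same client on two different source ports: with client-IP persistence
# the chosen backend is guaranteed identical.
a = pick_backend(backends, "203.0.113.7", 50001, "40.1.2.3", 80, "tcp",
                 persistence="client_ip")
b = pick_backend(backends, "203.0.113.7", 50002, "40.1.2.3", 80, "tcp",
                 persistence="client_ip")
assert a == b
```

With persistence set to "none", those two connections hash differently and could land on different servers, which is the default, more evenly distributed behavior.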
All right, so we're going to leave the load balancing rules as they are. If I go back to the backend pool and open it up, I see that VM3 is now running. So it would be interesting to see if we can get VM3 to respond in our browser. And yes, as soon as I hit refresh, VM3 is one of the ones that responds, and in fact the quickest to respond right now. The final section is NAT rules. We don't have any set up, but network address translation is, more specifically, when you take incoming traffic sent to a selected IP address and port combination and map it to a port on one specific virtual machine, as opposed to the whole group of them. So if you're looking for that kind of setup, this is where you would debug any problems with it. We don't have any of those.
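The difference from a load balancing rule is that an inbound NAT rule is a one-to-one port mapping: one frontend port goes straight to one VM rather than being spread across the pool. A common pattern is giving each VM its own management port. As a conceptual sketch (the port numbers and VM names here are hypothetical):

```python
# Each frontend port maps to exactly one (vm, port) target, e.g. SSH
# on each individual VM behind a single public IP.
nat_rules = {
    50001: ("vm1", 22),  # frontend port 50001 -> SSH on vm1
    50002: ("vm2", 22),  # frontend port 50002 -> SSH on vm2
    50003: ("vm3", 22),  # frontend port 50003 -> SSH on vm3
}

def route_inbound(frontend_port):
    """Return the (vm, port) target for a NAT-mapped frontend port,
    or None if no rule matches and the traffic is not forwarded."""
    return nat_rules.get(frontend_port)

print(route_inbound(50002))  # ('vm2', 22)
print(route_inbound(8080))   # None: no NAT rule for this port
```

Contrast this with the load balancing rule earlier, where port 80 traffic could be answered by any healthy VM in the pool.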
Now, if you do need to configure rules for traffic leaving the virtual machines and traveling out through the load balancer, you can set them up under outbound rules. So, in terms of debugging: check that your front end is set up correctly with the correct load balancing rule, that your back end is set up correctly with all your servers in a running and proper state, that your health probes point to a location that correctly tells your load balancer whether or not each server is healthy, and finally that the rules themselves allow through the traffic you want and block the traffic you don't. For instance, right now we don't have an HTTPS rule, so HTTPS traffic would basically be blocked by the load balancer. I should also mention that you can set up an alert so that if there are ever any health probe issues, such as backend servers that are not responding, a particular action group is going to be notified.
So we select the load balancer as the resource, and under the condition we can see some of the signals available, Health Probe Status being one of them, along with the number of bytes and the number of packets being sent through. Those are more administrative metrics that you wouldn't necessarily alert on, but Health Probe Status is a good one. We can see that the health probe was having problems, I restarted the server, and the number of failures went up and then came back down. So we can choose a condition: say, more than five failures in a five-minute period means you're going to get a notification through your action group, and that could be an SMS message, an email, some type of job that kicks off, etc. Setting up alerts is another way of finding out that health probes are failing and your load balancer is having difficulties.
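The alert condition just described, more than five failures within a five-minute window, is simple threshold-over-window logic. A minimal sketch of that evaluation (an illustration of the condition, not how Azure Monitor evaluates it internally):

```python
def should_alert(failure_timestamps, now, window_seconds=300, threshold=5):
    """Fire an alert if MORE THAN `threshold` probe failures fall
    inside the trailing window (here, 5 failures in a 5-minute period).
    Timestamps are seconds; `now` is the evaluation time."""
    recent = [t for t in failure_timestamps if now - t <= window_seconds]
    return len(recent) > threshold

# Six failures in the last five minutes: exceeds the threshold of five
failures = [10, 60, 120, 180, 240, 290]
print(should_alert(failures, now=300))       # True -> action group notified
print(should_alert(failures[:3], now=300))   # False -> only three failures
```

When the condition evaluates true, the action group takes over: SMS, email, or a triggered job, as mentioned above.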
So we've just seen the load balancer, which is what's called a layer 4 load balancer: it operates at the transport layer. A layer 4 load balancer only understands things such as IP addresses and ports. So when you're setting up the rules for load balancing, you choose the source IP, source port, destination IP, destination port, and protocol type, which is TCP or UDP. That is the transport layer of the network stack. What the load balancer does not understand is a URL; it doesn't understand the path. If you need to do load balancing based on what the user has typed into the URL, you need to go to a higher level of the network stack, and that's called an application gateway. So if I go into Application Gateway here: this is also a load balancer, but it operates at layer 7. It understands HTTP.
Now, the other benefit of an application gateway is that you can optionally have a firewall associated with it. With application gateway there is no free tier. So unlike the basic load balancer, which doesn't cost anything, any application gateway you choose is going to have a price no matter what. I'll create a new application gateway, give it a name and a region; I'm going to stick to East US 2. Now, the first major decision we have is the tier. Based on our choices, we're going to have the Standard or Standard V2 tiers, or the Web Application Firewall (WAF) versions of those. The standard application gateway can direct traffic like a load balancer, whereas the Web Application Firewall can actually parse the traffic coming through and block traffic that is known to be of a suspicious type, like a SQL injection attack or cross-site scripting.
Those types of standard attacks, a web application firewall can block. I'm just going to choose the Standard tier. Another thing that application gateway supports that a load balancer does not is scaling of the gateway itself. If I say no to autoscaling, then I'm going to have, in this case, two static instances of the application gateway. If I say yes, I can scale, in this case from zero to ten instances, so that if the traffic demands it, Azure will basically spin up more application gateway instances to support the level of traffic. I can also place application gateways into specific availability zones. So if I'm setting up my environment for high availability and I want to place virtual machines into those zones and put my application gateway into zones as well, I can build a true high-availability environment, because I'm actually physically placing those services where I need them to be.
I'm going to let Microsoft decide where that goes. Another difference between an application gateway and a load balancer is that an application gateway is like a device that lives on your virtual network. So you're going to need virtual machines and a virtual network. Now, I don't currently have the virtual machines created yet, but let's create a VNet for it. I'll accept the defaults here, so it's going to create a VNet with a default subnet. When we set up the load balancer in the last video, we used a template, and it did all of that for us. We can have an internal application gateway, which gets a private IP address (the Standard V2 tier supports internal front ends), or we can have a public application gateway, in which case we have to give the public IP a name.
We haven't created virtual machines for this yet, so we're not going to be able to define what's in the backend pool. Now, the cool thing about application gateway is that it has more options for the backend pool. You can have web apps (App Services) in your pool, virtual machine scale sets, and virtual machines. And what's unique to an application gateway is the IP address target: you can even have servers hosted in your own environment with the Azure application gateway as their front end, or servers hosted in AWS or some other environment. Anything that can respond to an IP address can become part of the backend pool for an application gateway. So I'm going to create this backend pool, but unfortunately I don't have targets for it yet.
So we're going to leave the backend pool empty. In this visual diagram, we've got our front end set up and our back end, which is empty right now; the part that's missing is the routing rules. We'll call this rule "rule one". Basically, the listener listens for particular traffic on a particular port. This listener is going to be based off the public IP, listening on port 80. Custom error page URL? No. And which target do we want? We're going to send that traffic to port 80 on the backend pool. And on what basis are we going to send that traffic? In the HTTP settings we'll say HTTP traffic on port 80. Now remember, in the load balancer we had session persistence; here the equivalent is cookie-based affinity.
Essentially this sends traffic from a given client to the same server every time. It's actually not a great practice if you can avoid it. Sometimes servers keep things cached and so you can't have clients bouncing between multiple servers, but if you can avoid that, it's better for a lot of reasons. Connection draining is the concept where you want to remove a server from your backend pool, but you don't necessarily want to just cut off the people who are using that server. So you drain the server: you don't accept any new requests, but you let the existing sessions finish normally. It's going to take longer for the operation to finish and for your server to be taken offline, but it's much friendlier to your users.
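The connection-draining lifecycle just described can be sketched as a tiny state machine. This is a conceptual model of the behavior (class and method names are invented for the illustration), not the application gateway's implementation:

```python
class Backend:
    """Connection-draining sketch: a draining backend rejects new
    sessions but keeps serving the ones already open, and only goes
    offline once those sessions have finished."""
    def __init__(self, name):
        self.name = name
        self.draining = False
        self.active_sessions = set()

    def accept(self, session_id):
        """Try to start a new session; refused while draining."""
        if self.draining:
            return False  # new requests go to other backends instead
        self.active_sessions.add(session_id)
        return True

    def finish(self, session_id):
        """An existing session completes normally."""
        self.active_sessions.discard(session_id)

    def safe_to_remove(self):
        """Only remove from the pool once draining AND empty."""
        return self.draining and not self.active_sessions

b = Backend("vm2")
b.accept("session-1")        # a user is mid-session
b.draining = True            # we decide to take vm2 out of the pool
assert b.accept("session-2") is False  # no new sessions while draining
assert b.safe_to_remove() is False     # session-1 still in flight
b.finish("session-1")
assert b.safe_to_remove() is True      # now safe to take offline
```

In practice there is also a drain timeout so a stuck session can't keep the server online forever, but the core trade-off is as above: slower removal in exchange for not cutting off active users.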
All right, so I'm just going to leave the rest of the settings. Now, this is again what stands out as different in an application gateway compared to a load balancer: path-based routing. You can effectively route traffic based on different URL paths. Unfortunately, we only have the one backend target, but if we had more than one, we could say that everything under /videos goes to these servers over here, everything under /images goes to those servers over there, and the rest of the traffic goes to a third set of servers. So if we had multiple backend pools, we could set up paths that route that traffic to those various places.
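The /videos and /images example above boils down to prefix matching on the request path, with a default pool for everything that doesn't match. A conceptual sketch of that routing decision (pool names are illustrative, and this is an illustration of the idea rather than the gateway's actual matching engine):

```python
def route(path, path_rules, default_pool):
    """Path-based routing sketch: match the URL path against configured
    path prefixes (longest match wins); anything unmatched goes to the
    default backend pool."""
    best, best_len = default_pool, -1
    for prefix, pool in path_rules.items():
        if path.startswith(prefix) and len(prefix) > best_len:
            best, best_len = pool, len(prefix)
    return best

rules = {
    "/videos": "video-pool",   # servers tuned for video delivery
    "/images": "image-pool",   # servers tuned for image delivery
}
print(route("/videos/intro.mp4", rules, "default-pool"))  # video-pool
print(route("/images/logo.png", rules, "default-pool"))   # image-pool
print(route("/about", rules, "default-pool"))             # default-pool
```

A layer 4 load balancer cannot make this decision at all, because the path only exists inside the HTTP payload, which is exactly why this feature requires a layer 7 device.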
Just like with any resource in Azure, we can add tags, and finally we can click Review and Create. So you can see this is a bit more complicated to set up than a load balancer, but you get path-based routing and the firewall options, and we haven't even talked about SSL termination, where the application gateway can hold the secure certificates and negotiate the secure connection with the client, while the connection from the application gateway to the back end is either unencrypted or has its own security settings. It also has autoscaling and zone redundancy. There are a lot of benefits to an application gateway over a load balancer, and that's how easy it is to create one.