AZ-104 Microsoft Azure Administrator Associate – Configure load balancing part 1
So in this section of the course we're going to talk about load balancing. Now there are two kinds of load balancers within Microsoft Azure. The Load Balancer is a layer 4 load balancer, and there's another one called Application Gateway, which is a layer 7 load balancer. Now these have a lot of things in common, but there are some differences. We will talk about the load balancer first, and we'll talk about the Application Gateway after that. Now what is load balancing? Basically, load balancing is the concept where you have traffic coming in from another source. In this case it could be the internet, or it could be from another application elsewhere within your Azure network, and that traffic is trying to reach one server. But instead of having a single server doing all the work, you've decided to distribute the load across multiple servers: two or more servers doing the same job.
The load balancer is what does the distribution, to ensure that one server doesn't get overwhelmed with all of the traffic. Now in this case the load balancer, which is layer 4, only makes those determinations based on five factors: the source IP and port, the destination IP and port, and the protocol. Based on that five-tuple, it decides which server in the backend pool is going to receive the traffic, using an algorithm to distribute the load. That means that the load balancer is a fairly dumb device; there's not a lot of intelligence here. It really is just: someone comes in the door, send them to server one; someone else comes in the door, send them to server two; someone else comes in the door, send them to server three. That's called a round-robin algorithm.
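The five-tuple idea above can be sketched in a few lines of Python. This is purely conceptual; the hash function and every name here are illustrative, not Azure's actual implementation:

```python
import hashlib

def pick_backend(src_ip, src_port, dst_ip, dst_port, protocol, backends):
    """Hash the five-tuple and map it onto one of the backend servers.
    Conceptual sketch only -- not Azure's real hashing algorithm."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{protocol}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return backends[digest % len(backends)]

backends = ["server1", "server2", "server3"]
# The same five-tuple always lands on the same backend server
a = pick_backend("203.0.113.7", 50123, "40.0.0.1", 80, "TCP", backends)
b = pick_backend("203.0.113.7", 50123, "40.0.0.1", 80, "TCP", backends)
assert a == b
```

The key property is determinism: identical tuples hash to the same server, while different clients spread across the pool.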
There are other ways of handling this. Now that's one of the benefits of a load balancer: it's basically able to distribute the traffic that would normally go to a single server across multiple servers. The other benefit is that it's able to detect if something goes wrong with any of these servers and basically drop it out of the rotation. So let's say at some point server three stops responding; you send it traffic, a timeout happens, and it never returns a response. We can then make the determination, after one or two or more failures, that server three is dead, and we can basically just cut it out and only send traffic to servers one and two. So not only does it distribute the traffic, but it also monitors the health of the recipients and can cut them out of the rotation when they are no longer healthy.
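The drop-out-of-rotation behaviour can be sketched like this. It's a conceptual illustration; `is_healthy` stands in for whatever probe mechanism is actually in place:

```python
def healthy_backends(backends, is_healthy):
    """Keep only the servers that still pass their health check.
    Illustrative names, not an Azure API."""
    return [b for b in backends if is_healthy(b)]

status = {"server1": True, "server2": True, "server3": False}
pool = healthy_backends(["server1", "server2", "server3"], status.get)
# server3 has stopped responding, so it is cut out of the rotation
assert pool == ["server1", "server2"]
```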
So that is the primary function of a load balancer. Going into the portal under Networking, we can see that we have these options. Let's create ourselves a load balancer. Give it a name; let's say azsjdnewlb, a classic name for a load balancer. The first decision that we have to make is whether it's a public or an internal (private) load balancer. Now, a public load balancer has a public IP address and a public name. That means that it's addressable from the open internet. If we made it an internal load balancer, it would have a private IP address, no public IP address, and it would not be accessible from the open internet at all. Okay, so we want a public load balancer, and then we need to create a public IP address. So let's give the IP address a name. We just want a basic address; we do not need a static IP that never changes.
We are fine with an IP that will occasionally change, because we're going to use the domain name of the IP address to reach it, not the IP address itself. We can optionally choose an IPv6 address. Within Microsoft Azure, the load balancer is one of the few devices that supports IPv6. So if you have an application that you need to work over IPv6, then you are forced to put a load balancer in front of it. Even if there's only one virtual machine, you would put a load balancer in front of it to get an IPv6 address; the load balancer then translates from its IPv6 address to the private IP addresses of the machines behind it. As is my usual habit, I created a brand new resource group for this. I call it New York App. So I'm going to create this load balancer as the first resource inside my New York App resource group, and let's leave it in East US 2.
Now, while we're waiting for that load balancer to get created, we did notice that Azure gave us the choice between a Basic and a Standard load balancer. Now, the Standard load balancer is a relatively new feature set within Azure, and Basic is the default, but you can choose Standard. Let's look at this web page that shows the differences. So, a Basic load balancer supports up to 100 instances in the backend pool, whereas Standard raises that tenfold to 1,000. (By the way, the table formatting here is a bit skewed, but that's not our problem.) With the Standard load balancer, we can address any virtual machine in a virtual network, and it can be a blend of virtual machines, availability sets and virtual machine scale sets, whereas with the Basic load balancer you get the choice of a single availability set or a single virtual machine scale set. Standard also supports HTTPS health probes.
I guess if your web app or your website only supports HTTPS traffic, then you are forced to use a Standard load balancer. It also has some additional keep-alive options for the health probe, and it supports the new availability zones. We haven't talked about those yet, but they're another way that applications can be distributed for high-availability scenarios; Standard load balancers support that, Basic load balancers do not. We've got integration with Azure Monitor, which is Microsoft's new monitoring dashboard; we can pull logging events from this load balancer into that. And Standard is secure by default. Basic load balancers are open by default, and you would have to set up your network security group to shut down traffic to them, whereas with Standard, traffic would need to be whitelisted in order to come in.
This is the new norm: a lot of resources are blocked by default, denied by default, as opposed to a few years ago when a lot of things were open by default. We can also start to see on this comparison page that there are some very advanced things in terms of outbound rules, basically allowing traffic to be sent outbound based on addresses or public IP prefixes, et cetera. Management operations are a lot quicker. And there's an SLA. Remember, the virtual machine SLA requires two or more virtual machines; here the SLA of 99.99% is on the load balancer itself. There's also pricing, and more complex pricing, whereas the Basic load balancer is free. So you do have some decisions to make. In pretty much any production environment you probably want to pick the Standard load balancer simply because of the SLA, although you have to investigate the pricing for that.
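Why the SLA figure matters: when components depend on each other, their availabilities multiply, so the end-to-end promise is lower than any single part's. A quick sketch, with figures that are illustrative examples rather than quotes from Azure's SLA documents:

```python
# Composite availability multiplies across dependent components.
# The figures below are illustrative, not official Azure SLA quotes.
lb_sla = 0.9999   # e.g. a load balancer promising 99.99%
vm_sla = 0.9995   # e.g. two or more VMs in an availability set

composite = lb_sla * vm_sla
print(f"end-to-end: {composite:.4%}")
# The chain is always weaker than its strongest link
assert composite < min(lb_sla, vm_sla)
```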
So we can see that our load balancer got created, with our public IP address associated with it. Let's go into the load balancer. Now, we've only created a load balancer, but we haven't hooked it up to anything. There are no servers, no backend pools of virtual machines waiting to receive traffic. We haven't set up the health probes to determine whether those servers are healthy. So what I'm going to do is create three virtual machines in an availability set, and then I'll show you how to add that availability set to this. Again, it's a Basic SKU load balancer, so we're restricted to individual virtual machines or virtual machines in an availability set. So I'm going to pause here and create three virtual machines in an availability set.
All right, so I've created three new machines in an availability set. Let's add them to the load balancer. We go under Backend pools and we say Add. We have to give it a name: az-load-balancer-backend-pool, IPv4. And then it's a matter of associating it with one of three things. Now, I can associate it with an availability set, which is what we're going to do in this case. We could associate it with a single virtual machine, which we'd have to do multiple times for the load balancer to have multiple virtual machines, or we could associate it with a virtual machine scale set. Remember, we saw that with the Basic load balancer, we choose one of these options. So the question is, which availability set? Now, the availability set must exist within the same region as your load balancer. That makes sense, because you can think of your load balancer as running on a server within the East US 2 region.
You want your virtual machines also in the East US 2 region. Don't have your virtual machines in Europe while your load balancer is in the United States; that would be a very high-latency thing to do, and Microsoft doesn't even allow it. So the availability set must exist in East US 2. And we're not done yet; we still have to add the virtual machines inside of this. Okay, so we've got our availability set, and now we can choose our virtual machines. I've created azvm1, azvm2 and azvm3. There's one network interface card on each of those, so we're going to choose the one network interface card, then go to the second virtual machine and choose its network interface card, and then the third virtual machine and its network interface card. So now we have three machines from the availability set that are going to be part of our backend pool. Let's click OK.
Okay, I guess it doesn't like that, so let's delete that and click OK. It's going to take a few minutes for this backend pool to be constructed. While that's running, we can see that some deployments are going on here; let that run. Let's talk about health probes. So we're going to go into the health probes section. Remember we said that load balancers are pretty dumb machines, except they do have the ability to check on the health of the virtual machines in their backend pool. If they detect that one is not responding within a reasonable time, then they will kick it out of the backend pool. So let's create a health probe. Got to give it a name as usual, so let's call it Health. It's IPv4. Now, the health probe can check on the status of a port; in this case, it's just going to check port 80 over the TCP protocol, and with this setting, every 5 seconds it's going to touch port 80.
And if it fails to open port 80 twice consecutively, so after about 10 seconds of failures, it says: ah, unhealthy; the health probe has been triggered. So within 10 seconds, this will detect that port 80 is no longer responding. Another popular way to do this is with an HTTP probe. Even if port 80 is responding, that doesn't mean that your application is healthy. So you may want to pull the web page that's at the root, the home page if you will, and if the probe is able to get a 200 success code from the home page, then that's considered healthy; two consecutive failures means unhealthy. A lot of people will create a standalone web page like health.htm or probe.htm as part of the application, so that you're not pulling down your home page, which might be a fairly heavy page and might cause your application some performance concerns if you're hitting it every 5 seconds.
So creating a standalone page simply for the health probe to check on is a smart alternative. Okay, so you'll see that there are pros and cons to checking just the port, checking the application, or having some kind of proxy page that is designed to fail if your application isn't up. Be careful: a static HTML page could very well continue working while your home page or your .NET application is not responding. So there's some strategy in choosing your health probe. Let's go with the TCP probe. It doesn't matter what application we have running; as long as port 80 is responding, those servers are considered healthy. So now we're creating a health probe that is simply going to be happy if port 80 is responding.
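The interval-and-threshold behaviour described above can be sketched as follows. It's a conceptual illustration; the parameter names are mine, not Azure's, though the defaults mirror the 5-second interval and two-failure threshold from the demo:

```python
def probe_status(results, unhealthy_threshold=2):
    """Evaluate a sequence of probe results (True = port answered).
    A backend is marked unhealthy after `unhealthy_threshold`
    consecutive failures -- with 5-second probes and a threshold of 2,
    that's roughly 10 seconds to detect a dead server.
    Conceptual sketch only."""
    consecutive_failures = 0
    for ok in results:
        consecutive_failures = 0 if ok else consecutive_failures + 1
        if consecutive_failures >= unhealthy_threshold:
            return "unhealthy"
    return "healthy"

assert probe_status([True, True, False, True]) == "healthy"   # one blip is tolerated
assert probe_status([True, False, False]) == "unhealthy"      # two in a row trips it
```

Requiring consecutive failures is what keeps a single dropped packet from needlessly ejecting a healthy server.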
Now, for these virtual machines that I've created, I actually haven't logged into them to set up web servers; I haven't added the web server role. So I imagine port 80 is going to fail. While we're waiting for this backend pool to finish updating and for the health probe to be created, we know that the servers are not going to respond on port 80 because I haven't set them up yet. So I would expect all three servers, within 10 seconds of going live, to basically get kicked out of the pool because they are not responding on port 80. All right, we have our servers set up, we have our health probe. Now let's set up the load balancing rules, because as it stands right now, the load balancer isn't configured; it doesn't have any rules, and so we're going to have to create at least one load balancing rule. It's an IPv4 load balancer.
We've got our IP address, which is the load balancer front end. We are accepting traffic from the public over port 80, and we are sending traffic to the VMs over port 80. There is the possibility of what's called NAT, network address translation. We could have port 8080 set up on the VMs while the load balancer handles the public port 80; that could be a security measure. But we're going to leave it as a pass-through, where port 80 comes in and port 80 goes out. We've got our pool, which is the only pool we've set up, containing the virtual machines. We've got our health probe, which is the only health probe that we have. We do not support session persistence, so each client that comes in is going to be sent to a random virtual machine; we're not going to send a returning client to the same virtual machine.
We could of course say that the same IP address always goes to the same virtual machine if we wanted to enforce sessions, but in this case we won't. We can basically ignore the idle timeout and the floating IP settings and say OK. This is basically what's required, between the backend pool, the health probe and the load balancing rule, to get load balancing working. So once this rule gets deployed, we're going to have a working load balancer.
So right now we have a Basic load balancer that has a single public IP address, sending traffic to a set of virtual machines; it has a health probe, and we have a single load balancing rule for port 80. Okay? Now, there are times when we have virtual machines with more than one application running on them. So we can actually create another public IP address on the same load balancer. Let's add a second IP to the load balancer. We're going to choose from our existing public IPs; I created a public IP called second-ip, and we're basically going to say that second IP is now assigned to this load balancer. Okay, it's going to take a second to update, but we don't have any rules defined for that IP address yet, so right now the load balancer wouldn't know what to do with that traffic.
It wouldn't be configured yet for that traffic. Let's assume, though, that we leave our backend pool the same; we're not adding any new servers. These three virtual machines are still the ones that are going to respond to all traffic. We go into the load balancing rules, and we're going to add a second rule. Give it a unique name here. Now, instead of using the default IP address, we're going to use the second IP address that we created. Since we're sending traffic to the same backend pool, we can accept traffic on port 80, but we probably want to use a different backend port for it. So if we send it to port 8080, then we can have two applications running on our virtual machines.
One application responding to port 80 for the other rule, and a different application set up and responding to port 8080. Using different ports allows you to have multiple web servers essentially responding to traffic on the same set of virtual machines. Now, we could pretty much rely on the same health probe if we wanted to, but since this rule targets a different port, we probably should create a new health probe for port 8080. I could do that, but I won't; basically we would want a second health probe for the second application. Same settings in terms of sessions and timeouts, et cetera. Hit OK. And now we have a single load balancer that's accepting traffic on two IP addresses, two unique domain names for instance, and sending the traffic to the same backend servers using different rules.
So one rule accepts traffic on port 80 and sends it to the backend on port 80, and the other, being updated now, accepts traffic on port 80 for the second IP address but sends it to the backend pool over port 8080. So we can see that we can have multiple rules in the same load balancer, basically handling all traffic going to this backend pool, if we really want to get fancy. Now, in the Basic load balancer, we can only support this one pool; the Standard load balancer allows us to have a mixture of virtual machines and other resources, remember. Okay, so this is the top-level overview of a load balancer, which again is only handling IP addresses. You'll notice we don't have the option to configure anything based on URL, domain name, path, anything like that; it's purely doing traffic filtering based on IP addresses and ports. The Application Gateway is what allows us to do that higher-level traffic routing.
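The two-rule setup above boils down to a mapping from frontend (IP, port) pairs to backend ports. A minimal sketch, with frontend names that are illustrative placeholders rather than real addresses:

```python
# Two load balancing rules on one load balancer, both targeting the
# same backend pool. Names and ports mirror the demo conceptually.
rules = {
    ("first-ip", 80):  80,    # rule 1: pass-through, public 80 -> backend 80
    ("second-ip", 80): 8080,  # rule 2: public 80 -> backend 8080
}

def backend_port(frontend_ip, frontend_port):
    """Look up which backend port a frontend (IP, port) pair maps to."""
    return rules[(frontend_ip, frontend_port)]

assert backend_port("first-ip", 80) == 80
assert backend_port("second-ip", 80) == 8080
```

Note the lookup key never includes a URL or path: a layer 4 rule sees only addresses, ports and protocol, which is exactly the limitation the Application Gateway removes.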