CompTIA Network+ N10-008 – Ethernet Fundamentals Part 2
Spanning tree protocol, or STP. Now, the spanning tree protocol is an additional Ethernet feature, as we mentioned in the last lesson. But because it’s really important, I’ve broken it out into this video alone. It’s also known as the 802.1D protocol, so I want you to add that to your note sheet as well: spanning tree protocol, 802.1D. So what does the spanning tree protocol do? Well, it allows us to have redundant links between switches and prevents loops in the network traffic. Now why is this important? You may remember that we talked about the availability of networks being measured in nines, and we want that five nines of availability, or 99.999% uptime, which means I only get about five minutes of downtime per year, and so I have to have redundant networks to be able to meet that mission. There’s also a thing called shortest path bridging, or SPB, and this is used instead of STP for large network environments.
We’re not going to dig deep into SPB because the exam doesn’t dig deep into it. They dig deep into STP. But keep in mind that shortest path bridging works similarly to the spanning tree protocol, but on larger networks. So let’s take a look at a network without the spanning tree protocol and see why MAC address table corruption can occur. Let’s say that I have PC Two trying to send a message to PC One. You can see that there’s a redundant network here. It can take the path from switch four to switch two to switch one to PC 1, or it can take the path from switch four to switch three to switch one to PC 1. And that looks great. But if you remember how MAC addresses work inside switching tables, you’re going to remember there’s a problem here.
So when PC Two reaches out to talk to PC One, switch four learns that the CC MAC address for PC Two is coming in from it, and it’s going to broadcast that out to switch three and switch two, who learn that and put it into their MAC address tables for port two. They then tell switch one, but because it’s coming in from both sides, switch one will broadcast it out both sides, which then feeds back to switch two and switch three, resulting in an endless loop, as shown in red. Because as that data starts going back, those interfaces start figuring out, “How do I get to MAC address CC?” Well, the MAC address for CC now shows up on both interfaces of both switches, and they really don’t know where to send it. And so this switching loop happens, and we get what’s called a broadcast storm. That’s what happens without the spanning tree protocol; with the spanning tree protocol, we’re going to solve that. So let’s talk about what’s going on during this broadcast storm. If this broadcast frame is received by both switches, they start forwarding it to each other, and so it’s like, “I tell you a secret, you then tell me that same secret, I tell it to you again,” and we just keep going back and forth. More copies of that frame are forwarded each time, and it just keeps replicating and being forwarded until the entire network is consumed with copies of that “where is this?” ARP request. It just takes over the entire network and crashes it. If your switch is having this problem and you don’t have the spanning tree protocol, the only way to fix it is to literally unplug the switch, wait about 30 seconds, and plug it back in. That’s not a great way to run a network.
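To see that meltdown in action, here’s a minimal Python sketch (my own illustration, not part of the course material) that models the four-switch square from the diagram. The switch names and the simple flooding rule are assumptions; the point is just that without STP the broadcast never dies out, and hosts keep getting duplicate copies.

```python
# A minimal sketch (not a real switching stack) of the four-switch loop.
# Each switch floods a broadcast out every trunk except the one it arrived on.
links = {
    "SW1": ["SW2", "SW3"],
    "SW2": ["SW1", "SW4"],
    "SW3": ["SW1", "SW4"],
    "SW4": ["SW2", "SW3"],
}

frames = [("SW4", None)]        # PC2's broadcast enters the network at switch 4
delivered_to_hosts = 0

for hop in range(1, 9):
    next_frames = []
    for switch, came_from in frames:
        delivered_to_hosts += 1          # hosts on this switch get yet another copy
        for neighbor in links[switch]:
            if neighbor != came_from:    # flood out every other trunk port
                next_frames.append((neighbor, switch))
    frames = next_frames
    print(f"Hop {hop}: {len(frames)} frames still looping, "
          f"{delivered_to_hosts} duplicate copies delivered so far")

# The frame count never reaches zero -- the broadcast circulates forever, and
# with more redundant links the copies multiply instead of just looping.
```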
And so we have to enter STP, or spanning tree protocol. The way it works is that it uses something called a root bridge and non-root bridges. Now, a root bridge is the switch that’s elected to act as a reference point for the spanning tree. The switch with the lowest bridge ID, or BID, is going to be elected as the root bridge. The bridge ID is made up of a priority value and a MAC address, with the lowest value being considered root. So if the priorities are all equal, we’re just going to go with the lowest manufacturer’s MAC address to select the root. Now, a non-root bridge is every other switch in the topology. So one root bridge, and every other switch is a non-root bridge. Let’s take a look at how that looks. So here we have switches 1, 2, 3, and 4. How is this going to end up looking? Well, if I look at switches 2 and 3, their MAC addresses end in 2 and 3, respectively, and they all have the same default priority. So who’s going to be my root bridge? The one with the lowest MAC address, which in our case is switch number two.
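To make that election concrete, here’s a minimal Python sketch of picking the lowest bridge ID. The priority and MAC values are made up, and I’ve chosen them so switch two wins, to mirror the lesson’s diagram; this isn’t how a real switch stores them.

```python
# Bridge ID = (priority, MAC address); the lowest value wins the election.
switches = {
    "SW1": (32768, "00:1A:2B:00:00:10"),
    "SW2": (32768, "00:1A:2B:00:00:02"),
    "SW3": (32768, "00:1A:2B:00:00:03"),
    "SW4": (32768, "00:1A:2B:00:00:14"),
}

def bridge_id(priority, mac):
    # Priority is compared first; on a tie, the lowest MAC address wins
    return (priority, int(mac.replace(":", ""), 16))

root = min(switches, key=lambda name: bridge_id(*switches[name]))
print(f"Root bridge: {root}")   # SW2 here -- everything ties on priority, lowest MAC wins
```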
That makes switches number one, three, and four all non-root bridges. So when we look at the root bridge, we also have to look at the concept of root ports, designated ports, and non-designated ports. Every non-root bridge has a single root port: the port on that switch that is closest to the root bridge in terms of cost is going to be the root port. If all of the costs are equal (and costs are determined by the cable type and its throughput), then the lowest-numbered port on the switch will be chosen. Every network segment has to have at least one designated port, and the port that is closest to the root bridge in terms of cost will be the designated port. All ports on the root bridge are considered designated ports, and I’ll draw you a diagram to help you understand it better here. The non-designated ports are ports that are going to block traffic. And this is the benefit of STP; this is where your loop-free topology comes in. So let’s go back to the diagram. Here I have a single root port on a non-root bridge. The non-root bridge here is switch number three. I’ve drawn that root port in purple because it’s the lowest-numbered port, port one versus port two, and it also has the lowest cost: the cost is 19 because it’s using Cat 5, or Fast Ethernet, cabling. The faster the cable, the lower the cost.
All the other ports on the non-root bridge, switch number three, are considered non-designated, which would be red. If you think of red, it’s like a stop sign: no traffic will come through that port. When I go to the root bridge, which was switch number two, all the ports are considered designated, so those are all going to be blue in color, as shown on my diagram. So when traffic comes in from PC 2 to go to PC 1, what’s going to happen? Well, port number two on switch three, which is red, is not going to let the traffic go through. And so traffic is going to go from switch four to switch two to switch one and over to PC 1. If it tries to come all the way around from switch four to two to one to three, it’s going to be stopped at switch three because, again, that non-designated port is not going to allow it to broadcast back through. As a result, our loop is broken; it’ll make a C instead of a circle. That is the entire advantage of root and non-root bridges. Now, the ports go through a couple of states as they do this. Non-designated ports are not forwarding traffic during normal operation; that’s the red stop sign. But they do receive the bridge protocol data units, so they’re getting the information, but they’re not doing anything with it, and they’re not forwarding it. Now, if a link in the topology goes down, the non-designated port will detect that failure and determine whether it needs to transition to a forwarding state and become either a designated port or a root port. As it moves toward forwarding, it transitions through four different states. Those four states are blocking, listening, learning, and forwarding. So first, when it’s blocking, when it has that big red X on it, bridge protocol data units are received but not forwarded. Blocking is used at startup and on redundant links, as shown in the diagram a couple of slides ago. Then it’ll switch to listening. In the listening state, it’s receiving and processing those bridge protocol data units to figure out the topology, but it’s not learning MAC addresses or forwarding frames yet. So again, we have a C, not a circle.
Next, it’s going to move into learning. In the learning state, it starts populating its MAC address table, but it’s still not forwarding frames. Along the way, the switch determines its role in the spanning tree. It’s thinking, “Do I need to become a root port? Do I need to become a designated port, or should I just stay non-designated?” And then, if it decides it needs to take one of those roles, it starts forwarding the frames, and that’s called forwarding. It begins to forward the frames and takes over. So in the case of our example, we have our non-designated ports that are blocking and our designated ports that are forwarding. As a result, switch three is not transmitting traffic; everything’s going from switch four to switch two to switch one to PC 1. Now, if switch two goes down, what will end up happening is switch three will go through those four states, from blocking to listening to learning to forwarding, and it will then be able to start forwarding that traffic. It will take over, its ports will become the root ports and the designated ports, and the network can keep forwarding.
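If it helps, here’s a tiny Python sketch of those four port states in order. It’s only an ordered list walked from one state to the next, not a real implementation; actual STP also waits on forward-delay timers between states.

```python
# The four STP port states a blocked port walks through when it needs to
# take over after a failure. Purely illustrative.
STP_STATES = ["blocking", "listening", "learning", "forwarding"]

def transition_to_forwarding(current="blocking"):
    """Yield each state in order, starting from the current one."""
    for state in STP_STATES[STP_STATES.index(current):]:
        yield state

# Example: switch three's non-designated port reacting to switch two going down
for state in transition_to_forwarding("blocking"):
    print(f"Port state: {state}")
```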
Now, when we talk about link cost, link cost is associated with the speed of a link. The lower the link speed, the higher the cost. So, as you can see here, I have Ethernet, which is Cat 3 cabling at ten megabits per second, on the bottom of the network, and that has a cost of 100. When I go to Fast Ethernet, which is Cat 5 at 100 megabits per second, the cost goes from 100 down to 19. Now, you don’t necessarily have to memorise these numbers for cost, but you should realise that the lower the speed, the higher the number, and the higher the speed, the lower the number. Long STP is being adopted as higher link speeds are developed, and we didn’t really have room for it here. As you can see, with a fibre connection or a Cat 7 connection at ten gigabits per second, we had a cost of two. Well, if I went to 100 gigabits, I really couldn’t go much lower than two. So, with long STP, the values range from 2,000,000 for that same ten-megabit Cat 3 connection all the way down to two for a ten-terabit-per-second link. So again, don’t worry too much about the STP cost values themselves. If you’re working on a network design, you’ll google the cost tables and have them in your hand; you don’t need to memorise those for the exam. But do remember: lower speed, higher cost; higher speed, lower cost. It is an inverse relationship.
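Here’s a quick Python cheat sheet of those costs showing the inverse relationship. The numbers are the standard short-STP and long-STP path costs; the cable labels are just reminders from the slide, not something you need to memorise.

```python
# Classic (short) STP path costs mentioned in the lesson...
short_stp_cost = {
    "10 Mbps (Ethernet, Cat 3)": 100,
    "100 Mbps (Fast Ethernet, Cat 5)": 19,
    "1 Gbps (Gigabit Ethernet)": 4,
    "10 Gbps (10-Gig / fibre)": 2,
}

# ...and a few long-STP values, which leave room for faster links.
long_stp_cost = {
    "10 Mbps": 2_000_000,
    "100 Mbps": 200_000,
    "1 Gbps": 20_000,
    "10 Gbps": 2_000,
    "10 Tbps": 2,
}

for link, cost in short_stp_cost.items():
    print(f"{link:35} -> cost {cost}")   # faster link, lower cost
```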
Virtual LAN, or VLAN. The concept of virtual local area networks, or VLANs, is the next major concept that we need to cover. Now, we talked about switch ports all being in a single broadcast domain, and to break them up, we actually have to use a layer 3 switch or a router. Well, VLANs allow you to break out certain ports to be used for different broadcast domains, almost like having a virtual router inside the switch. Before VLANs, we had to actually use additional routers, cables, and switches to separate our departments, functions, or subnets. But with the advent of VLANs on layer 3 switches, and on a lot of layer 2 switches too, you can now have different logical networks that share the same physical hardware. And this is going to provide you with additional security and efficiency that you just don’t get with everything in one broadcast domain. So before we had VLANs, you had diagrams that looked like this.
And so if I had the IT department and the human resources department and I wanted to keep them separate for security, I had to plug them into different switches and different routers and route between them. Now, if I had IT and HR on floors one and two, I might have switches one and three on the second floor and switches two and four on the bottom floor. But with VLANs, I can actually consolidate all of that into just two switches, one per floor, and then logically separate those out. So notice how the IT department is wired into the switch and how it logically trunks down from switch one to switch two all the way to the router, keeping it logically separated. Even though these switch ports are now in different VLANs, they’re all in the same physical hardware and share the same cable. That purple and blue cable going from the switch to the router is actually only one cable; this is a logical diagram, not a physical one. VLAN trunking is what lets us do that. The protocol is called 802.1Q, and it’s how we can merge those VLANs onto a single cable that we call a trunk, carrying multiple VLANs over the same cable.
This again reduces the amount of physical infrastructure while still giving us logical separation. Now, VLAN trunking, or 802.1Q, is something you need to be able to recognise on test day, so add that to your note sheet. VLAN frames are tagged with a four-byte identifier made up of two pieces: the TPID and the TCI, the tag protocol identifier and the tag control information. When you have one VLAN that’s left untagged, that’s called your native VLAN, which you’ll also sometimes see referred to as VLAN 0. You can see the packets here on the screen. Again, you don’t need to memorise the way these packets are laid out; this is just a graphical representation of what 802.1Q looks like in real life. What you really need to know about VLANs is that they’re great for security, and if you’re using VLAN trunking, you’re using 802.1Q, the standard for VLANs.
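If you’re curious what that four-byte tag actually looks like, here’s a minimal Python sketch that builds one: a two-byte TPID (always 0x8100 for 802.1Q) followed by the two-byte TCI, which carries the priority, the drop-eligible bit, and the 12-bit VLAN ID. The VLAN ID of 20 is just an arbitrary example value.

```python
import struct

TPID = 0x8100   # Tag Protocol Identifier for 802.1Q

def dot1q_tag(vlan_id, priority=0, dei=0):
    # TCI = 3-bit priority, 1-bit drop-eligible indicator, 12-bit VLAN ID
    tci = (priority << 13) | (dei << 12) | (vlan_id & 0x0FFF)
    return struct.pack("!HH", TPID, tci)   # network byte order, 4 bytes total

tag = dot1q_tag(vlan_id=20)
print(tag.hex())   # '81000014' -> TPID 0x8100, VLAN ID 20 (0x014)
```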
Specialized network devices. Now, there are many different types of network devices out there beyond the standard routers, switches, hubs, bridges, servers, and workstations. Other devices perform specific functions that will improve our usability, our performance, and our security. Many of these devices include things like VPN concentrators, firewalls, DNS servers, DHCP servers, proxy servers, content engines, and content switches.
We’re going to talk about each of those in this lesson. The first is a VPN concentrator. Now, a VPN is a virtual private network, and it creates a secure, virtual tunnel over an untrusted network like the Internet. So if you’re at home and you want to be able to dial into your office, either over a dial-up connection or over the Internet using broadband, you can do that using a VPN connection, which creates an encrypted tunnel. Nobody on the Internet can see what you’re doing, but you can get from your home to your office securely. One of the devices that terminates this VPN tunnel is called a VPN concentrator, and this will allow you to have multiple VPN connections terminate in one location. Now, a lot of firewalls will also perform this function, but logically, the firewall would still be functioning as a VPN concentrator.
So if I have a headquarters in Washington, DC, and two other locations, one in Los Angeles and one in New York City, I can create site-to-site tunnels and have those locations connect back to my headquarters securely through the cloud, which is the Internet. We’re going to talk more about VPNs and VPN security in a separate lesson because they’re just extremely important to the security of your networks and remote access. Firewalls are next. Now, firewalls are network security appliances that are placed at the boundary of your network. Firewalls can be software or hardware, and they come in stateful and stateless varieties. We’re going to talk specifically about firewalls in depth in a future lesson as well, when we get to our section on network security. But for right now, I want you to remember that firewalls allow traffic to go from inside the network out to the Internet, and they can block stuff coming from the outside Internet into your network. Now, on the screen, I have three different ways that you’ll see firewalls shown in diagrams. The first is what Cisco likes to use, the PIX firewall icon, which looks almost like a diode with that triangle-and-line symbol on the left. The next is that some people will just put a brick wall in their diagrams, and that represents a firewall.
And the third is when you have a firewall combined with your router, which basically looks like a router with a brick wall wrapped around it. All three of these icons are used to represent firewalls, although the most common is probably going to be the brick wall. Now, besides a regular firewall, we have these things called NGFWs, or next-generation (next-gen) firewalls. These can conduct deep packet inspection at layer seven. A regular firewall is really going to block things based on the IP address and maybe the port and protocol, but a next-generation firewall can look through your traffic to detect and prevent attacks. They are much more powerful than your basic stateless firewalls or even your stateful firewalls, and they will continually connect to cloud resources for the latest threat information to make sure they know what all those signatures are. Again, we’re going to talk a lot more about firewalls and network security later in the course. Next, we have IDS and IPS: intrusion detection systems and intrusion prevention systems.
Now, IDSs and IPSs can recognise attacks through signatures and anomalies, and an IPS can respond to them as well. So a detection system can only see an attack and log it, but a prevention system can see it, log it, and try to stop it by shutting off ports and protocols. These can be host-based or network-based devices. In your network diagrams, they’re drawn with those two lines going left to right and a circle through them; that icon is an IDS or IPS sensor. The same symbol is used for both devices, and normally you’ll see the letters IDS or IPS written on it to tell you which one it is. Again, when we get to network security, we’re going to dig deep into IDS and IPS because they’re a very important security feature in our networks. Next, we have the Domain Name System, or DNS. If you do your own host name resolution, you’ll have a DNS server in your network. This is going to convert domain names to IP addresses. And if you want to think of it this way, it’s similar to the contact list on your phone.
So nowadays, how many phone numbers do you have memorized? Probably not a whole lot, because you pull out your cellphone, you scroll to the person’s name, you tap it with your finger, and it dials them up, right? So, for instance, if I want to call my wife, I scroll to her name in my contacts, push her name on my phone, and immediately it dials her phone number. I don’t have to memorise those ten digits. That’s because we, as people, remember names better than numbers. And the same thing goes for computers, except they remember numbers better than names. Now, if I told you to go to my website and said, “Hey, go to diontraining.com,” that’s a lot easier for you to remember than having to remember that it’s 66.123.45.237. That’s not nearly as catchy as diontraining.com, is it? And so the way DNS works is that when my computer is told to go to diontraining.com, it reaches out to a DNS server and says, “Hey, who is diontraining.com?” And the server replies, “Here’s the address; it’s 66.123.45.237.” And then the computer can go on its way. That’s all DNS does: convert names to numbers and numbers to names.
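Here’s what that looks like from the computer’s point of view, as a minimal Python sketch. The address you get back depends on whatever the real DNS servers return when you run it, so don’t expect the example IP from the lesson.

```python
import socket

# Ask the system resolver (which in turn asks a DNS server) for the
# IPv4 address behind a name. Requires network access to actually run.
name = "diontraining.com"
ip_address = socket.gethostbyname(name)
print(f"{name} resolves to {ip_address}")
```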
Now, one of the concepts with DNS is what’s called an FQDN, or fully qualified domain name. This is when a domain name is registered with a top-level domain provider. The most common top-level domain is .com, but we also have .mil, .edu, .org, and .net. So we’ll use the example of Dion Training. We have multiple servers. We have a web server at www.diontraining.com. The top-level domain extension is .com, and the domain name that I use is diontraining. To be fully qualified, it would be www.diontraining.com; if you want to get to my web server, that’s where you go. If you want to go to my file server, you might go to ftp.diontraining.com. If you want to go to my mail server, it would be mail.diontraining.com. All three of those are fully qualified domain names: they have a service (the host name), a domain name, and a top-level domain. And that works the same way no matter which domain you’re looking at across the Internet. Now, if I wanted to take it a step further, I could look at it from the perspective of a URL, which is a uniform resource locator. So again, taking my web server example, www.diontraining.com is the fully qualified domain name, but that doesn’t tell you how to access it. Should you access it securely or insecurely? Well, if you’re going to give me your username and password, you should do it securely. So we add https:// to the front, https://www.diontraining.com, and that becomes a uniform resource locator because it has that HTTPS at the beginning.
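And here’s a tiny Python sketch showing how a URL breaks down into the “how” (the scheme) and the “where” (the FQDN), using the standard library’s URL parser.

```python
from urllib.parse import urlparse

# A URL wraps a fully qualified domain name in an access method.
url = "https://www.diontraining.com"
parts = urlparse(url)
print(parts.scheme)    # 'https'  -> access it securely
print(parts.hostname)  # 'www.diontraining.com' -> the FQDN (host + domain + TLD)
```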
Next, we have DNS record types. DNS is a very complicated system, but this page is going to be your cheat sheet for it. So we have A records, which map a hostname to an IPv4 address; diontraining.com resolves to 66.123.45.237, and so on. That’s an A record. There’s also a quad-A record, or AAAA; it’s the same thing, but for IPv6 addresses. Then there’s a CNAME, which stands for canonical name, and it’s an alias for an existing record. For example, going to diontraining.com is the same as going to www.diontraining.com, and I can make them equivalent by using that CNAME record instead of creating a second A record. Next, we have an MX record, which is a mail exchange record that maps your domain name to an email server. So if you’re dealing with email, it’s MX, for mail exchange. NS is the name server record; it identifies the authoritative DNS name servers for the domain. So anytime you see NS, I want you to think about name servers for domains. PTR stands for pointer. It’s a pointer record that points to a canonical name, and it’s used for reverse DNS lookups, so the CNAMEs and the pointers go together. Next, we have the SOA, or Start of Authority.
So this provides authoritative information about the DNS zone, and this is where we keep things like our contact information, our email addresses, our primary name servers, how often to refresh, and all that other administrative stuff. Next, we have the SRV, which is a generalised service record. This is a newer record type that doesn’t require service-specific records like MX, CNAME, or A records, and it’s part of a revision to DNS. It hasn’t really taken off fully yet, and people are still using A records, CNAMEs, pointers, and MX records, so you’re not going to see SRV records very commonly. And finally, we have TXT, which was basically designed to hold human-readable text, and is now used to hold machine-readable information like DomainKeys Identified Mail (DKIM), Sender Policy Framework (SPF), and opportunistic encryption.
So it was originally designed for humans, but we’re now using it for these other machine purposes as well. Again, you’re not going to run into TXT records very frequently. The big ones in DNS are A records, AAAA records, CNAME records, MX records, and NS or name server records. If you have a good handle on those five, you’ll be able to answer pretty much all the DNS questions you’re going to see on the exam. Next, we have DHCP, or the Dynamic Host Configuration Protocol. Now, initially, we had to manually give an IP address to every machine on the network, which is not a big deal in your house, where you might have three or four machines. But on the networks I work on daily, we have hundreds of thousands of machines; that is quite a bit of configuration, and it became a really big hassle for large networks. So this protocol called DHCP was invented, and it can actually eliminate configuration errors, because if I’m typing in IP addresses by hand, I can fat-finger it and type in the wrong one.
As a result, DHCP will assign addresses to your machines automatically from a scope. With DHCP, the process is automated: when a device comes online, it reaches out to the DHCP server and does a discovery. It says, “Hey, DHCP server, I need to discover an IP address.” The DHCP server will say, “Okay, does this address look okay?” and it will offer an address. Then the computer will say, “Yes, I like that address; I request to take it.” That’s the DHCP request, step three. And finally, the DHCP server will acknowledge that with a DHCP acknowledgement and say, “Okay, that’s your address; you can borrow it for X amount of time,” which is called a lease. The information you’ll get from your DHCP server comes in four parts: an IP address, a subnet mask, your default gateway so you can find the router, and a DNS server so you can do name lookups. With those four pieces of information, your computer can get online, get out of your network, and get onto the Internet. Now, how do you remember the four steps of DHCP? We have a mnemonic for that: I like to think of Dora the Explorer. It’s Discover, Offer, Request, and Acknowledge: DORA.
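Here’s a minimal Python sketch of that DORA conversation and the four settings that come back with the lease. It just models the dialogue; it doesn’t send real DHCP packets, and the addresses are example values.

```python
# Discover, Offer, Request, Acknowledge -- DORA -- plus the four settings
# a client normally receives. Purely illustrative; no real packets are sent.
def dhcp_exchange():
    print("Client -> DISCOVER : is there a DHCP server out there?")
    print("Server -> OFFER    : how about 192.168.1.50?")
    print("Client -> REQUEST  : yes, I'd like 192.168.1.50, please")
    print("Server -> ACK      : it's yours for the lease period")
    return {
        "ip_address": "192.168.1.50",
        "subnet_mask": "255.255.255.0",
        "default_gateway": "192.168.1.1",   # so you can find the router
        "dns_server": "192.168.1.1",        # so you can do name lookups
    }

lease = dhcp_exchange()
print(lease)
```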
Next, we have a proxy server. This is a specialised device that makes requests to an external network on behalf of a client; it’s a go-between. Now, why would we do that? Well, there are really two functions. The first is security, because a proxy can perform content filtering and logging. On my network, I have a proxy server in my house, and so if my kids try to go online, the request goes to the proxy server, which checks what’s allowable for them and then decides whether to let them go out or not. So, for instance, if they try to go to a pornographic website, it blocks that; but if they try to go to the Disney Channel, it will allow that. Now, the workstation clients are configured so that all of their traffic has to go through the proxy server. So, as you can see here on the diagram, if that’s my son’s computer, he’s going to make the request. It goes to the proxy server, the proxy server checks if the site is on the allowable list, and if it is, it goes out to disneychannel.com, gets the information, brings it back to the proxy server, and then gives it to my kid. Now that’s function one. The second function of a proxy server is that it can have a cache that stores a copy of the information requested by a user.
So in my case, I have two children. Let’s say that my son goes to the Disney Channel site, and then right after, my daughter tries to go to the Disney Channel site as well. Well, the proxy server has already made the request and has the content locally. So, once it has given it to my son, it can simply give it to my daughter without having to go back out onto the Internet, saving bandwidth, resources, and time.
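Here’s a minimal Python sketch of that caching behaviour (my own illustration): check the local cache first, and only go out to the Internet on a miss. A real proxy would also do the content filtering and honour cache-expiration rules, which this skips.

```python
import urllib.request

cache = {}   # in-memory cache: URL -> response body

def fetch(url):
    if url in cache:
        print(f"cache hit  : {url}")           # serve the stored copy locally
        return cache[url]
    print(f"cache miss : {url} (fetching from the Internet)")
    body = urllib.request.urlopen(url).read()  # requires network access
    cache[url] = body
    return body

# First child's request fills the cache; the second is served locally.
fetch("https://www.diontraining.com")
fetch("https://www.diontraining.com")
```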
Now, proxy servers are pretty good at that, but they’re not the best. And so another device came into our networks called a content engine. These are solely responsible for those caching functions; they’re more efficient at it than a proxy server, and we call them content engines or caching engines. Where they really become beneficial is if you have a big headquarters with big, beefy Internet pipes, but then you have a small branch office out in the middle of nowhere. Well, if you have to get a lot of data across a small pipe, like a VPN connection or a leased line, that can really bog things down and slow the business at the branch office. So you could put a content engine there, and in the middle of the night, the headquarters would update the information in the content engine. Then, during the day, whenever someone requests that information, it’s served locally in the office over your gigabit Ethernet, rather than over that slow dial-up or leased-line connection. This is very, very useful for remote branch offices. Otherwise, if you’re in a big headquarters or a big branch office that has a large pipe, you probably don’t need a content engine; you can just go over the WAN to get that information from headquarters. But again, if you’re trying to speed up local access, content engines are great for that. Next, we have a content switch. A content switch is going to distribute your incoming requests across the various servers in a server farm. These are also known as load balancers. Now, why do we need these? Well, let’s take the example of Amazon.com. Do you think Amazon.com can handle the amount of load it gets on a daily basis with just one physical server? Of course not.
They have millions of users accessing their content, so instead they have hundreds or thousands of servers, and all of those have to be able to respond to requests for Amazon.com. That’s where the content switch comes in. When you request www.amazon.com, the request goes in through their router to their content switch, which then starts handing those requests out to different servers. So if I had a big stack of work to do, I might have 20 people do the work, and I’d be acting as the content switch. When my boss comes to me and says, “Hey, here are the 20 things I need done,” I can hand out one piece of paper to each of my 20 employees; they can all work on it, hand it back to me, and then I give it back to my boss. That’s essentially what a content switch is doing: it’s handing out those requests to different servers based on workload. Again, these are referred to as load balancers or content switches.
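Here’s a minimal Python sketch of the simplest version of that idea, a round-robin load balancer. Real content switches use smarter algorithms like least-connections and health checks, and the server names here are made up.

```python
from itertools import cycle

# Hand each incoming request to the next server in the pool, in rotation.
servers = cycle(["web-01", "web-02", "web-03"])

def handle_request(request_id):
    server = next(servers)
    print(f"request {request_id} -> {server}")

for i in range(1, 7):
    handle_request(i)
```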