VMware VCA 1V0-701 – VCA-DBT – vSAN

  1. vSAN Use Cases

And the first use case is business-critical applications. With vSAN, we can choose between either a hybrid or an all-flash configuration. In the hybrid configuration, we’re still using traditional magnetic hard disks. In an all-flash configuration, we’ve completely eliminated hard disks, and as a result we can deliver extremely fast performance: up to 150,000 IOPS per ESXi host, with latencies under a millisecond in some cases. And according to the VMware website, more than 60% of VMware customers use vSAN. So it’s a great way to run business-critical applications that require extremely high performance with extremely low latency. But probably the best-known and most common use case for vSAN is desktop virtualization. I hear this from my students a lot: many of them use vSAN for desktop virtualization, and the reason is that it delivers the high storage speed that VDI requires.

When you’re running desktop virtualization, the speed of storage is absolutely critical. The people using virtual desktops need to feel as if they’re using a regular computer. They can’t tolerate any kind of slowness when it comes to storage, because they’re used to traditional physical disks directly attached to their machines. So it’s critical for desktop virtualization that storage is fast. Now let’s take a moment to look at an interesting document that I want to share: the vSAN 6.2 Virtual Desktop Infrastructure workload document. You can see it was last updated in October of 2018. What it essentially does is take you through a sample workload and how it performs on a hybrid vSAN cluster versus an all-flash vSAN cluster.

The reason I recommend this document is that, especially if you’re in charge of deciding which technologies are ideal for your use case, it gives you a really good idea of how a hybrid vSAN cluster performs in comparison to an all-flash vSAN configuration. It helps you understand latency in milliseconds, CPU utilization of your ESXi hosts, IOPS, and average latency in a hybrid configuration versus an all-flash configuration.

And you can see here that the latency for the all-flash configuration is below one millisecond in many cases. So this is a nice document to help you understand what your vSAN options are and how they fit whatever your use case happens to be; I highly recommend taking a look at it if you’re planning a desktop virtualization project leveraging vSAN. Another great use case for vSAN is remote and branch offices. Let’s say you have a small office with only two or three ESXi hosts. You can leverage vSAN to create shared storage at those small locations, and you can manage those vSAN clusters using your vSphere Web Client or the vSphere HTML5 client.

Now you can enable features like vMotion, High Availability, and DRS even at those smaller locations that don’t warrant the expense of a dedicated physical storage array. vSAN is also a great solution for your disaster recovery site. Let’s say you’re establishing a dedicated disaster recovery site, but you don’t want to build an entire storage array there. Well, now you can use vSAN at that DR site to provide shared storage and a vSphere datastore that can be used to recover in the event of an emergency, and you can replicate your data between dissimilar types of storage. So if you have that expensive Fibre Channel storage array at your primary site, that’s fine: you can use vSphere Replication and vSphere Data Protection to back up and replicate all of your data from your primary location to the DR site that’s running vSAN. It’s also compatible with Site Recovery Manager, so you can completely orchestrate your disaster recovery response. Now let’s take a quick moment to look at vSAN licensing.

We’re going to go to the vSAN product page to view this. So here we are at the VMware vSAN product page. I’m going to simply click Compare, which takes us to the licensing editions, and then view the vSAN Licensing and Packaging white paper. These product pages are always a great landing spot if you’re looking for just about anything related to a vSphere product or feature. And here we see the vSAN 6.7 Licensing Guide.

There are different licensing editions: one specifically for virtual desktops, upgrades, and remote office and branch office licensing editions. We can even license stretched vSAN clusters. This guide walks you through the different licensing editions available, what features they actually include, and how they’re licensed. vSAN Standard, Advanced, and Enterprise editions are licensed per CPU socket. You can license it for VDI as well.

The VDI edition is exclusively for virtual desktops using Horizon View. You can also purchase upgrades, and you can purchase remote office or branch office licenses as well. If you’re looking for actual costs, you should probably talk to your VMware sales rep. All the way at the bottom of this document there are a few example scenarios that can help you say, “Here’s my scenario; this is the type of licensing I’m going to require.” That can really help you zero in on the exact licensing edition that makes the most sense for you.
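
Since the per-socket model is just arithmetic, here’s a quick illustration in Python using a made-up three-host cluster (the host names and socket counts are hypothetical, not from the licensing guide):

```python
# Hypothetical three-host cluster; vSAN Standard, Advanced, and Enterprise
# editions are licensed per CPU socket, so the license count is simply the
# total number of sockets across the hosts in the cluster.
hosts = {"esxi-01": 2, "esxi-02": 2, "esxi-03": 4}  # host -> CPU sockets

licenses_needed = sum(hosts.values())
print(f"Per-socket vSAN licenses required: {licenses_needed}")  # 8
```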

  2. vSAN Basic Architecture

And that’s really the purpose of vSAN. We’ve got all of these features, like HA, DRS, and vMotion, that require shared storage; and when I say shared storage, I mean a storage solution that is available to multiple ESXi hosts. That’s what vSAN provides. We configure this feature on an ESXi host cluster, much like we did with High Availability and DRS. Each of the hosts in that cluster can contribute physical storage capacity to the vSAN datastore, so the end result is one big datastore made up of all the local storage contributed by those ESXi hosts, and that one big vSAN datastore is available to all of the hosts in the cluster. Beyond that, we’re also going to leverage SSDs.

We’re either going to use 100% solid-state drives, or we’re going to use something called a hybrid configuration, where we use SSDs to provide a write buffer and a read cache; we’ll take a look at that towards the end of this lesson. So, first off, if we want to enable vSAN, we have to create a host cluster: we have to group together ESXi hosts. Once you create a host cluster, you can enable High Availability, which means that if an ESXi host fails, its VMs reboot on other hosts; you can enable DRS, which migrates virtual machines from host to host to improve load balancing; and you can also enable vSAN.
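
As an aside, enabling vSAN on an existing cluster can also be scripted. Here’s a minimal sketch using the pyVmomi Python SDK; the vCenter address, credentials, and the cluster name "Lab-Cluster" are placeholders, and auto-claiming disks is just one possible setting, not the course’s own procedure (the course uses the vSphere Web Client):

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical vCenter details; replace with your own.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

# Find the cluster object by name.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "Lab-Cluster")
view.Destroy()

# Enable vSAN on the cluster, letting hosts auto-claim their local disks.
spec = vim.cluster.ConfigSpecEx(
    vsanConfig=vim.vsan.cluster.ConfigInfo(
        enabled=True,
        defaultConfig=vim.vsan.cluster.ConfigInfo.HostDefaultInfo(
            autoClaimStorage=True)))
task = cluster.ReconfigureComputeResource_Task(spec, True)  # modify=True
print("vSAN enable task started:", task.info.key)

Disconnect(si)
```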

When you enable vSAN, here’s basically how it works. All of your ESXi hosts are going to have some local physical storage. In the diagram, ESXi 1 has twelve hard disk drives and two solid-state drives, while ESXi 2 and ESXi 3 each have six hard disks and one SSD. I can configure disk groups on those ESXi hosts to identify which storage devices should be contributed to my vSAN datastore, and the end result is that all of the physical storage capacity I’ve identified is combined to form the big vSAN datastore you see at the bottom of the diagram. Now all of my ESXi hosts have access to this shared datastore, and they’re able to do things like vMotion.
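
To make that aggregation concrete, here’s a toy Python model of the diagram (the device sizes are invented): each disk group pairs one cache-tier SSD with some capacity-tier disks, and only the capacity tier contributes space to the datastore.

```python
# Toy model of the diagram: ESXi 1 has two disk groups (12 HDDs + 2 SSDs in
# total); ESXi 2 and ESXi 3 each have one disk group (6 HDDs + 1 SSD).
disk_groups = {
    "esxi-01": [{"cache_ssd_gb": 400, "capacity_hdd_gb": [1200] * 6},
                {"cache_ssd_gb": 400, "capacity_hdd_gb": [1200] * 6}],
    "esxi-02": [{"cache_ssd_gb": 400, "capacity_hdd_gb": [1200] * 6}],
    "esxi-03": [{"cache_ssd_gb": 400, "capacity_hdd_gb": [1200] * 6}],
}

# Only capacity-tier devices add usable space; the cache-tier SSD does not.
datastore_gb = sum(sum(dg["capacity_hdd_gb"])
                   for groups in disk_groups.values()
                   for dg in groups)
print(f"vSAN datastore raw capacity: {datastore_gb} GB")  # 28800 GB
```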

They’re also able to do things like High Availability and DRS, because they’re all accessing a shared physical storage resource, even though I haven’t purchased a storage array. The big underlying component that makes this possible is the vSAN network. What we’re going to do is configure VMkernel ports, kind of like we talked about with vMotion and DRS. Anytime you have a feature like this, where the ESXi hosts have to communicate with each other, it’s usually done using a VMkernel port.

So I’ll create a VMkernel port, I’ll mark it for vSAN traffic, and I’ll build a physical network between these hosts that can handle the storage workload; that network is a critical piece of this. Here you can see I’ve set up two 10 Gb switches. Think of these physical switches as my storage network: they’re going to be carrying a lot of traffic, and they need to be really fast. That’s why I use 10 Gb switches to create the underlying network.
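
If you script your hosts, the vSAN tagging step can also be done through pyVmomi’s virtual NIC manager. A minimal sketch, assuming a vim.HostSystem object already retrieved as in the earlier example and a hypothetical existing adapter vmk1:

```python
from pyVmomi import vim

# 'host' is a vim.HostSystem fetched as in the earlier sketch;
# "vmk1" is a hypothetical VMkernel adapter that already exists.
nic_mgr = host.configManager.virtualNicManager
nic_mgr.SelectVnicForNicType("vsan", "vmk1")  # tag vmk1 for vSAN traffic

# Check which adapters now carry vSAN traffic.
cfg = nic_mgr.QueryNetConfig("vsan")
tagged = [nic.device for nic in cfg.candidateVnic if nic.key in cfg.selectedVnic]
print("vSAN-tagged VMkernel adapters:", tagged)
```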

Why does the network matter so much? Let’s say a virtual machine is running on ESXi 1, but the VM has one of its virtual disks over on ESXi 2. The virtual machine is going to be reading and writing to and from that virtual disk on another physical ESXi host, and those storage commands have to be fast; this is basically the VM interacting with what it sees as its C: drive. So we’ve got to have the appropriate underlying physical network to support the speed we need, which is why 10 Gb switches are recommended with vSAN. Here we can see a little more detail about how these VM objects are stored. For example, on the far left we see VM 1, and VM 1 has a virtual disk.

You can see that a copy of that virtual disk exists on both ESXi 2 and ESXi 3. This is very similar to RAID 1: the data is mirrored to multiple physical hosts, so that if one of those physical hosts were to fail, we wouldn’t experience data loss. Even if ESXi 2 is permanently destroyed and we lose its data, there’s a mirror copy of that data residing on another host, so my virtual machine can come back up right away using that mirror copy, and we’re not running the risk of losing valuable data if a single physical ESXi host goes down.
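
To put rough numbers on that mirroring (my own example figures, not from the course): RAID 1 keeps one extra full copy per failure tolerated, so the raw capacity consumed scales linearly with the number of failures to tolerate.

```python
def raid1_raw_gb(usable_gb: int, ftt: int = 1) -> int:
    """RAID 1 mirroring keeps ftt + 1 full copies of each object."""
    return usable_gb * (ftt + 1)

# A 500 GB virtual disk mirrored to tolerate one host failure:
print(raid1_raw_gb(500))         # 1000 GB of raw vSAN capacity consumed
print(raid1_raw_gb(500, ftt=2))  # 1500 GB if we tolerate two failures
```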

Finally, I want to take a moment to talk about the read cache and write buffer utilized with vSAN; this is one of the things that makes it so fast and helps support use cases like virtual desktop. Here we see a virtual machine on the far left, and the virtual machine has virtual disks on ESXi 2 and ESXi 3. Let’s say the virtual machine needs to read some data from its virtual disk. That read flows over the vSAN network, and hopefully it can be satisfied by the SSD device you see here, which is acting as a read cache storing the most frequently used data. Most of the time, when a virtual machine issues a read operation, it can be satisfied really quickly from that SSD. If the data is not present in the SSD, that’s what we call a cache miss, and that read operation happens much more slowly because the data has to come from a hard disk instead.
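
Here’s a toy Python model of that read path, just to illustrate hits versus misses. The LRU eviction and the block granularity are my simplifications, not vSAN’s actual caching algorithm:

```python
from collections import OrderedDict

class ReadCache:
    """Tiny LRU model of a cache-tier read cache (illustration only)."""
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()
        self.hits = 0
        self.misses = 0

    def read(self, block):
        if block in self.blocks:
            self.hits += 1                       # fast path: served from SSD
            self.blocks.move_to_end(block)
        else:
            self.misses += 1                     # cache miss: fetched from HDD
            self.blocks[block] = True
            if len(self.blocks) > self.capacity:
                self.blocks.popitem(last=False)  # evict least recently used

cache = ReadCache(capacity_blocks=100)
for block in [1, 2, 3, 4] * 250:  # a small, consistent working set
    cache.read(block)
print(f"hit ratio: {cache.hits / (cache.hits + cache.misses):.1%}")  # 99.6%
```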

So this is what we call a vSAN hybrid configuration, where we’ve got a lot of physical capacity. That’s the beauty of these hard disks: they’re relatively inexpensive and they provide a lot of physical capacity, and we’re placing SSDs in front of them as a read cache and a write buffer to help the solution perform more like all-SSD storage. The SSD also acts as a write buffer: anytime the virtual machine needs to write data, it writes to the SSD, and the SSD subsequently writes that data to the capacity devices. So those are the basic concepts of vSAN. It’s primarily used to achieve shared storage without purchasing a physical storage array, and it’s 100% managed through the vSphere Web Client, so now we’re managing our storage solution, our virtualization solution, and all of our VMs in one place.

  3. vSAN Features

Here we see a diagram showing how hybrid vSAN works. We’ve got a virtual machine running on ESXi 1, and we’ve got a copy of the virtual machine’s virtual disk stored on both ESXi 2 and ESXi 3. That data is mirrored so that, in case there’s some sort of failure, the virtual machine won’t lose data. In this hybrid configuration, we’ve got hard disks and we’ve got SSDs. The hard disks are what we call our capacity tier: these storage devices are used for the long-term storage of data. The SSDs are our cache tier: they’re used just to speed things up, and none of the data on those SSD devices is stored long term. So let’s look at how this works. Say VM 1 needs to read some data from its virtual disk. VM 1 generates a read, the read flows over the vSAN network, and hopefully it can be satisfied by the SSD. If the read cannot be satisfied by the SSD, that’s what we call a cache miss, and it goes much more slowly. That’s how hybrid vSAN works: it’s a combination of traditional magnetic hard disks and SSDs, with the SSDs acting as a cache.

Now, writes are a little different; there’s no such thing as a cache miss with a write. For every write operation my virtual machine performs, the data is written to the SSD write buffer, and then that data is eventually written to the capacity devices in the background. To the virtual machine, writes always feel like SSD speed, because they always land on an SSD write buffer first. So the hybrid vSAN configuration almost gives us the performance of an all-flash storage array.
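
A tiny sketch of that write path, purely illustrative: the acknowledgment happens at SSD speed, and destaging to the capacity tier happens later in the background.

```python
from collections import deque

# Toy write path: every write is acknowledged once it lands in the SSD
# buffer; a background destager later flushes it to the HDD capacity tier.
write_buffer = deque()
capacity_tier = []

def vm_write(block):
    write_buffer.append(block)  # lands on SSD: acknowledged at flash speed
    return "ack"

def destage():
    while write_buffer:         # runs asynchronously in real vSAN
        capacity_tier.append(write_buffer.popleft())

for b in ("blk0", "blk1", "blk2"):
    vm_write(b)
destage()
print(capacity_tier)            # ['blk0', 'blk1', 'blk2']
```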

There are compelling reasons to choose vSAN hybrid mode, and probably the most compelling is cost: traditional hard disks still cost significantly less than SSDs, and they provide more capacity. So if you need a lot of storage capacity, using hard disks as the capacity devices will give you that. Hybrid mode is also ideal for workloads with a consistent data set: if your virtual machines consistently work with the same data, there’s a very good chance that data will be sitting in the read cache, so it will perform really well. And hybrid mode works great with desktop virtualization, because desktop virtualization is one of those use cases with a fairly consistent data set.

That being said, hybrid doesn’t perform as well as an all-flash vSAN configuration. With all-flash, we eliminate those traditional hard disks, and our capacity devices are SSDs instead. So what we now have is one really good SSD device in each disk group acting as the write buffer; there’s no such thing as a read cache anymore. I have one SSD that acts as a write buffer, and then in the background I’ve got anywhere from one to seven SSDs that act as capacity devices.

Now all of my vSAN storage is SSD, and I’m going to get the resulting increase in speed. Because of that, I’ll probably have less space, but VMware introduced some space-saving features specifically for all-flash, like deduplication and RAID 5 and RAID 6 erasure coding, so there are ways to reduce that trade-off. VMware is definitely trying to nudge people towards the all-flash configuration because it’s substantially faster, and by introducing space-saving mechanisms, the price point for all-flash is getting closer and closer to that of hybrid. Eventually, all-flash is probably the direction most deployments will go. So why would we choose an all-flash vSAN configuration? Well, it gives us better performance for all use cases; it’s always going to perform better than a hybrid configuration. And the cost of flash is steadily declining.

SSDs are becoming cheaper and cheaper, and we have support for RAID 5 and RAID 6, which saves a significant amount of space versus using RAID 1 with a hybrid vSAN architecture. Deduplication and compression are supported as well. So there are ways to save space with an all-flash configuration that can bring the price per gigabyte down.
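
To see why erasure coding saves space, here are the standard vSAN capacity multipliers expressed as a quick calculation (the 10 TB figure is just an example of mine):

```python
# Raw capacity consumed per unit of usable data (standard vSAN multipliers):
overhead = {
    "RAID 1, FTT=1 (hybrid or all-flash)": 2.00,  # two full mirror copies
    "RAID 5, FTT=1 (all-flash only)":      1.33,  # 3+1 erasure coding
    "RAID 6, FTT=2 (all-flash only)":      1.50,  # 4+2 erasure coding
}

usable_tb = 10
for scheme, factor in overhead.items():
    print(f"{scheme}: {usable_tb * factor:.1f} TB raw for {usable_tb} TB usable")
```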

Finally, another feature supported by vSAN is encryption of data at rest. The data is encrypted after other operations, like deduplication, are carried out, so those space-saving features are still effective. Now, if somebody yanks a disk out of one of your ESXi hosts, all of that sensitive data is protected because it’s encrypted. So if a device is moved, if a device is improperly retired, or if a host is disposed of without being wiped correctly, we have some protection. And only users with administrator permissions can perform encryption and decryption operations.