
Sizing and scaling VMware datastores 101

When deploying vSphere, you must know how to size and scale your datastores for optimal VM efficiency. Discover what options you have and how best to use them.

When you deploy VMware vSphere, one of the first design questions you should ask is: "How big should I make my datastores?" As with most design decisions, the answer depends on the VM and application requirements.

VMware vSphere 6.0, 6.5 and 6.7 and modern storage systems have removed some of the technical constraints that made accommodating certain workloads difficult. If you use Virtual Volumes (VVOLs), then you might have even fewer storage design tasks to worry about. Luckily, most storage vendors now offer all-flash arrays or wide striping, both of which significantly reduce the need for detailed storage performance design.

Still, when you create VMware datastores, you must understand the way scaling can affect performance and plan accordingly.

Size your VMs

First, design your VMs to accommodate your applications. Every VM needs a boot disk, which uses tens to hundreds of gigabytes, depending on the OS and applications. You can increase a virtual disk's size later, but you can only extend the last OS partition on the disk, because the system adds the new space to the end of the disk and the partition can only grow into that space.

On Windows, each virtual disk typically holds a single drive letter, and the boot disk contains the C: drive. Some VMs don't hold any extra data, such as web servers that refer to external databases for their data; these VMs usually have just the boot virtual disk. Other VMs hold more data, which you place on additional virtual disks. For a large file server, you might use a single large virtual disk to hold all the file share data. For a database server, however, you usually want multiple virtual disks, because each disk has its own queue, which translates to higher potential database performance.

VMware vSphere 6.0, 6.5 and 6.7 support virtual disks up to 62 TB, which is sufficient for almost any VM use case. Once you know the total size of all of a VM's disks, add its configured memory size -- to cover the VM swap file -- and a gigabyte for miscellaneous files to determine its total disk requirement. Consider using a planning spreadsheet that totals this requirement for every VM you expect to run.
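If it helps to see the arithmetic, here is a minimal Python sketch of that planning-spreadsheet calculation. The VM names, disk sizes and memory figures are hypothetical examples, not recommendations.

```python
# Minimal sizing sketch: total the datastore space each VM needs.
# All names and figures below are hypothetical examples.

vms = [
    # name, virtual disk sizes (GB), configured memory (GB)
    {"name": "web01", "disks": [60], "memory": 8},
    {"name": "db01", "disks": [80, 500, 500, 200], "memory": 64},
    {"name": "file01", "disks": [60, 4000], "memory": 16},
]

MISC_GB = 1  # logs, config and other miscellaneous VM files


def vm_requirement_gb(vm):
    """All virtual disks + configured memory (for the swap file) + misc files."""
    return sum(vm["disks"]) + vm["memory"] + MISC_GB


total = 0
for vm in vms:
    need = vm_requirement_gb(vm)
    total += need
    print(f"{vm['name']:>8}: {need:>6} GB")

print(f"{'total':>8}: {total:>6} GB")
```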

Size your VMware datastores

When designing storage for vSphere, figure out your VM and application requirements first. With fully allocated -- or thick-provisioned -- virtual disks, the disk consumes its full size on the datastore from the moment you create it, even if it contains no data, so the provisioned size always equals the consumed size. Calculate the required capacity for each VM, and map those VMs to VMware datastores. If the maximum size for a vSphere virtual disk (62 TB) or datastore (64 TB) creates a problem for your design, review the design, because you might be doing something wrong.
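To sanity-check a thick-provisioned layout, you can run a simple first-fit pass like the hypothetical sketch below, which keeps each VM whole on a single datastore and reserves 20% free space per datastore. The sizes reuse the per-VM totals from the earlier sketch and are examples only.

```python
# Hypothetical first-fit packing: place whole VMs onto datastores of a fixed
# size, keeping 20% of each datastore free. Figures are examples only.

DATASTORE_GB = 8192               # e.g., an 8 TB datastore
USABLE_GB = DATASTORE_GB * 0.8    # leave 20% free space headroom

vm_needs_gb = {"web01": 69, "db01": 1345, "file01": 4077}

datastores = []  # each entry: {"vms": [...], "used": GB}

for name, need in sorted(vm_needs_gb.items(), key=lambda kv: -kv[1]):
    for ds in datastores:
        if ds["used"] + need <= USABLE_GB:
            ds["vms"].append(name)
            ds["used"] += need
            break
    else:
        datastores.append({"vms": [name], "used": need})

for i, ds in enumerate(datastores, 1):
    print(f"datastore{i}: {ds['used']} GB used of {DATASTORE_GB} GB, "
          f"VMs: {', '.join(ds['vms'])}")
```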

Once you know your VMs' disk space requirements, you can design your datastore. Keeping the entire VM on one datastore means the VM only depends on that datastore, which simplifies VM management. You rarely encounter cases where a single datastore can't deliver the capacity or performance your VM requires.


So, why not create the largest datastores you can and pile all your VMs onto as few of them as possible? The short answer is queueing. Each datastore has a limited queue, which puts a ceiling on its performance. Smaller datastores mean more datastores -- and therefore more queues -- which means higher potential aggregate disk performance.
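The effect is easy to see with back-of-the-envelope arithmetic. The per-datastore queue depth below is a hypothetical figure for illustration; the real value depends on your HBA driver, array and ESXi settings.

```python
# Hypothetical queue arithmetic: the number of I/Os that can be outstanding
# at once scales with the number of datastores (and their queues).

PER_DATASTORE_QUEUE_DEPTH = 64  # hypothetical; varies by HBA, driver and array

for num_datastores in (1, 4, 16):
    ceiling = num_datastores * PER_DATASTORE_QUEUE_DEPTH
    print(f"{num_datastores:>2} datastore(s): up to {ceiling} outstanding I/Os")
```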

There is a second reason for creating smaller VMware datastores: the blast radius for an administrative error. If someone deletes a datastore with 50 VMs on it, it creates a much bigger problem than if someone deletes a datastore housing only five VMs. It takes far less time to recover a 5 TB datastore than to recover a 64 TB datastore.

However, more datastores mean more administration and more performance and capacity management overhead. There isn't one single datastore size or number of VMs per datastore that works for everyone, or even one that works for every datastore in your data center. The key is balancing performance against manageability. In addition, keep some headroom in your datastores. The default out-of-space alarms turn orange at 20% free space and red at 5% free space, so VMware datastores should routinely run with at least 20% free space.
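If you want to check that headroom programmatically, the sketch below uses the open source pyVmomi SDK to list datastores and flag any that drop below 20% free space. Treat it as a minimal example: the vCenter address and credentials are placeholders, and your environment may need SSL options or different error handling.

```python
# Minimal pyVmomi sketch: flag datastores below 20% free space.
# The vCenter address and credentials are placeholders.

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret")  # add SSL options as your environment requires
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        capacity = ds.summary.capacity   # bytes
        free = ds.summary.freeSpace      # bytes
        pct_free = 100.0 * free / capacity if capacity else 0.0
        flag = "  <-- below 20% free" if pct_free < 20 else ""
        print(f"{ds.summary.name}: {pct_free:.1f}% free{flag}")
finally:
    Disconnect(si)
```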

Thin provision your virtual disks

You can also choose to thin provision your virtual disks. Thin-provisioned virtual disks start tiny -- a few megabytes -- and grow as the VM writes data to them. The VM sees the full provisioned disk size rather than the space actually consumed on the datastore, so thin provisioning lets you present more disk space to VMs than the datastore physically holds.

Thin-provisioned disks grow over time, and you risk them growing until they fill the whole datastore, at which point VMs stop running. Thin provisioning lets you start with less datastore space than thick provisioning, but you must manage free space on the datastore carefully and grow your VMware datastores as the thin disks grow.
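A simple habit is to compare provisioned space against capacity on each thin-provisioned datastore. The sketch below is plain arithmetic on hypothetical figures; in vCenter the equivalent numbers appear as a datastore's capacity, free space and provisioned space.

```python
# Hypothetical thin-provisioning check: compare provisioned space against
# capacity to spot oversubscribed datastores. All figures are examples.

datastores = [
    # name, capacity (GB), free (GB), provisioned (GB, incl. unwritten thin space)
    {"name": "ds-gold01", "capacity": 8192, "free": 3200, "provisioned": 9500},
    {"name": "ds-silver01", "capacity": 8192, "free": 6000, "provisioned": 4100},
]

for ds in datastores:
    pct_free = 100.0 * ds["free"] / ds["capacity"]
    overcommit = ds["provisioned"] / ds["capacity"]
    warn = "  <-- oversubscribed; watch free space" if overcommit > 1 else ""
    print(f"{ds['name']}: {pct_free:.0f}% free, "
          f"provisioned {overcommit:.2f}x capacity{warn}")
```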

VVOLs makes it easy

If your storage supports VVOLs, you can simplify your storage process. VVOLs doesn't use datastores. Instead, VVOLs maps each virtual disk to an object on your array.

VVOLs uses Storage Policy-Based Management, which communicates the VM's storage availability, performance and data protection requirements to the array. The array administrator sets the available policies, and the vSphere administrator assigns the policy to each VM.

With VVOLs, you don't need to worry about sizing or VMs per datastore. VVOLs aims to hand storage management back to storage administrators.
