How many VMs per host is too many?

A top-end host server can welcome hundreds of virtual machines, but that doesn't mean consolidation should outrank performance.

Can your server handle the number of virtual machines it's hosting?

Hosting virtual machines (VMs) en masse gets easier as the hardware underpinning server virtualization improves. But with memory, CPU and scheduling limitations, how many VMs are too many?

We asked three IT pros how many VMs per host they've seen and how well it worked. While you can conceivably cram more than 500 VMs onto one server host, sometimes less is more. Risk, utilization rates and memory all factor into the decision.

Virtualization isn't just about consolidating as many servers as possible -- the VMs have to actually do something. You could share one core among a VM with 1 MB of memory and two other VMs, but there's no point if VM performance suffers. More than three VMs per core causes scheduling overhead, among other issues. That doesn't condemn you to paltry consolidation numbers, however. A high-end, four-socket server using 15-core Intel Xeon E7 processors yields 60 available cores; at three VMs per core, it could ideally host 180 VMs. With enough memory and I/O for the workload, that figure is realistic, as one IT pro, formerly with GitHub, pointed out.
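To make that arithmetic concrete, here is a minimal sketch of the sizing math, assuming a simple VMs-per-core ratio and an average memory footprint per VM; the figures and the max_vms helper are illustrative assumptions, not a vendor sizing tool.

```python
# Rough VM-capacity estimate: cores times a VMs-per-core ratio, capped by memory.
# All figures are illustrative assumptions, not vendor guidance.

def max_vms(sockets: int, cores_per_socket: int, vms_per_core: int,
            host_ram_gb: int, ram_per_vm_gb: int) -> int:
    """Return the smaller of the CPU-based and RAM-based VM ceilings."""
    cpu_limit = sockets * cores_per_socket * vms_per_core   # scheduling ceiling
    ram_limit = host_ram_gb // ram_per_vm_gb                 # memory ceiling
    return min(cpu_limit, ram_limit)

# Four 15-core Xeon E7 sockets = 60 cores; three VMs per core = 180 VMs,
# assuming roughly 4 GB per VM fits within, say, 1 TB of host RAM.
print(max_vms(sockets=4, cores_per_socket=15, vms_per_core=3,
              host_ram_gb=1024, ram_per_vm_gb=4))   # -> 180
```

Whichever resource runs out first -- scheduling capacity or memory -- sets the practical ceiling.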

A lesson in utilization

Ian Kaufman, research systems administrator at UC San Diego, Jacobs School of Engineering: We have an extensive VM infrastructure on eight hosts with 256 GB of RAM [random access memory] and fast CPUs. We also have NFS [Network File System] storage on a NetApp array with 10 Gbps connectivity and 256 GB of flash cache. With VMware ESXi 5.X, we run a maximum of 24 VMs on each node, usually working with about 15 VMs per host. We see a scant 3% to 5% CPU utilization, and 7% to 11% RAM utilization.

We could comfortably put 48 VMs or more on a single server host and barely see a dent in capability, but lower utilization rates allow us to spring into new projects easily. We also distribute VMs to facilitate automatic failover in the event that a host goes down. We can patch and update hosts without bringing any VMs down.
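Kaufman's point about spare capacity and failover can be expressed as a quick check: given per-host VM counts and an assumed average VM memory footprint (the numbers below are hypothetical, not his actual inventory), can the surviving hosts absorb a failed host's VMs?

```python
# N-1 failover check: can the remaining hosts absorb one failed host's VMs?
# Host specs and the per-VM RAM figure are hypothetical placeholders.

HOST_RAM_GB = 256                                  # RAM per host
RAM_PER_VM_GB = 8                                  # assumed average VM footprint
vms_per_host = [15, 15, 16, 14, 15, 16, 15, 14]    # eight hosts, ~15 VMs each

def survives_host_failure(vms, host_ram_gb, ram_per_vm_gb):
    """True if, for every possible single-host failure, the failed host's
    VMs fit in the spare RAM of the remaining hosts."""
    for failed, failed_vms in enumerate(vms):
        spare = sum(host_ram_gb - v * ram_per_vm_gb
                    for i, v in enumerate(vms) if i != failed)
        if failed_vms * ram_per_vm_gb > spare:
            return False
    return True

print(survives_host_failure(vms_per_host, HOST_RAM_GB, RAM_PER_VM_GB))  # True
```

Running at low utilization is what makes a check like this pass with room to spare; a cluster packed to the ceiling would fail it.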

The VMs are Web servers for the most part, though we do have some interactive login machines (both Windows and Linux), as well as some MySQL databases. Nothing is especially compute-intensive -- these aren't number-crunching application VMs, for example.

During a hardware upgrade, we were able to put all 125 VMs on two nodes while we migrated to the new equipment. Even with the entire inventory split across just two nodes, we barely taxed our VM infrastructure.

Is the risk worth the reward?

Brad Maltz, office of the CTO at Lumenate, a technical consulting firm: In a virtual desktop infrastructure [VDI], I have seen about 150 to 200 VMs on one server. Most of those large VM counts lived on quad-socket rackmount servers. VDI carries different consolidation ratios than other workloads do.

I have also seen around 80 VMs on one server because that company wanted deep consolidation -- it was a management decision. Putting that many VMs on a server host worked well, but the real question is risk: Does a business want to risk putting 80 VMs on one host? What happens if the server goes down? Can you afford to lose 80 VMs at once?

Leave room for the future

Adam Fowler, IT operations manager at Piper Alderman: The most VMs I have seen on a single host is 31, virtualized with Microsoft Hyper-V on Cisco UCS blades with 256 GB of RAM and two eight-core Intel Xeon E5-2665 CPUs. Storage is a fiber-connected EMC SAN [storage area network] with SSD [solid-state drive] caching.

Since we are nowhere near the ceiling on RAM, CPU or I/O utilization, 31 is not the limit on these server hosts; the headroom gives us flexibility to do what we want later. Some servers are high usage, while others are low. We can move workloads around among our six UCS hosts when patching or rebooting.
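As a rough illustration of that headroom, the sketch below estimates how many more average-sized VMs a host could take before hitting a planning ceiling; the utilization percentages and the 80% target are assumptions for the example, not measurements from Fowler's environment.

```python
# Estimate remaining VM headroom from current utilization.
# All figures are illustrative assumptions, not real measurements.

HOST_RAM_GB = 256
HOST_CORES = 16            # two eight-core CPUs
CURRENT_VMS = 31
RAM_UTIL = 0.45            # assumed current RAM utilization
CPU_UTIL = 0.30            # assumed current CPU utilization
TARGET_CEILING = 0.80      # don't plan beyond 80% of either resource

ram_per_vm = (HOST_RAM_GB * RAM_UTIL) / CURRENT_VMS    # average GB per VM
cpu_per_vm = (HOST_CORES * CPU_UTIL) / CURRENT_VMS     # average cores per VM

extra_by_ram = (HOST_RAM_GB * (TARGET_CEILING - RAM_UTIL)) / ram_per_vm
extra_by_cpu = (HOST_CORES * (TARGET_CEILING - CPU_UTIL)) / cpu_per_vm

print(f"Additional average-sized VMs before the ceiling: "
      f"{int(min(extra_by_ram, extra_by_cpu))}")
```

Whichever resource hits the ceiling first limits further consolidation, which is why keeping RAM, CPU and I/O all well below their limits preserves that flexibility.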
