What are SR-IOV pros and cons with Hyper-V?
SR-IOV in Windows Server 2012 R2 supports Hyper-V Live Migration, but system requirements can hamper adoption in some cases. Should everyone put PCIe SR-IOV to use?
SR-IOV lowers latency and boosts throughput, but it won't improve every Windows Server deployment.
Single-root I/O virtualization (SR-IOV) for PCIe connections on Windows Server 2012 can lower processor utilization and network latency by up to 50% and boost network throughput by up to 30%.
SR-IOV works by allowing a VM to bypass the layers of communication that the hypervisor normally imposes between VMs and I/O peripheral devices, such as a PCIe network card, so the VM exchanges data with the device directly.
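Before planning a deployment, it's worth checking what the host itself reports. The following is a minimal PowerShell sketch, run in an elevated session on the Hyper-V host; the cmdlets are standard Hyper-V and NetAdapter module commands, though the output will vary by hardware.

# Does the host platform (BIOS, chipset, hypervisor) support SR-IOV, and if not, why not?
(Get-VMHost).IovSupport
(Get-VMHost).IovSupportReasons

# List physical network adapters that report SR-IOV capability.
Get-NetAdapterSriov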
The benefits of SR-IOV
Better network performance means more virtual machines per server or heavier I/O workloads without additional investments in network infrastructure. SR-IOV can also benefit latency-sensitive network traffic like transactional data or streaming media.
SR-IOV supports Microsoft Hyper-V Live Migration, even without SR-IOV at the remote (target) end, so it is possible to migrate a VM from an SR-IOV-enabled Hyper-V server to another server regardless of that server's SR-IOV support. Organizations can experiment with and deploy SR-IOV platforms in an organized, systematic manner without a complete upfront commitment.
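As a quick illustration, migrating an SR-IOV-attached VM uses the same cmdlet as any other Live Migration; Hyper-V fails the VM's traffic over to its synthetic (software) network path for the duration of the move. The VM and host names below are hypothetical.

# Migrate an SR-IOV-attached VM; traffic fails over to the synthetic path during the move.
Move-VM -Name "SQLVM" -DestinationHost "HostB"

# Confirm the adapter's SR-IOV weight after the move.
Get-VMNetworkAdapter -VMName "SQLVM" | Select-Object VMName, IovWeight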
SR-IOV is developed and maintained by the PCI-SIG, embraced by hardware makers such as Intel with its VT-d, and supported by leading OS vendors such as Microsoft.
Cons to consider for SR-IOV
SR-IOV faces some limitations around Windows Server system requirements. Deploying SR-IOV requires support from the PCIe card, the motherboard and BIOS, and the Hyper-V hypervisor. Unless every component is enabled for SR-IOV, the platform cannot make virtual functions (VFs) available to VMs. Some organizations may need to wait until the next server technology refresh to update server hardware and PCIe cards with SR-IOV support.
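When all of those layers do cooperate, enabling SR-IOV is a short exercise. A minimal sketch follows, using hypothetical adapter, switch and VM names; note that -EnableIov can only be set when the virtual switch is created, not changed afterward.

# Create an IOV-enabled external virtual switch on a capable adapter.
New-VMSwitch -Name "SriovSwitch" -NetAdapterName "Ethernet 2" -EnableIov $true

# Attach the VM and request a virtual function (IovWeight 1-100; 0 disables SR-IOV).
Connect-VMNetworkAdapter -VMName "SQLVM" -SwitchName "SriovSwitch"
Set-VMNetworkAdapter -VMName "SQLVM" -IovWeight 50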
SR-IOV can easily overtax a port that is shared too broadly. The theoretical maximum is 256 VFs per network port, but the workable maximum will be much lower because of the overhead needed to operate the port. SR-IOV PCIe connections don't add capacity to the existing hardware; they only provide a means of sharing the device among VMs. A 1 Gigabit Ethernet (GbE) port may practically support up to eight VFs, and a 10 GbE port up to 64 VFs. Demanding workloads can easily reduce these numbers, so test the sharing arrangement and measure workload communication performance before adding SR-IOV to production platforms.
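To gauge how close a port is to its practical VF ceiling, compare what the adapter advertises against how many VM adapters are requesting functions. A sketch, again with a hypothetical adapter name; the NumVFs property reflects the adapter's advertised count, not a guaranteed workable maximum.

# How many VFs does the adapter expose?
Get-NetAdapterSriov -Name "Ethernet 2" | Format-List Name, SriovSupport, NumVFs

# How many VM network adapters are currently requesting a VF?
Get-VM | Get-VMNetworkAdapter | Where-Object { $_.IovWeight -gt 0 } | Measure-Object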
VM portability may also become a problem with PCIe SR-IOV use. Since SR-IOV essentially allows a VM to interact directly with a PCIe device, the hypervisor must be able to move workloads between SR-IOV and non-SR-IOV platforms if it is going to migrate a VM to another server. Workloads that depend on that direct interaction may malfunction when a migration is initiated.
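One conservative way to test a workload's tolerance is to force the VM off its VF and onto the synthetic path before migrating, rather than relying on automatic failover. A sketch with hypothetical VM and host names:

# Disable the VF (fall back to the software switch), then migrate.
Set-VMNetworkAdapter -VMName "SQLVM" -IovWeight 0
Move-VM -Name "SQLVM" -DestinationHost "LegacyHost"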
Finally, consider SR-IOV's effects on the external network switch and traffic. A server can identify and manage many VMs, but the switch ports the server uses have no way to distinguish virtual function traffic. For example, traffic from one VF may be destined for another VF on the same physical port, but the external switch does not know to return that traffic to the port it came from. This can result in switching problems or troubleshooting confusion, so involve the IT network architect in any SR-IOV deployment.
With virtualization all but standard on data center servers, the focus is now shifting to optimization; technologies like SR-IOV mitigate the latency introduced by the hypervisor's abstraction layer and push system performance close to bare-metal levels.