Meet Hyper-V cluster requirements to maintain high availability
Hyper-V high availability is important for failover, so check that Hyper-V clusters meet the proper requirements, which range from hardware management to network configuration.
Hyper-V clusters can provide high availability for VMs. As such, IT administrators should use clusters for almost any Hyper-V production deployment. Smaller shops might instead consider VM replication, which provides a slightly lower degree of availability for a fraction of the cost.
Unlike competing hypervisors, Hyper-V doesn't include a native clustering feature. Instead, clustering is a function of the Windows Failover Clustering feature. In other words, admins must create clusters at the OS level, not the hypervisor level. Even so, Windows failover clusters are designed to make Hyper-V VMs highly available.
Hyper-V cluster requirements aren't as rigid as they once were, but it's still important for admins to pay close attention to hardware needs or else the cluster might not function properly.
Set up nodes to meet Hyper-V cluster requirements
Technically speaking, Hyper-V cluster requirements don't call for Hyper-V nodes to be identical to one another. Even so, Microsoft strongly recommends admins use matching servers for cluster nodes. After all, it would be tough to manage a cluster if the nodes included different amounts of memory or if a cluster had some nodes that performed better than others.
At the very least, Hyper-V cluster requirements call for nodes with matching CPU architectures. For example, if a VM runs on a node that contains an Intel CPU, admins won't be able to fail the VM over to a node with an AMD CPU.
Not only should the CPUs in the nodes be from the same vendor, they should also match one another in terms of CPU generation and core count. Of course, the nodes must also support hardware-assisted virtualization -- one of the fundamental Hyper-V cluster requirements -- and the nodes should pass basic cluster hardware validation tests.
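As a rough sketch, admins can run Microsoft's built-in validation tests from PowerShell before creating the cluster. The node names, cluster name and IP address below are placeholders, not values from this article.

```powershell
# Validate prospective cluster nodes (node names are hypothetical)
# This runs the standard hardware, network and storage validation tests
Test-Cluster -Node Node1, Node2

# If validation passes, create the failover cluster
# (the cluster name and static IP address are examples only)
New-Cluster -Name HVCluster1 -Node Node1, Node2 -StaticAddress 192.168.1.50
```

Reviewing the validation report before running New-Cluster helps catch mismatched hardware or firmware early, while it's still cheap to fix.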
Network adapters installed in the cluster nodes are another consideration for meeting Hyper-V cluster requirements. Admins should avoid having a single point of failure. As such, it's relatively common to provision cluster nodes with teamed network adapters.
Network interface card (NIC) teaming causes the OS to treat multiple physical NICs as a single logical NIC. Depending on the team configuration, teaming protects against the failure of an individual NIC and can also aggregate bandwidth. Bandwidth aggregation improves performance by distributing traffic streams across multiple physical adapters rather than relying on a single physical NIC to handle all of the network traffic.
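As an illustration, a NIC team can be created with Windows Server's built-in NetLbfo cmdlets; the team name and adapter names here are assumptions for the example.

```powershell
# Combine two physical adapters into one logical NIC
# (team and adapter names are placeholders)
# Switch-independent mode requires no special switch configuration;
# the Dynamic algorithm spreads outbound traffic across team members
New-NetLbfoTeam -Name "MgmtTeam" -TeamMembers "NIC1", "NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
```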
Most failover clusters rely on a Cluster Shared Volume (CSV). A CSV acts as shared storage for the cluster. In a Hyper-V deployment, the virtual hard disks and other VM-related files reside on the CSV.
A VM can run on any node because all of the cluster nodes have access to the CSV. If a failover occurs, there are no virtual hard disk files to copy because every node has access to the same CSV. Instead, another node in the cluster can take ownership of the VM object and the VM can continue to work from the same virtual hard disk files it used before.
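To sketch the idea, once shared storage is presented to every node as a clustered disk, it can be converted into a CSV; the disk name below is a placeholder.

```powershell
# List the cluster's physical disk resources, then convert one to a CSV
# ("Cluster Disk 1" is an example name)
Get-ClusterResource | Where-Object ResourceType -eq "Physical Disk"
Add-ClusterSharedVolume -Name "Cluster Disk 1"

# The volume then appears on every node under C:\ClusterStorage\,
# which is where VM configuration files and virtual hard disks belong
```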
Attach cluster nodes with the right connectivity option
Although there are other options, most organizations attach their cluster nodes to a CSV via iSCSI or Fibre Channel connectivity.
iSCSI sends native SCSI storage commands across an Ethernet network. Compared to Fibre Channel, iSCSI is relatively easy and inexpensive to set up. If admins decide to use iSCSI, then each cluster node will need a NIC -- or, more likely, a NIC team -- dedicated solely to iSCSI traffic. Using a common network segment to handle both iSCSI and general network traffic is risky from a security standpoint, and it can also degrade performance. An iSCSI network needs to be fast enough to meet the storage I/O demands of all the VMs.
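On each node, connecting to an iSCSI target follows roughly this pattern; the portal address is hypothetical and would be the storage array's address on the dedicated iSCSI segment.

```powershell
# Start the iSCSI initiator service and make it start automatically
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI

# Register the storage array's portal, then connect to its target
# (10.0.0.50 is a placeholder for the array's iSCSI address)
New-IscsiTargetPortal -TargetPortalAddress 10.0.0.50
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true
```

Marking the connection persistent ensures the node reconnects to the target automatically after a reboot, which matters for a cluster node that must rejoin the CSV without manual intervention.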
Fibre Channel generally provides better performance than iSCSI, but smaller shops tend to avoid it because of its cost and complexity. If admins choose to use Fibre Channel -- or even Serial-Attached SCSI -- then they must outfit each cluster node with an identical host bus adapter (HBA) and ensure that each HBA runs an identical firmware version. By taking these Hyper-V cluster requirements into account, admins can ensure their clusters are properly set up and prepared for failover.