Examine the major vSphere 6.7 updates you'll see in U3
VMware's most recent update to vSphere -- vSphere 6.7 U3 -- adds support for new hardware such as AMD's Zen 2-based EPYC processors and includes features such as virtual hardware version 15 (vmx-15).
VMware vSphere 6.7 is the most recent evolution of the popular virtualization platform. VMware continues to enhance vSphere's security, management and ease of use by introducing features such as ESXi Quick Boot and Enhanced vMotion Compatibility.
VMware vSphere is VMware's suite of server virtualization products. The suite includes components such as ESXi -- the hypervisor -- vCenter Server, vSphere Client, vMotion, Distributed Resource Scheduler and Host Profiles. Version 6.5 focused heavily on security, load balancing, fault tolerance and High Availability, and the vSphere 6.7 updates build on those areas.
VMware vSphere 6.7 features a modern HTML5 interface that includes a dark mode and feature parity with the older Flash-based web client. VMware also shifted from a Windows- to a Linux-based vCenter Server, which enables VMware to stay in control of vCenter's core OS and the applications that run on it. VMware's vSphere 6.7 updates focus primarily on the security and reliability of the platform.
AMD EPYC Generation 2 support
The vSphere 6.7 U3 release supports the second generation of Advanced Micro Devices (AMD) EPYC processors, which is based on the Zen 2 microarchitecture.
Zen 2 enables you to build systems with competitive pricing and performance. For the first time in its history, AMD uses a 7 nm manufacturing process for its server CPUs. Where first-generation EPYC processors combined four eight-core Zeppelin dies in a multi-chip module, second-generation EPYC moves to a chiplet design that pairs up to eight 7 nm compute dies with a separate I/O die, for as many as 64 cores per socket.
Virtual hardware version 15
VMware first introduced virtual hardware version 15 (vmx-15) in vSphere 6.7 U2.
This change makes your VMs more powerful: you can now configure VMs with up to 256 virtual CPUs, double the 128-vCPU limit of earlier vSphere versions.
The vmx-15 hardware version works only on vSphere 6.7 U2 or later, so you must upgrade all of your hosts to at least U2 to take advantage of what vmx-15 offers. If you don't upgrade, you can't power on VMs configured with vmx-15.
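The upgrade rule above can be sketched as a simple compatibility check. This is an illustrative model only -- the release names, the mapping and both function names are made up for this sketch, not drawn from any VMware API:

```python
# Hypothetical sketch: release names and the vmx-to-release mapping below
# are illustrative, not a VMware API.
# Minimum host release required to power on a VM at a given virtual
# hardware version (subset relevant to this article).
MIN_HOST_FOR_VMX = {
    "vmx-14": "6.7",     # introduced in vSphere 6.7 GA
    "vmx-15": "6.7 U2",  # introduced in vSphere 6.7 U2
}

# Host releases ordered oldest to newest, so "at least U2" is easy to express.
RELEASE_ORDER = ["6.5", "6.7", "6.7 U1", "6.7 U2", "6.7 U3"]

def can_power_on(vm_hw_version, host_release):
    """Return True if a host at `host_release` can power on the VM."""
    required = MIN_HOST_FOR_VMX[vm_hw_version]
    return RELEASE_ORDER.index(host_release) >= RELEASE_ORDER.index(required)

def cluster_ready(vm_hw_version, host_releases):
    """Every host must meet the minimum, or vMotion/HA can strand the VM."""
    return all(can_power_on(vm_hw_version, h) for h in host_releases)
```

The `cluster_ready` check mirrors the advice in the text: a single host left on U1 is enough to make a vmx-15 VM unsafe to run in that cluster.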
ESXi Quick Boot
VMware vSphere 6.7 introduces the Quick Boot feature, which enables an ESXi host to reboot in less than two minutes. Quick Boot restarts the hypervisor without re-initializing the physical server's BIOS, which speeds up cluster-wide maintenance operations such as hardware upgrades and patching.
However, only certain hardware systems support Quick Boot, and Quick Boot doesn't work with systems that already have ESXi Secure Boot enabled.
You can enable Quick Boot via vSphere Update Manager.
Multiple NVIDIA GRID vGPUs per VM
With vSphere 6.7 U3, you can configure multiple NVIDIA GRID virtual GPUs (vGPUs) per VM to enable more graphics- and compute-intensive workloads to run on vSphere at the same time. You can attach up to four vGPUs to a single VM.
You can also suspend and resume workloads running on GPUs instead of powering off those VMs, which lets you perform maintenance-window tasks without migrating the VMs first.
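The per-VM limit above is the kind of constraint you might validate before building a VM configuration. A minimal sketch, assuming the four-vGPU ceiling stated in the text (the function name is hypothetical, not a VMware API):

```python
# Per-VM vGPU ceiling in vSphere 6.7 U3, as described above.
MAX_VGPUS_PER_VM = 4

def validate_vgpu_count(requested):
    """Reject vGPU counts outside the supported 1..4 range before a
    VM configuration is built (illustrative helper, not a VMware API)."""
    if not 1 <= requested <= MAX_VGPUS_PER_VM:
        raise ValueError(
            f"a VM supports 1 to {MAX_VGPUS_PER_VM} vGPUs, got {requested}"
        )
    return requested
```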
Persistent memory devices
VMware vSphere 6.7 also supports storage devices that use persistent memory, known as non-volatile dual in-line memory module (NVDIMM) devices.
This new storage technology has lower latency and higher speed than the storage devices that came before it. Persistent memory (PMem) appears in vSphere as vPMemDisk, which the system treats much like a datastore and which any guest OS can use as a virtual disk. Alternatively, you can configure PMem as a virtual NVDIMM, which vSphere presents directly to guest OSes that support NVDIMM devices.
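The choice between the two presentation modes above hinges on whether the guest OS understands NVDIMM devices. A minimal sketch of that decision -- the mode strings mirror the article's terms, and the function name is invented for illustration, not a VMware API:

```python
# Illustrative sketch of the two PMem presentation modes described above;
# not a VMware API.
def pmem_mode(guest_supports_nvdimm):
    """Choose how vSphere exposes persistent memory to a VM:
    - virtual NVDIMM: presented directly to NVDIMM-aware guest OSes
    - vPMemDisk: a virtual disk backed by PMem, usable by any guest OS
    """
    if guest_supports_nvdimm:
        return "virtual NVDIMM"
    return "vPMemDisk"
```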
VCSA improvements
VMware introduced the vCenter Server Appliance (VCSA) as an alternative to vCenter Server on Windows several major vSphere releases ago. However, VMware struggled to reach feature parity between those two platforms at first. With recent updates to VCSA, vCenter on Windows now lags behind it.
VMware plans to release the next version of vSphere without support for the Windows platform, so vSphere 6.7 U3 is the last version on which you can install vCenter Server on Windows.
Existing customers can migrate to VCSA with the built-in migration tool.
vCenter architectural enhancements
VMware simplified the architecture so that all vCenter Server services run on a single instance, and deployments with an external Platform Services Controller (PSC) are now deprecated. The PSC runs several services, such as certificate management, authentication and licensing.
VMware added support for Enhanced Linked Mode (ELM), which enables you to link different vCenter servers and manage them via a single console. ELM gives you the ability to view and search across all linked vCenter Server systems and also replicate roles, permissions, licenses, policies and tags. You can now manage the whole system from any location.
In the past, vSphere required you to deploy VCSA with vCenter Server and separate PSCs. Now, you can use the new VCSA convergence functionality in the vSphere Client to move from an external PSC deployment to an embedded PSC.
vCenter Server PNID change
VMware introduced several enhancements and new features to VCSA in vSphere 6.7. One such enhancement is the ability to change network names and other networking details through the vCenter Server Management UI. Previously, when you first deployed a new VCSA and assigned it a network name -- or fully qualified domain name (FQDN) -- vSphere required you to keep that name. The ability to change the FQDN helps support mergers and acquisitions where server naming conventions might change.
You can also now change the Primary Network Identifier (PNID) after installation, which enables you to support different migration scenarios.
Enhanced vMotion Compatibility for VMs
VMware also improved Enhanced vMotion Compatibility (EVC) in vSphere 6.7. You can now configure EVC on a per-VM basis rather than only at the cluster level for a set of ESXi hosts. This means you can let individual VMs take advantage of specific CPU features, or restrict a VM from using features that only some CPUs in your cluster provide.
This applies to VMs that require the latest CPU extensions in Intel or AMD processors.
Traditional EVC -- which works at the cluster level and enables you to mix different hardware generations within a single cluster -- still works as it previously did.
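The per-VM EVC idea above boils down to a baseline comparison: a VM's EVC baseline must not expose CPU features newer than what the target host can provide. A minimal sketch -- the baseline names below are illustrative Intel generation labels, not an exhaustive or official list, and the function name is invented for this example:

```python
# Hypothetical sketch: EVC baselines ordered oldest (most restrictive) to
# newest. The names are illustrative Intel generation labels, not an
# official VMware list.
EVC_BASELINES = [
    "merom", "penryn", "nehalem", "westmere",
    "sandybridge", "ivybridge", "haswell", "broadwell", "skylake",
]

def vm_fits_host(vm_evc_mode, host_max_mode):
    """A VM's per-VM EVC baseline must be at or below the newest baseline
    the target host supports; otherwise the VM expects CPU features the
    host cannot provide and migration or power-on must be refused."""
    return EVC_BASELINES.index(vm_evc_mode) <= EVC_BASELINES.index(host_max_mode)
```

Because the baseline travels with the VM rather than the cluster, this check can be evaluated against any candidate host, which is what makes per-VM EVC useful for migrations across clusters with mixed hardware.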