Single-socket servers offer cost, utilization benefits to admins
Different server types bring advantages in processing speed, memory availability and overall cost. See what single- and dual-socket servers can add to your infrastructure.
For certain data centers, single-socket servers offer a more economical, right-sized alternative to the established dual-socket server. Increasingly, organizations of all sizes can spend less on single-socket servers with 24 or more cores and achieve the same performance as dual-socket systems with eight to 12 cores per socket.
In many instances, companies find that a single socket provides enough cores to meet the demands of technologies such as IoT and AI.
In addition to improved configuration options and better power usage per rack, businesses gain additional cost savings on socket-based software licensing. Research from Gartner indicated significant cost reductions for core-licensed software, such as VMware vSAN, as well as for Windows applications.
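As a back-of-the-envelope illustration of the per-socket licensing effect, the Python sketch below compares one server licensed across two sockets with the same core count delivered on one socket. The price and core counts are hypothetical placeholders, not vendor pricing, and real terms -- per-core minimums, edition tiers -- vary by product.

```python
# Hypothetical per-socket licensing comparison. The license price is a
# placeholder, not a vendor list price; real terms vary by product.

PRICE_PER_SOCKET_LICENSE = 4000  # assumed cost of one per-socket license (USD)

def per_socket_license_cost(populated_sockets: int) -> int:
    """One license per populated CPU socket."""
    return populated_sockets * PRICE_PER_SOCKET_LICENSE

dual_socket_cost = per_socket_license_cost(2)    # e.g., 2 x 12-core server
single_socket_cost = per_socket_license_cost(1)  # e.g., 1 x 24-core server

print(f"dual-socket licensing:   ${dual_socket_cost}")
print(f"single-socket licensing: ${single_socket_cost}")
print(f"savings per server:      ${dual_socket_cost - single_socket_cost}")
```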
The evolving data center
Dual-socket servers are a mainstay of data centers. Multicore CPUs, first introduced in the mid-2000s, further reinforced the dual-socket standard and have since grown to 28 and 32 cores and beyond. Increased core counts and multithreading ultimately helped build large scale-out systems and diversify applications for businesses and end users.
Dual-socket servers have some key limitations, however. In terms of performance scaling, the non-uniform memory access (NUMA) link between the two sockets has become a bottleneck as the many cores on each socket contend for resources, such as memory and I/O, across that link. Plus, due to inefficient CPU resource usage, older dual-socket servers are often heavily underused in data centers.
That lack of resource efficiency is partly because older dual-socket designs can't take advantage of newer CPU capacity and recent innovations, including the additional PCIe lanes modern networking requires and significantly larger RAM capacities. Powerful dual-socket servers also increase rack power density and create hotspots, as well as unused rack space, in enterprise data centers.
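Admins who want to see whether a box actually spans multiple NUMA domains can check the Linux sysfs topology. The minimal sketch below assumes the standard /sys/devices/system/node layout is present; numactl --hardware or lscpu reports the same information.

```python
# Inspect NUMA topology on a Linux server via sysfs. A single reported node
# generally indicates one NUMA domain; two or more nodes mean memory is split
# across sockets and cross-node (NUMA-link) traffic is possible.
# Assumes the standard /sys/devices/system/node layout.

import glob
import os
import re

for node_dir in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
    node_name = os.path.basename(node_dir)          # e.g. "node0"
    with open(os.path.join(node_dir, "cpulist")) as f:
        cpus = f.read().strip()                     # e.g. "0-11,24-35"
    mem_kb = 0
    with open(os.path.join(node_dir, "meminfo")) as f:
        for line in f:
            match = re.search(r"MemTotal:\s+(\d+)\s+kB", line)
            if match:
                mem_kb = int(match.group(1))
    print(f"{node_name}: CPUs {cpus}, memory {mem_kb // 1024} MB")
```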
Finally, individual processor performance has largely maxed out; processors are becoming smaller -- not necessarily faster. And that's helped to drive the interest in single-socket servers.
Single- vs. dual-socket: Assessing advantages
Organizations regularly overbuy processing in the form of dual-socket servers because they must meet the core counts and memory requirements of newer use cases and workloads. This model often results in underutilized capacity and a mismatch between data center infrastructure design and the resource needs of data-intensive use cases, such as virtual reality and machine learning, as the sizing sketch below illustrates.
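One way to avoid that mismatch is to size against measured peak demand rather than socket count. The following Python sketch is a simple illustration with made-up workload figures and server specs; it only checks core and memory headroom, not I/O or licensing.

```python
# Back-of-the-envelope right-sizing check: does a candidate server cover the
# measured peak demand with headroom? All workload figures and server specs
# below are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class ServerSpec:
    name: str
    sockets: int
    cores: int        # total physical cores
    memory_gb: int

candidates = [
    ServerSpec("dual-socket, 2 x 12 cores", sockets=2, cores=24, memory_gb=384),
    ServerSpec("single-socket, 1 x 24 cores", sockets=1, cores=24, memory_gb=512),
]

peak_cores_needed = 20       # assumed measured peak CPU demand
peak_memory_gb_needed = 300  # assumed measured peak memory demand
headroom = 0.85              # plan to run at no more than 85% of capacity

for spec in candidates:
    fits = (peak_cores_needed <= spec.cores * headroom
            and peak_memory_gb_needed <= spec.memory_gb * headroom)
    verdict = "covers the workload" if fits else "falls short"
    print(f"{spec.name} ({spec.sockets} socket(s)): {verdict}")
```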
Single-socket servers take advantage of advances in server processors and form factors. For example, a 1U single-socket server can provide the I/O and compute power necessary to handle data-intensive workloads, run software-defined storage, expand virtualization capabilities and support edge deployments.
This can bring savings on software license fees, and IT departments can use the improved thermal properties of single-socket servers to limit power consumption and reduce energy costs. Moreover, they can deploy these cooler-running servers in remote locations and confined spaces, such as edge data centers.
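For a rough sense of the energy side, the sketch below annualizes power draw at an assumed electricity rate. The wattages and rate are illustrative placeholders, and cooling overhead (PUE) is ignored.

```python
# Rough annual energy-cost comparison. The wattages and electricity rate are
# illustrative assumptions; substitute measured draw, local rates and PUE.

HOURS_PER_YEAR = 24 * 365
RATE_USD_PER_KWH = 0.12  # assumed electricity rate

def annual_energy_cost(average_watts: float) -> float:
    """Annual cost of running a server at a given average draw."""
    return average_watts / 1000 * HOURS_PER_YEAR * RATE_USD_PER_KWH

dual_socket_watts = 450    # assumed average draw, dual-socket 2U server
single_socket_watts = 300  # assumed average draw, single-socket 1U server

print(f"dual-socket:   ${annual_energy_cost(dual_socket_watts):,.0f} per year")
print(f"single-socket: ${annual_energy_cost(single_socket_watts):,.0f} per year")
```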
Pushing beyond dual-socket limitations with CXL and Gen-Z
The cost, energy and performance benefits of single-socket servers are considerable, but new interconnect standards such as Compute Express Link (CXL) and the Gen-Z Consortium's fabric also help address dual-socket limitations.
The CXL initiative supports an open ecosystem that enables faster data center server hardware and related enhancements. It supports rapidly expanding workloads by providing improved interconnects and better memory coherency for data-intensive initiatives.
CXL standards dovetail with the Gen-Z fabric, a memory-semantic (read/write) protocol that provides high-speed connectivity to processors and accelerators and supports memory expansion. The Gen-Z fabric helps ensure cooler, faster data center systems and provides ready access to more memory. It lets admins copy data between memory domains (node to node or CPU to accelerator), which improves performance and reduces power consumption.
The Gen-Z open systems interconnect can deliver high efficiency, increased bandwidth and low latency. It provides rack-scale expansion beyond the server chassis, which helps to overcome challenges with the existing CPU-memory relationship and works to eliminate system bottlenecks.
The Gen-Z fabric relies on the CPU's load/store memory semantics to improve performance by unifying communication paths. This approach enables more efficient data transfers that offload compute resources, optimizes interconnect usage and reduces software overhead.
The future of single-socket servers
For organizations that don't require resource-intensive processing, single-socket servers can reduce both upfront and ongoing expenses. That's especially relevant as companies contemplate server hardware refreshes and look to adopt newer technologies.
Single-socket systems add significant hardware features, such as memory partitioning and network protections, and can provide higher levels of security. The combination of lower total cost of ownership and more efficient use of data center resources, in the form of increased compute power and reduced energy costs, can clearly benefit organizational operations.