Adjust these 4 server BIOS settings to improve performance

You can configure BIOS settings to get more productivity out of servers, but some high-performance settings could increase power consumption or put reliability at risk.

One way to get more out of server hardware is to configure the machine to deliver the best possible performance. Adjusting the BIOS settings can increase server performance, but doing so may also increase power consumption and cause the server to run at a higher temperature.

A disclaimer about server BIOS settings

If adjusting a few BIOS settings can yield better performance, that raises an obvious question: why didn't the manufacturer make those the default settings?

In some cases, high-performance settings may affect the server's stability. In other cases, improved performance may increase the server's temperature, energy consumption, or both. Manufacturers seek to strike a balance between performance, energy use and reliability. You should remember that additional performance may come at a price.

Remember that every server is different. The make, model, architecture and age of a server all affect the BIOS settings that are available. Some of the following settings may not be available on every server.

Non-uniform memory access

Non-uniform memory access (NUMA) technology links a series of nodes together via a high-speed interconnect. The idea is that each CPU has its own built-in memory controller that directly links to memory that is considered local to that CPU.

A CPU can access memory within its own node (local) or within another node (remote). Local memory access is faster than remote memory access, because remote memory access requires a data transfer across a NUMA interconnect.

Node interleaving technology stripes data across memory controllers and offsets the performance hit associated with remote memory access. Some systems automatically enable node interleaving within the system BIOS, but servers that act as virtualization hosts usually perform better with node interleaving disabled.

This is due to the way a system determines which memory is local to a CPU and which is remote. When an admin disables node interleaving, the system creates a System Resource Affinity Table (SRAT). This table identifies the memory that is local to each CPU.

The system uses this information to favor local memory whenever possible, because remote memory access involves crossing the NUMA interconnect, which increases both latency and resource contention. If an administrator enables interleaving, the system does not create a SRAT. This leaves the hypervisor unaware of the underlying NUMA architecture and causes it to use memory indiscriminately, regardless of its location.

If you disable interleaving, modern hypervisors usually take a best-effort approach to memory allocation. The hypervisor tries to allocate all of a VM's memory within a single NUMA node but uses multiple NUMA nodes if extra memory is needed and the hypervisor's configuration policies allow NUMA spanning.
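
You can confirm what the firmware actually exposed to the OS without rebooting into the BIOS. Below is a minimal sketch, assuming a Linux host that publishes NUMA information through the /sys/devices/system/node sysfs hierarchy; on a server with node interleaving enabled, it will typically report a single node.

```python
"""Print the NUMA topology the kernel built from the firmware's SRAT.

A minimal sketch, assuming a Linux host with NUMA information exposed
under /sys/devices/system/node. With node interleaving enabled in the
BIOS, you will typically see only one node here.
"""
from pathlib import Path

NODE_ROOT = Path("/sys/devices/system/node")

for node in sorted(NODE_ROOT.glob("node[0-9]*"), key=lambda p: int(p.name[4:])):
    cpulist = (node / "cpulist").read_text().strip()
    # The distance file lists the relative access cost to every node;
    # the smallest value (usually 10) marks the node's own local memory.
    distances = (node / "distance").read_text().split()
    print(f"{node.name}: CPUs {cpulist}, distances {distances}")
```

On a two-socket server with interleaving disabled, this typically prints two nodes, each with its own CPU list and a higher distance value for the remote node's memory.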

Power management

Few server BIOS settings have as big an effect on overall performance as the power management settings. Many power management settings are vendor-specific, so check your server vendor's website for its recommendations.

The easiest power management feature to examine is demand-based scaling (DBS). DBS automatically adjusts the processor's clock speed to increase performance when additional processing power is needed and saves power during periods of low CPU usage.

Many servers control DBS through power management profiles. The default behavior is usually to let the OS control processor frequency scaling, but this requires a bit of CPU overhead. Not all OSes support this type of power management, which can be especially problematic for servers that run bare-metal hypervisors.
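
On a Linux server, you can see which component currently controls frequency scaling. Below is a minimal sketch, assuming the kernel exposes the standard cpufreq interface in sysfs; if the BIOS power profile keeps control in the hardware, some of these files may be absent.

```python
"""Show who controls CPU frequency scaling on a Linux host.

A minimal sketch, assuming the standard cpufreq sysfs interface.
If the BIOS profile keeps power management in the hardware, the
scaling driver and governor files may not be exposed at all.
"""
from pathlib import Path

cpu0 = Path("/sys/devices/system/cpu/cpu0/cpufreq")

for name in ("scaling_driver", "scaling_governor", "scaling_available_governors"):
    f = cpu0 / name
    if f.exists():
        print(f"{name}: {f.read_text().strip()}")
    else:
        print(f"{name}: not exposed (BIOS may own power management)")
```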

If you are trying to get the best possible server performance, look for a power management profile geared toward performance rather than power conservation.

Simultaneous multithreading

Many servers with Intel Xeon processors support simultaneous multithreading (SMT), which Intel brands as Hyper-Threading. SMT makes the OS think that the CPU has twice as many cores as it physically has: each physical core is presented as two logical cores.

Intel claimed that SMT improves performance by as much as 30%, though SMT can hurt performance if you use the server as a virtualization host. This is particularly true for VMs that are only allocated a single logical processor or for infrastructure in which CPU cores are overcommitted.

Most servers that support SMT automatically enable this feature, but you can disable it within the server BIOS settings. You might benchmark your server with SMT enabled and then with it disabled to determine which setting yields the best performance.
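
Before each benchmark run, it helps to verify that the BIOS change actually took effect. Below is a minimal sketch, assuming a Linux kernel new enough (roughly 4.18 or later) to expose /sys/devices/system/cpu/smt, with a fallback that compares logical CPUs against unique physical cores.

```python
"""Check whether SMT is active before a benchmark run.

A minimal sketch, assuming a Linux host. Prefers the kernel's SMT
control file (kernel 4.18+); otherwise counts unique physical cores
via the per-CPU topology files.
"""
from pathlib import Path

smt_active = Path("/sys/devices/system/cpu/smt/active")
if smt_active.exists():
    print("SMT active:", smt_active.read_text().strip() == "1")
else:
    # Fallback: count unique (package, core) pairs vs. logical CPUs.
    cores = set()
    cpus = sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*"))
    for cpu in cpus:
        topo = cpu / "topology"
        if topo.exists():
            pkg = (topo / "physical_package_id").read_text().strip()
            core = (topo / "core_id").read_text().strip()
            cores.add((pkg, core))
    print(f"{len(cpus)} logical CPUs on {len(cores)} physical cores")
```

If the logical CPU count equals the physical core count, SMT is disabled; twice the core count means it is enabled.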

Core speed

Another BIOS setting that affects server performance is the speed of the CPU cores. Turbo Boost, found on some Intel Xeon servers, is one such feature. Turbo Boost is similar to overclocking in that it allows CPU cores to run at a higher clock speed when necessary. Intel's Turbo Boost Technology is based on two metrics: the base clock speed and the max turbo frequency.

The CPU is designed to run at its base clock speed, even if it can run significantly faster. This helps the server to save energy. If the server is running a CPU-intensive workload, then Turbo Boost dynamically increases the CPU clock speed on an as-needed basis, up to the max turbo frequency.

Turbo Boost typically increases CPU core frequency only if the CPU is consuming less than its rated power and operating below its rated temperature. If the temperature rises beyond a predefined threshold, a thermal throttling feature reduces the CPU speed to bring the temperature back within a nominal range.

The actual amount of additional processing power that Turbo Boost yields depends on the number of active CPU cores, but it often provides two to three frequency steps.
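
You can observe the gap between the base clock and the maximum turbo frequency from a running Linux system. Below is a minimal sketch, assuming a host using the intel_pstate driver, which exposes a base_frequency file; other cpufreq drivers report only the min/max bounds.

```python
"""Compare a CPU's base clock against its maximum turbo frequency.

A minimal sketch, assuming a Linux host using the intel_pstate
driver, which exposes base_frequency in sysfs; other cpufreq
drivers only report the min/max bounds. Values are in kHz.
"""
from pathlib import Path

cpufreq = Path("/sys/devices/system/cpu/cpu0/cpufreq")

def read_khz(name: str) -> int | None:
    f = cpufreq / name
    return int(f.read_text()) if f.exists() else None

base = read_khz("base_frequency")        # intel_pstate only
max_turbo = read_khz("cpuinfo_max_freq")
if base and max_turbo:
    print(f"Base: {base / 1e6:.2f} GHz, max turbo: {max_turbo / 1e6:.2f} GHz")
    print(f"Turbo headroom: {(max_turbo - base) / 1e6:.2f} GHz")
```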

If you consider using Turbo Boost, check whether the BIOS C-state feature is disabled. C-state is a power-saving feature found on some Intel Xeon servers. It drops the voltage of idle CPU cores, which reduces the core frequency.

When the system reduces the frequency of one core, it reduces the frequency of all active cores on that CPU. If you are trying to get the maximum processing power from your server, you should avoid any configuration that could result in cores running at a reduced frequency.
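
To verify the C-state setting from the OS, you can list the idle states the kernel is allowed to use. Below is a minimal sketch, assuming the Linux cpuidle sysfs interface; if C-states are disabled in the BIOS, only a shallow POLL state (or nothing at all) typically appears.

```python
"""List the C-states the kernel can place a core into.

A minimal sketch, assuming the Linux cpuidle sysfs interface.
With C-states disabled in the BIOS, only the POLL state (or no
states at all) typically shows up here.
"""
from pathlib import Path

for state in sorted(Path("/sys/devices/system/cpu/cpu0/cpuidle").glob("state*")):
    name = (state / "name").read_text().strip()
    latency = (state / "latency").read_text().strip()
    print(f"{state.name}: {name} (exit latency {latency} us)")
```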

There are three versions of Intel's Turbo Boost Technology. The original debuted in 2008, and most servers in use today support Turbo Boost Technology 2.0, which arrived in 2011.

Intel's Turbo Boost Max Technology 3.0, introduced in 2016, is like version 2.0 in most regards. The biggest difference is that Turbo Boost Max 3.0 can boost the clock speed of individual cores, while version 2.0 requires all boosted cores to run at the same speed. According to Intel, Turbo Boost Max Technology 3.0 can increase single-threaded performance by up to 15%.
