
Optimizing server energy efficiency

Reducing data center energy consumption is a hot topic in the IT industry, but server energy efficiency has only recently become a focus of discussion. Improving server efficiency means optimizing the main components that impact a server's overall power consumption: power supply, fans, CPU, memory, hard drives, I/O cards and ports, and supporting chipsets.

Data center energy efficiency is the hot topic of the day. IT operators are working to quantify and improve the efficiency of their data centers, and that means improving server energy efficiency as well.

Of course, we all want the fastest, most powerful servers in our data centers. Although energy efficiency (green!) is the buzzword, historically we have tended to think about energy usage only when our power or cooling systems are maxed out and need to be upgraded.

In the rush to optimize, virtualize and consolidate in the name of making computing-related operations more effective and efficient (and, of course, green), we've heard many server manufacturers profess that their products provide the most computing power for the least energy. Only recently have server manufacturers begun to discuss or disclose the efficiency of their servers. Currently there are no real standards for overall server energy efficiency.

There are several key components that impact the total energy consumed by a typical server.

  • Power supply
  • Fans
  • CPU
  • Memory
  • Hard drives
  • I/O cards and ports
  • Other motherboard components -- supporting chipsets

These components exist in both conventional servers and blade servers, but in the case of blade servers, some items -- such as power supplies, fans and I/O ports -- are shared on a common chassis, while the CPU and other related motherboard items are located on the individual blades. Depending on the design of the blade server, the hard drives can be located on either the chassis or the blades.

In addition to the components listed above, the OS and virtualization software impact the overall usable computing throughput of the hardware platform.

Don't judge a server by its nameplate

When we need to know how much power a server requires, we usually turn to the nameplate. However, the nameplate simply represents the maximum amount of power the unit could draw, not what it actually draws. Let's examine where power goes and what it really costs to operate a server. We don't always stop to think what it costs to operate a "small" server that typically consumes 500 W of power. That server also generates 500 W of heat (approximately 1,700 BTU per hour) that the cooling system must remove. The typical data center has a power usage effectiveness (PUE) of 2.0, which means it consumes 1 W of overhead (power distribution losses and cooling power) for each watt of "plug power" delivered to the IT load itself. This means that it takes 1,000 W, or 1 kW, of total power for the data center to run a small 500 W server.

A single kilowatt does not sound like much in a data center until you factor in that it is consumed continuously (the proverbial 24/7/365), which adds up to 8,760 kWh per year!

At 11.5 cents per kWh, 1 kW costs about $1,000 per year. (Of course, 11.5 cents is just an average, and in many areas the cost is much higher.) Over a three-year period, that one "small" 500 W server can cost $3,000 or more just in energy consumption. In fact, since many of these small servers cost less than $3,000, you can see why some analysts have predicted that the power to operate a server will exceed the server's purchase price, especially as the cost of energy rises.
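
For readers who want to check that arithmetic, here is a minimal Python sketch using the same figures (500 W server, PUE of 2.0, 11.5 cents per kWh); the function and constant names are illustrative, not from any particular tool.

```python
# A minimal sketch of the cost arithmetic above, using the article's
# figures. Names are illustrative.

HOURS_PER_YEAR = 8760  # 24 x 365, continuous operation

def annual_energy_cost(server_watts, pue=2.0, rate_per_kwh=0.115):
    """Yearly cost to run one server, including facility overhead (PUE)."""
    total_kw = server_watts / 1000 * pue   # plug power plus support power
    return total_kw * HOURS_PER_YEAR * rate_per_kwh

print(f"Annual cost: ${annual_energy_cost(500):,.0f}")          # ~ $1,007
print(f"Three-year cost: ${annual_energy_cost(500) * 3:,.0f}")  # ~ $3,022
print(f"Heat load: {500 * 3.412:,.0f} BTU/hr")                  # ~ 1,706 BTU/hr
```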

Let's examine where the power goes and what we can do to optimize it.

Power supplies
The power supply is, of course, where power enters the server and is converted from 120-240 V AC to 3.3, 5 and 12 V DC. Until recently, efficiency numbers were unpublished. In fact, the Environmental Protection Agency Energy Star Program, which mandated that all PCs have power supplies of at least 80% efficiency, specifically exempted servers! This is one area where a few extra dollars spent to purchase a server with an 80% or greater efficiency rating can pay back large returns in energy cost savings over the estimated operational three- to five-year life of the server.

The difference between a 70% and an 87% efficient power supply results in a 20% overall energy savings for server power usage (for the same internal server load, input power drops from 1/0.70 to 1/0.87 of that load, a reduction of roughly 20%), which also means a similar range of overall energy reduction for the data center.

Moreover, these efficiency ratings are usually provided only at the power supply's maximum rated load, which does not reflect the actual loads the server will see in production. Typically, a server draws only 30% to 50% of the maximum power supply rating (the number on the nameplate), and at that level the power supply's fixed losses drive its real-world efficiency below the value rated at full load. In addition, since we also want redundancy to improve uptime, we typically order servers with redundant power supplies. These redundant supplies normally share the internal load, so each one carries only half of the actual load, which means each power supply runs at only 13% to 25% of its rated load. At that point, the fixed losses are an even greater percentage of the actual power drawn by the internal server components.
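
A short sketch makes both effects concrete: input power rises as efficiency falls, and load sharing across redundant supplies cuts each supply's share of rated load roughly in half, in line with the 13% to 25% range cited above. The figures are the article's examples; the helper function is hypothetical.

```python
# Sketch of the two effects described above, using the article's example
# figures; the helper function is not from any vendor tool.

def input_power(dc_load_watts, efficiency):
    """AC input power required to deliver a given internal DC load."""
    return dc_load_watts / efficiency

# Efficiency effect: the same 350 W internal load at two efficiencies.
print(f"{input_power(350, 0.70):.0f} W input at 70% efficiency")  # 500 W
print(f"{input_power(350, 0.87):.0f} W input at 87% efficiency")  # ~402 W, ~20% less

# Redundancy effect: a server drawing 30%-50% of its nameplate rating,
# shared across two supplies, leaves each supply lightly loaded.
for draw in (0.30, 0.50):
    print(f"{draw:.0%} of nameplate -> {draw / 2:.0%} of rating per supply")
```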

When buying new servers, the least expensive unit may not be the best choice, even if the computing performance specifications are the same. When specifying a new server, this is one of the best places to start saving energy. If the server vendor doesn't publish or can't provide the power supply efficiency, think twice about whether the server is a good value.

In fact, if you're shopping for a large number of servers, it pays to invest in testing the total power drawn by the manufacturers and models you're considering, specifically when loaded with your OS and applications, both at idle and at full computing load. By spending an extra $50 to $100 on a more efficient server now, you may save several hundred dollars in total energy costs over the life of the server. Moreover, it may not be necessary to upgrade your power and cooling infrastructure.

Another method that can save about 2% to 3% in energy usage is to operate the servers at 208 or 240 V instead of 120 V, since power supplies (and the power distribution system) are more efficient at higher voltages.

Server fans
After the power supply, server fans are the heaviest consumers of power (other than the computing-related components themselves). As servers have become smaller, commonly packing several multicore CPUs into a 1U-high chassis, multiple small, high-velocity fans are needed to move a sufficient amount of air through the server. They need to push air through very small, restrictive airflow spaces within the server as well as through the small intake and exhaust areas at the front and rear of the chassis. These fans can consume 10% to 15% or more of the total power drawn by the server. The fans draw power from the power supply, thus increasing the input power to the server, again multiplied by the inefficiency of the power supply. In addition, most or all of the airflow in 1U servers is routed through the power supply fans, since there is virtually no free area on the rear panel to exhaust hot air.

To improve efficiency, many new servers have thermostatically controlled fans, which raise the fan speed as more airflow is needed to cool the server. This is an improvement over the old method of fixed-speed server fans that run at maximum speed all the time, but these variable-speed fans still require a lot of energy as internal heat loads and/or input air temperature rise.

For example, if the server internal CPUs and other computing-related components draw 250 to 350 W from the power supply, the fans may require 30 to 75 W to keep enough air moving through the server. This results in an overall increase in server power draw as heat density (and air temperature) rises in the data center. In fact, studies that measured and plotted fan energy use versus server power and inlet air temperatures show some very steep, fan-related power curves in temperature-controlled fans of small servers.
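
That steepness is consistent with basic fan physics: by the fan affinity laws, a fan's power draw scales with roughly the cube of its speed. The sketch below idealizes the curve using the 75 W figure above as the full-speed draw; it is illustrative, not measured data for any specific server.

```python
# Idealized fan power curve based on the fan affinity laws (power scales
# with roughly the cube of speed). The 75 W full-speed figure is the
# article's upper example.

def fan_power(full_speed_watts, speed_fraction):
    """Approximate fan power at a fraction of full speed (affinity law)."""
    return full_speed_watts * speed_fraction ** 3

for pct in (0.5, 0.7, 0.9, 1.0):
    print(f"{pct:.0%} speed -> {fan_power(75, pct):5.1f} W")
# 50% speed ->   9.4 W
# 70% speed ->  25.7 W
# 90% speed ->  54.7 W
# 100% speed ->  75.0 W
```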

CPU efficiency
The CPU is the heart of every server and the largest computing-related power draw. While both Intel and AMD offer many families of CPUs, all aimed at providing more computing power per watt, the overall power requirement of servers has continued to rise along with the demand for computing power. For example, the power requirement for an Intel CPU ranges from 40 to 80 W for a dual-core Intel Xeon processor and from 50 to 120 W for a quad-core processor, depending on version and clock speed. As mentioned previously, many servers are configured with two, four or even eight dual- or quad-core CPUs. Naturally, we all want the fastest servers we can buy and hope they have a three-year usable life before the next wave of software or applications overwhelms them.

It has been well documented that the average CPU is idle over 90% of the time and only hits peak demand for very short periods, yet continuously draws a substantial portion of its maximum power requirement 24 hours a day. Moreover, even when servers are equipped with power-saving features in their hardware and software (as most are), these features are usually disabled by server administrators.

One of the primary goals of virtualization is to decrease the number of servers that mostly run at idle and consolidate their functions and applications onto fewer, more powerful servers that run at a higher average utilization rate.

The number and types of CPUs you choose will ultimately depend on the performance requirements and computing loads your applications face. By trying to match the computing load with the number of CPUs and their performance capabilities, you will optimize the efficiency of each server.

Memory efficiency
When specifying the configuration of a server, memory is often overlooked as a factor that determines the overall actual power usage.

Memory chips vary widely from vendor to vendor, and their power consumption is usually not well documented. Generally speaking, the more memory there is per module, the lower the power per gigabyte of memory. Also, the faster the memory is, the more power it draws (this is tied to the speed of the server's memory bus and CPUs).

Example: A major manufacturer's server power estimator tool shows the following power directly attributable to memory for a 1U server equipped with a 3.0 GHz Xeon 5160 CPU and a 1,333 MHz frontside bus.

Total memory (GB)   Module size (GB)   Number of modules   Watts   Watts per GB
8                   1                  8                   64      8.00
8                   2                  4                   40      5.00
8                   4                  2                   22      2.75
16                  2                  8                   81      5.06
16                  4                  4                   44      2.75
32                  4                  8                   89      2.78

Ideally, get as much memory as your application needs, but do not maximize server memory based on the belief that you can never have too much. Over-specified, unused memory increases initial costs and draws unnecessary power over the life of the server. Even though they sometimes cost more per gigabyte, larger, more energy-efficient memory modules can lower the amount of power consumed over the life of the server. Larger modules also leave more sockets open if you need to add memory in the future.

For example, simply using 4 GB modules rather than 1 GB modules consumes 42 W less per server. This saves $84 of energy costs per year, or $252 in total energy cost savings over the typical three-year life of the server (based on 42 W of memory energy, plus 42 W of infrastructure support power, at a PUE of 2.0).
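
A quick sketch verifies that arithmetic with the figures from the memory table above; small differences from the quoted $84 and $252 are rounding.

```python
# Verifying the module-size example with figures from the table above:
# 8 x 1 GB modules draw 64 W versus 22 W for 2 x 4 GB modules.

PUE = 2.0
RATE_PER_KWH = 0.115   # the article's average electricity rate
HOURS_PER_YEAR = 8760

watts_saved = 64 - 22                 # 42 W at the server
facility_watts = watts_saved * PUE    # 84 W including support power
annual = facility_watts / 1000 * HOURS_PER_YEAR * RATE_PER_KWH
print(f"${annual:.0f} per year, ${annual * 3:.0f} over three years")
# -> $85 per year, $254 over three years (the article rounds to $84/$252)
```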

Hard drives
The capacity, physical density and energy efficiency of hard drives have outpaced the performance increases of many other computing components. However, we seem to have an insatiable appetite for data storage, so the net energy picture is almost a zero-sum game. Still, the power required by the newer, small form factor 2.5-inch drives is fairly low compared with the full-size 3.5-inch drives of a generation ago. (Remember when the typical server drive was 1.25-inch half-height and 5.25 inches wide? Seems like it was just last millennium!)

Also, since the magnetic density of the media continues to increase per platter, larger-capacity hard drives use the same energy as smaller-capacity drives (assuming the same drive type). For example, when in use, the enterprise 2.5-inch Seagate Savvio 15K RPM drive consumes approximately 10 W, and 6 W when idle; the 36 GB and 72 GB versions use the same power. Spindle speed also has a direct effect on power draw: in the same family's 10,000 RPM class, both the 146 GB and 300 GB drives consume 7 W when in use and 3.5 W when idle. Unless you have a specialized application that requires faster disk response, the 10,000 RPM drive offers far more storage per watt for general-purpose storage. Consider using the lower-power drives when possible -- the power savings add up.

Hard drive chart: Seagate Savvio enterprise 2.5-inch drives

RPM      Drive size (GB)   Idle watts   Active watts   GB per idle watt   GB per active watt
15,000   36                6            10             6                  3.6
15,000   72                6            10             12                 7.2
10,000   146               3.5          7              42                 21
10,000   300               3.5          7              85                 43
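
The storage-per-watt columns are easy to recompute for any drive list. Here is a small sketch using the Savvio figures as quoted above.

```python
# Computing the storage-per-watt columns of the chart above from the
# quoted Savvio figures.

drives = [  # (rpm, capacity_gb, idle_watts, active_watts)
    (15000,  36, 6.0, 10.0),
    (15000,  72, 6.0, 10.0),
    (10000, 146, 3.5,  7.0),
    (10000, 300, 3.5,  7.0),
]

for rpm, gb, idle_w, active_w in drives:
    print(f"{rpm:,} RPM {gb:>3} GB: {gb / idle_w:5.1f} GB per idle watt, "
          f"{gb / active_w:5.1f} GB per active watt")
```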

Recently, solid-state drives (SSDs) for notebooks have increased in capacity to as much as 512 GB and have started to come down in price. They'll soon make inroads into the server market, resulting in even more energy savings, especially compared with 15,000 RPM drives.

Of course, check with your server vendor to see what your OEM drive options are.

I/O cards and ports
While most IT employees don't consider how much power network interface cards (NICs) and other I/O cards draw, these cards present an opportunity to save several watts per server. Some servers come with embedded cards, while others use add-on cards or a combination of both. The chart below shows the range of power used by the cards. Check the specs on your card of choice: power is either listed directly in watts or as current draw and voltage. To calculate the power in watts, multiply amps by volts (e.g., 1.2 A x 5 V = 6 W); the short sketch after the chart works through this.

I/O card chart (watts per card)

Device                                   Low   High   Watts saved
Network card – Gigabit Ethernet (GbE)    3.3   22     7
Network card – 10 GbE                    10    25     12
RAID controller                          10    24     14
Fibre Channel                            5     20     10
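
As a quick illustration of the watts-equals-amps-times-volts rule and of how per-card savings scale, here is a sketch; the two-cards-per-server, 40-server fleet is a made-up example.

```python
# The watts = amps x volts rule from the text, plus a hypothetical
# example of how per-card savings scale across a fleet.

def card_watts(amps, volts):
    """Power drawn by a card, from its current draw and supply voltage."""
    return amps * volts

print(f"{card_watts(1.2, 5):.0f} W")  # the article's 1.2 A x 5 V = 6 W example

# Choosing a GbE card that saves 7 W (per the chart above) adds up:
cards_per_server, servers = 2, 40
print(f"{7 * cards_per_server * servers} W saved across the fleet")  # 560 W
```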

When selecting a NIC, we tend to want the fastest throughput and often fail to consider power usage. For example, Intel makes several NICs ranging in power from the Intel PRO/1000 PT, which draws only 3.3 W, to a 10 Gb Dual Fiber XF card, which draws 14 W. In the case of OEM server NICs, a major manufacturer's power estimator tool indicates 22 W for its OEM PCI GbE card. Since many servers have embedded NICs, be aware that these may draw power even when disabled. If you intend to use multiple NICs for redundancy or throughput, a careful comparison of internal or OEM cards can save several watts per card.

Other motherboard components: Supporting chipsets
In order to form a complete system, each server requires its own supporting chipsets. It is beyond the scope of this article to compare the variety of systems on the market, and this is where each vendor touts the claim that its server is the most energy-efficient system available. If the motherboard already has enough onboard NICs, RAID controllers or other I/O devices to meet your requirements, you may not need to add extra cards.

Each major manufacturer seems to have a power estimating tool for its servers. It is not meant to be an absolute indicator of the actual power that the server will draw, but it will provide a good estimate and a way to compare different components and configurations.

Improving server efficiency: The bottom line
The chart below is a hypothetical comparison of two servers. Server A uses lower-efficiency or older components; Server B uses the latest, most efficient components.

Component        Watts used: Server A   Watts used: Server B   Watts saved   Percent saved
Fans             75                     50                     25            33%
CPU              100                    80                     20            20%
Memory (16 GB)   81                     44                     37            46%
Hard drives (6)  60                     40                     20            33%
I/O cards        30                     20                     10            33%
Motherboard      30                     20                     10            33%
Total DC power   376                    254                    122           32%
Power supply     125                    41                     84            67%
AC input power   501                    295                    206           41%

Power supply

Server   AC input (watts)   Efficiency   DC output (watts)   Losses (watts)
A        501                75%          376                 125
B        295                86%          254                 41
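
The two charts tie together through a single relationship: AC input equals DC load divided by power supply efficiency. A short sketch reproduces the numbers.

```python
# Reproducing the charts above: AC input = DC load / efficiency, and the
# power supply losses are the difference.

def ac_input(dc_watts, efficiency):
    return dc_watts / efficiency

for name, dc, eff in (("A", 376, 0.75), ("B", 254, 0.86)):
    ac = ac_input(dc, eff)
    print(f"Server {name}: {ac:.0f} W AC input, {ac - dc:.0f} W of supply losses")
# Server A: 501 W AC input, 125 W of supply losses
# Server B: 295 W AC input, 41 W of supply losses -> 206 W (41%) saved
```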

Carefully comparing and selecting more efficient components and configuration options can potentially result in a 41% power saving, or over 200 W. In a data center with a PUE of 2.0, each server can save up to 400 W.

All these factors help to determine how much power your data center consumes. Carefully specifying and configuring your servers to meet but not exceed your computing requirements can add up to a savings of $2 per year for each watt you conserve.

Put another way, each watt per server that is saved represents over 50 kWh over the server's three-year service life (8,760 hours x 3 years = 26,280 hours; at a PUE of 2.0, that is 52.56 kWh per watt). In the above example, if 200 W is saved per server, the result is an energy saving of more than 10 megawatt-hours over a three-year period.
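
Here is that per-watt arithmetic spelled out, under the same three-year life and PUE of 2.0 assumptions.

```python
# The per-watt arithmetic from this paragraph, spelled out.

HOURS_3YR = 8760 * 3   # 26,280 hours over a three-year service life
PUE = 2.0

kwh_per_watt = HOURS_3YR * PUE / 1000        # 52.56 kWh per watt saved
print(f"{kwh_per_watt:.2f} kWh per watt over three years")
print(f"{200 * kwh_per_watt / 1000:.1f} MWh for a 200 W saving")  # ~10.5 MWh
```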

You can achieve a 10% to 20% difference in actual server power used in your production environment, which can save thousands of dollars per year in energy costs. It could also mean the difference between having to upgrade your data center's power and cooling infrastructure and continuing to operate within its existing capacity.

The last recommendation, and perhaps the simplest and most effective way to save energy, is to review the status and purpose of every IT device in your data center. Many studies have shown that a significant number of servers and other IT devices are no longer in production but are still powered up. No one seems to know which applications or functions they support, but no one wants the responsibility of switching them off. Take a total device inventory regularly -- you may find several servers, routers and switches that are unused yet still powered up. Once you find them, just turn them off.

ABOUT THE AUTHOR: Julius Neudorfer has been CTO and a founding principal of NAAT since its inception in 1987. He has designed and managed communications and data systems projects for both commercial clients and government customers.

