Three questions on ASHRAE standards for data centers
Evolving ASHRAE standards for temperature and humidity allow data center owners to save money on power and cooling demands.
The American Society of Heating, Refrigerating and Air-Conditioning Engineers plays an active role in shaping the design of modern data center facilities. ASHRAE Technical Committee 9.9 published an initial set of thermal guidelines for the data center in 2004, followed by a second edition in 2008 and a third in 2011. Each new edition defined classes of servers and other data center equipment and widened the acceptable ranges of operating temperature and humidity for those classes.
When the guidelines are properly implemented through careful facility design and systems deployment, a data center can reduce its reliance on traditional mechanical cooling, cutting cooling costs while maintaining the reliability of its hardware. Let's consider some common questions about the ASHRAE guidelines.
Q. Why should I be concerned with ASHRAE standards? With technologies like virtualization to lower my server count, I can just keep the data center cold or create hot aisle/cold aisle containment, right?
Yes, the ASHRAE guidelines are just that -- recommendations -- and the IT industry is not bound to follow them. But consider some of the factors influencing the IT industry.
First, mechanical refrigeration is a well-understood and established technology, but it is also very expensive. A business can spend millions of dollars on HVAC systems capable of cooling a data center facility. HVAC equipment also requires regular maintenance, which can incur unwanted downtime, and when cooling malfunctions occur, there is rarely a backup HVAC system online to take over.
Second, energy costs keep rising, making mechanical refrigeration ever more expensive without helping the business do "real" computing work. If a modern business is going to spend more money on electricity, the emphasis is on powering the servers, not the air conditioners.
Third, the traditional paradigm of mechanical cooling is fading as trends like server consolidation and high-density blade systems concentrate more computing in less space and possibly create hot spots that HVAC or "cold rooms" cannot address adequately.
ASHRAE guidelines give businesses a roadmap for operating suitably designed equipment at elevated temperature and humidity levels. If servers don't need as much cooling, HVAC bills can be reduced and practical forms of ambient cooling -- air-side or water-side economizers -- can be introduced to the environment.
Q. How is ASHRAE classifying equipment for elevated operating conditions? How do these classifications affect my current equipment?
ASHRAE defines four classes of data center equipment: A1, A2, A3 and A4. Here are the essential points (also restated in a short code sketch after the list):
- A1 equipment operates from 15 to 32 degrees Celsius at 20% to 80% relative humidity
- A2 equipment operates from 10 to 35 degrees Celsius at 20% to 80% relative humidity
- A3 equipment operates from 5 to 40 degrees Celsius at 8% to 85% relative humidity
- A4 equipment operates from 5 to 45 degrees Celsius at 8% to 90% relative humidity
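To make these envelopes concrete, here is a minimal sketch in Python that encodes the ranges listed above and checks whether a monitored intake reading falls inside the allowable envelope for a given class. The dictionary and function names are illustrative assumptions, not part of any ASHRAE publication, and real monitoring tools would also track dew point and rate-of-change limits.

    # Illustrative sketch: allowable dry-bulb temperature (degrees C) and relative
    # humidity (%) ranges for ASHRAE classes A1-A4, as summarized in the list above.
    # The structure and helper name are assumptions for this example only.
    ASHRAE_CLASSES = {
        "A1": {"temp_c": (15, 32), "rh_pct": (20, 80)},
        "A2": {"temp_c": (10, 35), "rh_pct": (20, 80)},
        "A3": {"temp_c": (5, 40), "rh_pct": (8, 85)},
        "A4": {"temp_c": (5, 45), "rh_pct": (8, 90)},
    }

    def within_envelope(equipment_class, temp_c, rh_pct):
        """Return True if a reading falls inside the allowable range for a class."""
        t_lo, t_hi = ASHRAE_CLASSES[equipment_class]["temp_c"]
        rh_lo, rh_hi = ASHRAE_CLASSES[equipment_class]["rh_pct"]
        return t_lo <= temp_c <= t_hi and rh_lo <= rh_pct <= rh_hi

    # A 38 C, 60% RH intake reading is acceptable for A3 hardware
    # but falls outside the A1 and A2 envelopes.
    print(within_envelope("A3", 38, 60))  # True
    print(within_envelope("A2", 38, 60))  # False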
Most current data center equipment falls into class A1 or A2, though a growing number of server manufacturers such as Dell are introducing equipment suited for class A3 or A4 operation. It is likely that more manufacturers will support a broader range of class A3 and A4 equipment as suitable data center designs appear.
Studies have shown that properly designed servers do not experience statistically significant increases in failure rates when operating at higher temperatures, so reliability is not impaired. But it's important to note that the ASHRAE guidelines are not backward compatible with older equipment. That is, a newer class A3 or A4 server will function fine in a lower-class environment, but older class A1 equipment will almost certainly not function reliably in a higher-class environment. In addition, ASHRAE allows equipment to exceed those ranges for short periods, providing more ride-through in the event of cooling problems.
It is unlikely that any business will invest in a new fleet of higher-classification servers simply to take advantage of higher operating ranges. Other factors such as the physical layout of racks and equipment and the availability of ambient cooling alternatives will dictate continued reliance on mechanical cooling. The transition to higher temperature and humidity levels is more likely to occur in phases over several technology refresh cycles and corresponding building renovations or during a "greenfield" construction project.
Q. What alternative cooling technologies do ASHRAE standards dictate? Do I need to forgo mechanical cooling entirely?
Several practical and effective cooling technologies already exist, and ASHRAE standards do not dictate the use of any particular one. For example, air-side economizers can be quite effective in climates where cooler outside air is available for at least part of the day or year. Similarly, water-side economizers can be excellent alternatives when a source of cold water is available, such as a nearby river, lake or other body of water.
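As a rough illustration of why climate drives economizer viability, the following minimal Python sketch estimates the share of hours in a sample of outdoor dry-bulb readings that fall at or below a chosen supply-air setpoint. The 27-degree threshold and the sample data are placeholder assumptions; a real assessment would use full-year weather data and also respect the humidity and dew point limits of the equipment class in use.

    # Illustrative sketch: estimate how often outdoor air alone could meet a target
    # supply temperature. The setpoint and readings are placeholders, not measured data.
    def economizer_hour_fraction(outdoor_temps_c, supply_setpoint_c=27.0):
        """Fraction of readings cool enough for air-side economization."""
        usable = sum(1 for t in outdoor_temps_c if t <= supply_setpoint_c)
        return usable / len(outdoor_temps_c)

    # Example with a small sample of hourly readings (degrees C)
    sample_hours = [18.5, 22.0, 25.5, 28.0, 31.0, 24.0, 19.5, 16.0]
    print(f"{economizer_hour_fraction(sample_hours):.0%} of sampled hours")  # 75%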
It is not necessary to abandon mechanical cooling when alternative cooling methods are adopted. In fact, many existing facilities in sub-optimal locations can be retrofitted to use alternative cooling as conditions permit and then supplement it with the existing HVAC plant on a limited basis -- sometimes significantly reducing reliance on mechanical cooling (and its costs) without abandoning it entirely.
The IT industry must eventually move away from traditional mechanical cooling and embrace more cost-effective and environmentally friendly ways to cool servers and other IT systems. ASHRAE standards such as the 2011 thermal guidelines offer one approach that helps organizations manage the transition to more efficient cooling without breaking the IT budget or discarding established equipment and cooling infrastructure.