
Saving Money at 40 Degrees C

Green Grid board members John Pflueger and Timothy Dueck give their perspective on running data centers at warmer temperatures. If green isn't your motivation, think about soaring energy costs and ROI.

In order to get a little more perspective on the Green Grid Association white paper, “Data Center Efficiency and IT Equipment Reliability at Wider Operating Temperature and Humidity Ranges,” I spoke with Green Grid board member John Pflueger, whose day job is Principal Environmental Strategist at Dell Computer, and Timothy Dueck, Chair of the Measurement and Metrics Work Group at Green Grid, and Global Solution Architect at Dell.


GCR: From my perspective, it seems like everybody wants to run their data centers warmer these days. How much reluctance do you see from data center operators to make this change?

Pflueger: You're going to find a mixed audience—some folks who readily adopt it, some who are very conservative. The data center crowd tends to be very risk-averse, because reliability is such a key issue. The industry very much likes to see existence proofs. But we do see a number of companies and individuals with statistics on how their PUE has declined [by running warmer].
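
NOTE: PUE (Power Usage Effectiveness) is The Green Grid's metric for facility efficiency: total facility energy divided by the energy delivered to IT equipment, so an ideal facility would score 1.0. As a minimal sketch of why warmer operation pulls PUE down, consider the Python arithmetic below; every kWh figure is an assumption for illustration, not a number from the interview or the white paper.

    def pue(total_facility_kwh, it_equipment_kwh):
        # PUE = total facility energy / IT equipment energy; 1.0 is the ideal.
        return total_facility_kwh / it_equipment_kwh

    it_load = 1_000_000            # kWh/year for servers, storage, network (assumed)
    cooling_traditional = 600_000  # kWh/year of cooling at ~20-25 C setpoints (assumed)
    cooling_warmer = 250_000       # kWh/year of cooling at warmer setpoints (assumed)
    overhead = 100_000             # kWh/year of distribution losses, lighting, etc. (assumed)

    print(pue(it_load + cooling_traditional + overhead, it_load))  # 1.70
    print(pue(it_load + cooling_warmer + overhead, it_load))       # 1.35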

Was this study done to encourage data center operators to trust the new ASHRAE ratings?

Dueck: All the work the [equipment] industry has done to expand the environmental envelope is not intended to drive anybody in that direction, other than to be an enabler. There are tradeoffs with higher temperatures. But they're not the tradeoffs that we used to think. The increase in the failure rate used to be thought of as fairly significant for each 10 degrees C increase in temperature. Now we can see that there's more room to increase temperatures [without excessive failures]. There's a group of data center operators out there who have gotten a huge ROI from a drop in energy use at warmer temperatures.
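
NOTE: The old rule of thumb Dueck refers to is usually stated in Arrhenius-style terms, with component failure rates roughly doubling for every 10 degrees C of temperature rise. The sketch below shows how such a per-10-degree factor compounds, comparing that older assumption with an arbitrary milder slope; both factors are purely illustrative and neither comes from the white paper.

    def relative_failure_rate(temp_c, baseline_c=25.0, factor_per_10c=2.0):
        # Relative failure rate vs. the baseline, compounding per 10 C step.
        return factor_per_10c ** ((temp_c - baseline_c) / 10.0)

    for temp in (25, 30, 35, 40):
        old_rule = relative_failure_rate(temp, factor_per_10c=2.0)
        milder = relative_failure_rate(temp, factor_per_10c=1.2)
        print(f"{temp} C: old rule {old_rule:.2f}x, milder slope {milder:.2f}x")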

Is the justification for moving to warmer temperatures in data centers just to lower energy costs?

Pflueger: Cost is a big piece of it. There are some companies that believe there is benefit in being more efficient in general—in particular, in being a better steward of the energy resources they have been entrusted with. But it's mostly ROI.

And it's not just cutting operating expenses. There are capital cost savings as well. You can build a slightly simpler data center that's less costly to construct. Not everybody is able to take advantage of this, to take advantage of air side economization, or the ability to design a data center that does not need a chiller. But as we learn more, as the enabling technology comes online, more and more people will find themselves with that possibility.

Dueck: Also, keep in mind that as technology becomes smaller and more dense, it uses a smaller footprint, so the energy costs become a bigger part of the total cost of ownership. Energy costs used to be equivalent to ten to fifteen percent of the total cost of the server. Now they're approaching one-third to one-half. Soon it will be 100 percent of the cost—the energy it uses will cost as much as the server itself.

Before, you could put one two-socket server in a 2U space, so 400 watts in a 2U space. Now you can have a 4U chassis with 80 sockets or 40 sockets, each at 4 kW. The cost of a rack and the physical square footage of the data center is fairly static. Now you might have 10 times the number of compute nodes in a given square footage, with major HVAC equipment. Power usage can be orders of magnitude higher.
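
NOTE: Both of Dueck's points lend themselves to back-of-the-envelope arithmetic. The sketch below estimates lifetime energy cost as a fraction of server price, then compares per-rack power for the two form factors he describes. Every input (server price, electricity rate, PUE, rack layout) is an assumption chosen for illustration, not a figure from the interview.

    # (1) Lifetime energy cost vs. purchase price -- all inputs assumed.
    server_price_usd = 5_000   # assumed price of a two-socket server
    avg_power_w = 400          # assumed average draw at the plug
    pue = 1.5                  # assumed facility overhead multiplier
    price_per_kwh = 0.10       # assumed electricity price, USD
    service_years = 4

    energy_kwh = avg_power_w / 1000 * service_years * 365 * 24 * pue
    energy_cost = energy_kwh * price_per_kwh
    print(f"Lifetime energy cost: ${energy_cost:,.0f} "
          f"({energy_cost / server_price_usd:.0%} of the server price)")

    # (2) Power per 42U rack, old vs. new form factors -- layout assumed.
    old_rack_kw = (42 // 2) * 0.4   # one ~400 W two-socket server per 2U
    new_rack_kw = (42 // 4) * 4.0   # one ~4 kW multi-node chassis per 4U
    print(f"Old rack: ~{old_rack_kw:.1f} kW, new rack: ~{new_rack_kw:.1f} kW "
          f"({new_rack_kw / old_rack_kw:.1f}x in the same floor space)")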

Pflueger: Plus, energy consumption is often a proxy for other costs. There are a good number of financial factors that are at least partially related to energy consumption. There's the direct cost of more energy. Every watt delivered to the server turns into heat. To supply that wattage you need power distribution equipment—so there's a capital cost for both the power and the cooling. You have to architect the space, and that requires intellectual time. It's a large waterfall of costs.

The ASHRAE data does show that server life is shortened at higher temperatures. But you believe the tradeoff in lower energy cost is worth it?

Pflueger: Look at the issue of the reliability of a server at 40 degrees C, where you're cooling the data center by bringing in air from the outside. The ASHRAE data shows the server life running continuously at 40 degrees C. But you're really only hitting 40 degrees C for a few hours per year, depending on where you're located. Most of the time you might be running at 25 degrees C. In order to really understand the effects of making that decision to run up to 40 degrees C, you have to realize that it's a combination of how many hours you're running at 20 degrees, how many at 25, how many at 40. That's a challenging concept to try to communicate.
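
NOTE: What Pflueger describes amounts to a time-weighted average of the relative failure rate over the inlet temperatures the servers actually see across the year (ASHRAE publishes relative failure-rate factors for this purpose). The sketch below shows the weighting; the hour counts and failure factors are hypothetical placeholders, not ASHRAE's published values.

    # Hours spent in each inlet-temperature bin over a year (hypothetical).
    hours_per_year = {20: 5000, 25: 3000, 30: 600, 35: 140, 40: 20}
    # Relative failure rate per bin, 1.0 = baseline at 25 C (hypothetical).
    relative_failure = {20: 0.9, 25: 1.0, 30: 1.2, 35: 1.4, 40: 1.6}

    total_hours = sum(hours_per_year.values())
    weighted = sum(hours_per_year[t] * relative_failure[t]
                   for t in hours_per_year) / total_hours
    # Prints roughly 0.96x: a few hot hours barely move the annual figure.
    print(f"Time-weighted relative failure rate: {weighted:.2f}x the 25 C baseline")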

Has anybody tried to quantify how reliable the servers are at varying temperatures?

Dueck: Yes. When I was at Intel, the company did a thorough study in Rio Rancho [New Mexico]. There were two data center modules, one cooled at the traditional 20 to 25 C, and the corresponding one next to it cooled with fresh air and no humidity control. But it did provide cooling above 90 degrees F (32 C) and heating below 35 F (1.7 C). There was not much difference [in failure rates].

NOTE: The results summarized below are from the 2008 Intel paper, “Reducing Data Center Cost with an Air Economizer.”

SUMMARY OF THE RESULTS:

The temperature of the economizer compartment varied from 64 degrees F [17.8 C] to about 92 degrees F [33 C] during the 10-month test period. Humidity varied from 4 percent to more than 90 percent “and changed rapidly at times.” The servers and the interior of the economizer compartment “became covered in a layer of dust.”

When using the economizer, energy consumption dropped by 74%. Based on these results, Intel calculated that total power usage for cooling per year would drop by about 67% – an annual savings of about $143,000 for a 500-kW data center and $2.87 million for a 10-MW data center. A 10-MW data center would also save up to 76 million gallons of water.
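
NOTE: The two dollar figures in the summary scale linearly with facility size, as a quick check shows. The sketch below uses only the numbers quoted above and assumes nothing about Intel's underlying cost model.

    savings_500kw = 143_000    # USD/year for a 500-kW data center (from the summary)
    savings_10mw = 2_870_000   # USD/year for a 10-MW data center (from the summary)
    size_ratio = 10_000 / 500  # 10 MW expressed in kW, divided by 500 kW

    print(size_ratio)                    # 20.0
    print(savings_10mw / savings_500kw)  # ~20.07 -- the savings scale with size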

After 10 months, the failure rate in the economizer compartment was 4.46%, compared with 3.83% in Intel's main data center over the same period. However, it's important to note that the failure rate in the compartment with DX cooling at the test site was only 2.45%.

- Richard L. Brandt
