Data Megaliths
We are in a new age of mega-building. Companies are building both corporate campuses and datacenters at record rates, and hardly a day goes by without the announcement of another huge building. Apart from the cost of such structures, their construction also releases a large amount of greenhouse gas. While only shareholders can quell the campus mania, there are practical ways to drastically reduce the footprint and cost of the datacenter.
The motivation for building such huge structures may be a form of monumentalism in the same class as Egypt's pyramids, tombs built to celebrate their chief executive's greatness (or aliens' energy concentrators, according to some hopefuls). Or, more charitably, they may be the equivalent of the Great Wall of China, built to keep out the barbarian hordes. As an article in the San Jose Mercury News, "Will Apple, Google, Facebook and Amazon fall victim to the 'campus curse'?", points out, these companies are very cash-rich and can afford such follies. The authors also express the concern that, based upon historical precedent, these same companies may be at their zenith and on their way down.
Insofar as the campuses are concerned, I am not wise enough to make a value judgment. Personally, though, entering such a huge structure each morning with 10,000 others to work in a cube would be a big turnoff: as cramped and anonymous as a battery chicken. The only compensation would be the exercise taken on the long walk from your car across the mega car park. It is noteworthy that giants such as Intel and Cisco have very modest head offices and have built incrementally as needed. Their cash goes into growing their businesses or is returned to shareholders.
There is not much we can do about corporate office vanity plays, but datacenters are another matter. Using modern technology, they can be much smaller and considerably more environmentally friendly.
The builders of these new "free air" datacenters claim that they can cut energy consumption considerably by locating in a cooler climate and using a combination of dry and adiabatic cooling, eliminating the need for chillers. They claim a PUE (Power Usage Effectiveness) of less than 1.1.
However, their PUE calculations count server fan energy as part of the IT load rather than as cooling overhead. These designs do reduce fan power, but they do not eliminate the fans, so 0.05 to 0.1 should be added to the stated PUE to account for them.
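As a rough illustration, here is a small Python sketch, using assumed rather than measured numbers, of how reclassifying server fan power as cooling overhead shifts a reported PUE of 1.1 upward by roughly that amount.

```python
# Illustrative sketch (assumed numbers): how counting server fans as
# cooling overhead rather than IT load shifts the reported PUE.

def adjusted_pue(total_kw: float, it_kw: float, fan_fraction: float) -> tuple:
    """Return (reported_pue, adjusted_pue).

    reported_pue counts fan power inside the IT load (common practice).
    adjusted_pue moves fan power into the cooling overhead instead.
    """
    reported = total_kw / it_kw
    fan_kw = it_kw * fan_fraction          # fans draw roughly 5-10% of server power
    true_it_kw = it_kw - fan_kw            # compute load excluding fans
    adjusted = total_kw / true_it_kw
    return reported, adjusted

# Example: 1,100 kW facility, 1,000 kW "IT" load of which 7% is server fans.
print(adjusted_pue(1100, 1000, 0.07))      # -> (1.10, ~1.18)
```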
Doubling the fan diameter cuts the power needed to move the same volume of air to roughly one sixteenth, but it also reduces the static pressure the fan can develop by the same factor. The heatsink must therefore be enlarged to lower its resistance to airflow, which in turn drives the enclosure to 2U or more, up from the more usual 1U.
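The one-sixteenth figure follows from the standard fan affinity laws; the sketch below works through the idealized scaling (real fans deviate somewhat because of their efficiency curves).

```python
# Sketch of the fan affinity laws behind the "one sixteenth" figure
# (idealized geometric scaling, ignoring real-world fan efficiency curves).
#
# For a geometrically similar fan at speed N and diameter D:
#   flow Q ~ N * D^3,  static pressure dP ~ N^2 * D^2,  power P ~ N^3 * D^5

def scale_fan(diameter_ratio: float) -> tuple:
    """Hold airflow constant while scaling diameter; return (speed, pressure, power) ratios."""
    speed_ratio = 1.0 / diameter_ratio**3          # keep Q constant: N' = N / k^3
    pressure_ratio = speed_ratio**2 * diameter_ratio**2
    power_ratio = speed_ratio**3 * diameter_ratio**5
    return speed_ratio, pressure_ratio, power_ratio

# Doubling the diameter (k = 2) at constant airflow:
print(scale_fan(2.0))   # -> (0.125, 0.0625, 0.0625): 1/8 speed, 1/16 pressure, 1/16 power
```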
While it is physically possible to load a rack with 80 half-width servers in 40 1U enclosures, the 30-40 kW cooling demand is very difficult to satisfy in any air-cooled datacenter. The "free air" cooled community goes the other way, reducing the rack cooling requirement to as little as 6 kW, according to Facebook's Prineville publicity.
Further, in continued pursuit of reduced air resistance, rack spacing has been increased. Rather than the near-best-case figure of 25 square feet per rack, 80 square feet is close to the new norm. Consequently, datacenters have to be very large.
For example, Facebook's publicity claims that one of their Prineville datacenters has 160,000 sq ft of floor area and a 15 MW power supply. Factoring in overhead and safety margin, we can assume a 12 MW IT load, and thus a power density of 75 W per square foot. Further, the facility has 2,000 racks, that is, one rack per 80 square feet. Constructing a building of this size causes the release of a huge amount of carbon dioxide, about 21,000 tons.
The ground floor alone, assuming a 6 inch concrete slab, will require 5,800 tons of concrete. Making a ton of concrete releases 1.25 tons of carbon dioxide into the air, so just pouring the floor causes the release of 7,250 tons of greenhouse gas. By the time you have added the second-storey floor and miscellaneous walls, pads and so on, the total carbon dioxide release is around 13,000 tons.
Steel is another big contributor. Foundries release on average 1.8 tons of carbon dioxide for every ton of steel they make. Such a building will contain roughly 4,500 tons of steel, which accounts for a further 8,000 tons of carbon dioxide.
The sum total of 21,000 tons dwarfs all the other emission sources. Combined, aluminum cladding, sheetrock and glass would add less than 200 tons of carbon dioxide. Even the loss of carbon sequestration is small, less than 20 tons, assuming an area of forest three times the datacenter floor area is cleared. The construction cost of such a building is around $200M, about $700 per square foot. In addition to the construction of the shell, a considerable expense is incurred in power and network cabling runs along buildings that are around 1,000 feet long.
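For readers who want to check the arithmetic, here is a short Python sketch reproducing the concrete and steel figures from the assumptions stated above; the concrete density is a typical value assumed for illustration, not one quoted in the article.

```python
# Back-of-envelope check of the embodied-carbon figures quoted above,
# using the article's assumptions (160,000 sq ft footprint, 6 in slab,
# 1.25 t CO2 per ton of concrete, 1.8 t CO2 per ton of steel).

FLOOR_AREA_SQFT = 160_000
SLAB_THICKNESS_FT = 0.5                     # 6 inch ground slab
CONCRETE_DENSITY_LB_PER_CUFT = 145          # typical structural concrete (assumed)

slab_tons = FLOOR_AREA_SQFT * SLAB_THICKNESS_FT * CONCRETE_DENSITY_LB_PER_CUFT / 2000
slab_co2 = slab_tons * 1.25                 # ~7,250 t CO2 for the ground floor alone
concrete_co2_total = 13_000                 # article's figure incl. upper floor, walls, pads
steel_co2 = 4_500 * 1.8                     # ~8,100 t CO2 from ~4,500 t of structural steel

print(f"ground slab: {slab_tons:,.0f} t concrete -> {slab_co2:,.0f} t CO2")
print(f"total concrete + steel: {concrete_co2_total + steel_co2:,.0f} t CO2")  # ~21,000 t
```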
In addition, rapidly changing weather conditions may cause other problems. In one case, condensation was observed on key components, and silver-sulphide-induced shorts may be another hazard.
Using liquid overcomes the huge disadvantages of air, namely its low specific heat, low density, compressibility and high interface thermal resistance.
Liquid is brought either directly into contact with the object to be cooled or indirectly, via a cold plate. The path between the liquid and the object has a very low thermal resistance, and every liquid used has a volumetric heat capacity orders of magnitude greater than that of air. Consequently, heat can be captured easily and transported with little energy to a point of disposal: a dry cooler, heat exchanger, cooling tower and so on. No chillers are required virtually anywhere in the world.
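To put the heat-capacity point in perspective, this illustrative Python sketch, using textbook property values rather than vendor data, compares how much heat one litre per second of air versus water can carry away for a 10 °C temperature rise.

```python
# Illustrative comparison (textbook property values): heat carried away by
# one litre per second of coolant for a 10 degC temperature rise.

COOLANTS = {
    # name: (density kg/m^3, specific heat kJ/(kg*K))
    "air":   (1.2,    1.005),
    "water": (1000.0, 4.186),
}

DELTA_T = 10.0          # allowed temperature rise, K
FLOW_L_PER_S = 1.0      # volumetric flow, litres per second

for name, (rho, cp) in COOLANTS.items():
    mass_flow = rho * FLOW_L_PER_S / 1000.0       # kg/s
    heat_kw = mass_flow * cp * DELTA_T            # Q = m_dot * cp * dT
    print(f"{name:>5}: {heat_kw:8.3f} kW per litre/s")
# air -> ~0.012 kW ; water -> ~41.9 kW : roughly 3,500x more heat per unit volume
```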
These systems generally require little rack space, so the limit on rack power becomes simply a function of how many servers will fit.
Using two-phase liquid cooling with a very thin cold plate, rack power densities of over 100 kW with a PUE of 1.07 have been demonstrated at the SLAC National Accelerator Laboratory.
There are at least 14 companies with liquid cooling options. Most are water-based and can achieve power densities in excess of 50 kW per rack, but they are expensive to make robust enough that servers are not damaged by leaking water. Three or four immerse servers in a dielectric fluid but have not yet achieved densities greater than 20 kW per rack. Only two use phase-change technology.
This has major implications for datacenters. If a rack can power and cool a 50 kW load and racks are allocated floor space at 25 square feet each, the 12 MW IT load datacenter shrinks from 2,000 racks to 240, housed in a 6,000 square foot building instead of 160,000 square feet.
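The consolidation arithmetic is easy to verify; this short sketch uses only the figures quoted above.

```python
# Quick check of the consolidation arithmetic, using the article's figures.

IT_LOAD_KW = 12_000          # 12 MW IT load

def footprint(rack_kw: float, sqft_per_rack: float) -> tuple:
    """Return (racks needed, floor area in sq ft) for the given rack density."""
    racks = IT_LOAD_KW / rack_kw
    return racks, racks * sqft_per_rack

print(footprint(6, 80))      # "free air" design: ~2,000 racks, ~160,000 sq ft
print(footprint(50, 25))     # liquid cooled:        240 racks,    6,000 sq ft
```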
Because such servers have no air circulating through them and no fans, they are more efficient, more reliable and completely silent. They can be deployed anywhere. In many cases they can be put in existing space, potentially eliminating the release of tens of thousands of tons of carbon dioxide.
On the business side, too, there are great advantages. Major outlays for bricks and mortar are eliminated, lead times are drastically reduced and depreciation for tax purposes is much more rapid. According to my estimates, infrastructure acquisition costs can be cut by around 50% and total cost of ownership by 30% to 50%.
In spite of these compelling arguments for change, most datacenter "experts" will still insist that air is the superior cooling method and that liquid cooling is only for high-performance computing. They do have a small point: the entire industry is built around air, and change will bring great disruption, and with it great opportunity.
The question is which region will be first to adopt the flexible, low-cost infrastructure enabled by liquid cooling and so gain advantage for its industries: the USA, Europe, Japan or China?
About the Author
Phillip Hughes is CEO and founder of Clustered Systems. He conceived its liquid cooling architecture for commodity servers, which combines the advantages of liquid cooling with low adoption cost and simple operation. He drove demand with end customers (e-business, financial, manufacturing), developed marketing channels and set up subcontract manufacturing infrastructure. The first product, a rack for cooling 1U servers, is distributed worldwide by Liebert, Inc., a major cooling systems OEM. He and his team have recently completed the installation of the next-generation computing system, a 100 kW rack with a PUE of 1.07.