By Herb Zien, CEO of LiquidCool Solutions

Data centers have taken their place as the backbone of the U.S. economy. In supporting transformative technological advancements, they are among the nation's largest and fastest-growing consumers of electricity, on track to consume roughly 70 billion kilowatt-hours annually.

As server densities have increased, so have cooling demands. Designers and engineers have been working for decades on data center designs to manage heat rejection and, in a few cases, heat recovery.

For reasons that defy logic, the predominant data center design today still relies on circulating conditioned air through the data processing room and racks. Separate hot and cold aisles are maintained to conserve energy. Cold air is forced up through holes in the floor, and humidity controls are needed to avoid condensation on IT equipment if humidity is too high, or electrostatic discharge if it is too low. Using this method, data centers take raw electric power and expel more than 98 percent of the electricity as low-grade heat energy.
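To put that number in perspective, here is a minimal energy-balance sketch in Python. The IT load and PUE values are assumptions chosen for illustration, not figures from the article:

```python
# Rough energy balance for a hypothetical air-cooled data center.
# it_load_kw and pue are assumed example values, not measured data.

it_load_kw = 1000.0      # electrical draw of the IT equipment (assumed)
heat_fraction = 0.98     # article: >98% of electricity leaves as low-grade heat
pue = 1.5                # assumed power usage effectiveness for air cooling

heat_rejected_kw = it_load_kw * heat_fraction
facility_kw = it_load_kw * pue
overhead_kw = facility_kw - it_load_kw   # chillers, fans, distribution losses

print(f"Heat to reject: {heat_rejected_kw:.0f} kW")
print(f"Facility draw: {facility_kw:.0f} kW ({overhead_kw:.0f} kW of overhead)")
```

Nearly all of the electricity drawn by the servers must be removed again as heat, and with air cooling a large additional overhead is spent just moving that heat out of the building.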

In line with current ASHRAE standards, data center temperatures are being elevated to reduce the need for mechanical refrigeration, but this causes fans to work harder, which offsets much of the savings. Free cooling is used where possible, but introducing large volumes of dirty outside air creates maintenance problems. Evaporative cooling, an alternative to mechanical refrigeration in some climates, uses a lot of water, and the saturated air often has to be reheated to bring humidity down to acceptable levels.

Although circulating air can remove low-grade heat, it does not do well with point sources. To accommodate this limitation, large cloud data centers use hundreds of low-power racks, requiring five times more space than more power-dense racks would need.

The fact is, air cooling is ridiculous!

Air is a thermal insulator with an extremely low heat capacity and virtually no thermal mass. Cold air sinks. Open spaces in the racks allow short circuiting between the cold and hot aisles. Contact between air and electronics promotes oxidation and corrosion. Pollutants in the air can cause additional damage. Fans are inefficient and can fail, which hurts reliability, and they create so much noise that earplugs must be worn.

With heat generation at the device level bumping up against the thermodynamic limit, designers and engineers are tasked with developing innovative and cost-effective specifications for taking data centers into the future. Yet, data center operators continue to tinker with fan cooling, the latest novelty being fan walls. Some large facilities use a combination of airside economization and evaporative cooling while using massive walls of fans to push the cool air onto the data center floor.

Regardless of how they’re manipulated, fans waste energy, take up space and create pollution in the data center and at the power plant.

Because of the inherent flaws in current data center design, liquid cooling is beginning to gain market traction. Interestingly, accommodating high power densities is among its least important benefits; the greater prize is the elimination of fans.

With liquid cooling, data centers will look completely different than they do today. There will be no need for high ceilings, raised floors, chiller rooms, CRAC units or hot aisles. Because the electronics are isolated from the environment, there will be no need for outside air or humidity control except for employee comfort. The white space would be 70 percent smaller and power demand 40 percent lower than in a conventional air-cooled data center. Racks would be denser and fewer, capital and operating costs for infrastructure would be significantly lower, and the total spend on racks and servers would fall sharply.
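As a rough illustration of those percentages, the sketch below applies them to a hypothetical air-cooled baseline; both baseline figures are assumed, not from the article:

```python
# Applying the article's 70% space and 40% power reductions to an
# assumed air-cooled baseline (both baseline numbers are hypothetical).

baseline_white_space_sqft = 20000.0   # assumed air-cooled white space
baseline_power_kw = 1500.0            # assumed total facility power

liquid_space_sqft = baseline_white_space_sqft * (1 - 0.70)
liquid_power_kw = baseline_power_kw * (1 - 0.40)

print(f"White space: {baseline_white_space_sqft:,.0f} -> {liquid_space_sqft:,.0f} sq ft")
print(f"Power demand: {baseline_power_kw:,.0f} -> {liquid_power_kw:,.0f} kW")
```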

One technology is especially cost-effective: the Liquid Submerged Server (LSS). The LSS is a proprietary total immersion technology in which all electronic components are mounted in a sealed server chassis, in direct contact with a dielectric liquid coolant. This is a scalable arrangement that decouples the electronics from the environment. Direct contact between the dielectric fluid and the electronics keeps CPU and memory temperatures lower and eliminates the parasitic cooling energy consumed by fans. The technology is more effective at dissipating heat from electronic equipment than air cooling or alternative liquid cooling systems, and it can reuse almost all of the rejected energy for building or water heating.

Validation

The U.S. Department of Energy's National Renewable Energy Laboratory (NREL) recently analyzed and tested LSS technology for its ability to reduce the energy impact of data centers in commercial buildings and to reuse rejected energy as a heating source.

NREL’s key findings:

  • Heat Recovery Efficiency: Throughout testing, the LSS system recovered between 90 and 95 percent of the heat energy from the servers. This was achieved irrespective of the ambient air temperature surrounding the server and without insulation on the servers themselves. According to NREL, heat recovery efficiency is expected to improve further as more servers are added to the configuration.
  • Ability to Heat Facility Water to a Useful Temperature: The LSS system was set up to heat NREL's facility water to 120°F, a temperature hot enough to be useful for building and hot water heating; a simple water-side energy balance is sketched after this list. This was achieved while keeping all server electronics (CPUs, memory, etc.) well within normal operating temperatures, even under the most stressful workloads. It was also observed that the LSS system should be capable of heating facility water to temperatures as high as 140°F while maintaining component temperatures within operating limits.
  • Ease-of-Use and Reliable Operation Confirmed: Over six months of testing, the LSS system performed reliably, with no issues that affected the tests.
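As referenced above, here is a minimal sketch of the water-side energy balance behind the water-heating finding. The server heat load and inlet water temperature are assumed example values; only the recovery fraction and the 120°F outlet come from the testing:

```python
# Water-side energy balance: how much hot water a given server heat
# load can produce. q_servers_kw and t_in_f are assumed example values.

q_servers_kw = 10.0            # server heat load (assumed)
recovery = 0.92                # NREL: 90-95% of server heat recovered
t_in_f, t_out_f = 70.0, 120.0  # inlet assumed; 120°F outlet per the testing

cp_water = 4186.0                             # J/(kg*K)
delta_t_k = (t_out_f - t_in_f) * 5.0 / 9.0    # deg-F rise converted to kelvins

flow_kg_s = q_servers_kw * 1000.0 * recovery / (cp_water * delta_t_k)
print(f"Recoverable heat: {q_servers_kw * recovery:.1f} kW")
print(f"Hot water produced: {flow_kg_s:.3f} kg/s (~{flow_kg_s * 60:.1f} L/min)")
```

Under these assumptions, a 10 kW server load continuously produces roughly five liters per minute of 120°F water, which is why reuse for building heat becomes practical.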

The results of the NREL tests also provide independent third-party validation that cooling power is between one and two percent of the IT equipment power.
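For a sense of what that overhead means in practice, the sketch below compares it with a fan-and-chiller system. The 30 percent air-cooling overhead and the IT load are assumptions for illustration; only the 1-to-2 percent figure comes from the NREL tests:

```python
# Annual cooling-energy comparison. air_fraction and it_load_kw are
# assumed illustrative values; lss_fraction is the top of NREL's 1-2% range.

it_load_kw = 500.0     # assumed IT load
air_fraction = 0.30    # assumed cooling overhead for a fan/chiller plant
lss_fraction = 0.02    # upper end of the 1-2% measured by NREL

hours = 8760
saved_kw = it_load_kw * (air_fraction - lss_fraction)
print(f"Cooling power saved: {saved_kw:.0f} kW")
print(f"Annual savings: {saved_kw * hours / 1000:.0f} MWh")
```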

NREL has developed a white paper outlining its Phase One results for the LSS testing.

In Phase Two testing, currently underway, the LSS system will run operational workloads while heating reuse water to 120°F at DOE's Energy Systems Integration Facility (ESIF) data center, also located on the NREL campus. Phase Two will be completed in fall 2016.

LiquidCool Solutions’ LSS technology factors in all the elements data center designers should consider: capital cost, operating cost, real estate and building height, energy consumption and floor space, water requirements, maintenance, server reliability, noise and energy reuse.

About the Author

Herb Zien is the CEO of LiquidCool Solutions. Mr. Zien has over 30 years of experience in project development, engineering management, power generation and energy conservation. In addition to leading LiquidCool Solutions, he is Executive VP of Source IT Energy, LLC. Previously he was cofounder of ThermalSource, LLC. Mr. Zien received a Bachelor of Science degree in Mechanical Engineering and a Master of Science degree in Thermal Engineering from Cornell University, as well as a Master of Science degree in Management from the Massachusetts Institute of Technology.