Challenges of Cooling a Data Center
If you’ve ever walked into a data center, you’ve probably noticed that the air feels significantly cooler than it does outside the server area. Depending on where you’re coming from, that cool air might feel pleasant — or you could start shivering almost immediately.
Maintaining the optimum temperature in a data center has long been a struggle for designers and center managers. Allowing the air temperature to get too hot increases the chances of heat-induced equipment failure — literally fried equipment. Keeping the temperature too cold, on the other hand, especially when the surrounding environment has relatively high humidity, can lead to condensation and salt buildup on sensitive circuits.
Depending on who you ask, the ideal temperature for a data center ranges from 66 to 77 degrees Fahrenheit, with some claiming that lower temperatures are better and others noting that there is nothing wrong with setting the thermostat a little higher. While IT folks may not agree on the exact ideal temperature, they can agree on one thing: maintaining anything even resembling a “perfect” temperature can be a struggle.
The Basics of Data Center Cooling
Servers, like most electronics, give off heat as they operate. Unless it is appropriately redirected, the hot exhaust from the servers circulates back into the server room, raising the ambient temperature. The more servers in the space, the more heat they produce and the higher the overall temperature climbs. That means the servers must work harder to stay cool, and can quickly overheat.
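To put rough numbers on this, nearly all of the electrical power a server draws ends up as heat in the room, so the cooling load scales directly with the server count. The short Python sketch below illustrates the arithmetic; the server count and per-server wattage are illustrative assumptions, not figures from any particular facility.

    # Rough cooling-load estimate for a server room.
    # Assumption: virtually all electrical power drawn by the IT
    # equipment is dissipated as heat into the room air.

    WATTS_TO_BTU_PER_HR = 3.412    # 1 W of heat = 3.412 BTU/hr
    BTU_PER_HR_PER_TON = 12_000    # 1 "ton" of cooling = 12,000 BTU/hr

    def cooling_load(num_servers: int, avg_watts_per_server: float):
        """Return the heat load in BTU/hr and the cooling capacity in tons."""
        heat_watts = num_servers * avg_watts_per_server
        btu_per_hr = heat_watts * WATTS_TO_BTU_PER_HR
        return btu_per_hr, btu_per_hr / BTU_PER_HR_PER_TON

    # Example: 200 servers drawing roughly 500 W each.
    btu, tons = cooling_load(200, 500)
    print(f"Heat load: {btu:,.0f} BTU/hr (about {tons:.1f} tons of cooling)")

Doubling the server count doubles the heat load, which is why adding racks without revisiting the cooling plan so often leads to trouble.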
Data center designers, therefore, have spent years refining the design of their facilities to keep the machines as cool as possible. One of the most common solutions has been to lower the overall temperature in the center to compensate for the hot exhaust the servers release into the room air. Often, this is paired with physically separating hot and cold aisles, or with airflow management systems that redirect the hot exhaust away from the cold-air intakes, ensuring a stable, consistently cool temperature.
The problem is that these airflow management and containment systems don’t always work, creating what’s commonly referred to as a “hot spot and meat locker” effect. When this happens, you’ll notice the temperature fluctuating noticeably as you walk through the data center: in one spot the air will be warm to hot, while the surrounding area is overcooled to the point of feeling like a freezer or meat locker in comparison.
Problems With Overcooling
While overheating a data center certainly creates the potential for damage, so does overcooling, as demonstrated by the aforementioned humidity issue. Overcooling the data center also has another practical drawback: cost. One of the primary reasons customers migrate to the data center environment is to contain costs, since the expense of keeping an onsite server room at the appropriate temperature is often more than a small- or mid-size business can bear. Even large corporations deal with excess energy expenses because of overcooling; companies like Google and Facebook, which operate massive server farms, spend millions of dollars each year on energy to keep their data centers cool.
Some data centers are attempting to overcome the problem of energy consumption by turning to green construction methods and, in what seems like an obvious move, turning up the thermostat. Studies show that in most cases, increasing the ambient temperature in server aisles has no discernible effect on server performance while significantly reducing energy usage. Case in point? Microsoft raised the temperature in its California data centers by two to four degrees and saved more than $250,000 in energy costs.
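The arithmetic behind savings like that is easy to sketch. A frequently cited rule of thumb holds that each one-degree-Fahrenheit increase in the setpoint trims a few percent off cooling energy; the 4 percent figure and the dollar amounts below are assumptions for illustration, not data from Microsoft or any other operator.

    # Back-of-the-envelope estimate of cooling savings from a higher setpoint.
    # Assumption: roughly 4% of cooling cost saved per degree F raised,
    # compounding per degree. Real savings vary widely by facility.

    def estimated_savings(annual_cooling_cost: float,
                          degrees_raised: float,
                          savings_per_degree: float = 0.04) -> float:
        """Return the estimated annual dollar savings."""
        remaining = annual_cooling_cost * (1 - savings_per_degree) ** degrees_raised
        return annual_cooling_cost - remaining

    # Example: a $2M annual cooling bill and a 4-degree bump.
    print(f"${estimated_savings(2_000_000, 4):,.0f} saved per year")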
Other Options
While turning up the heat a little can make a difference in energy costs, it doesn’t do much to solve the airflow and temperature regulation problems that plague many data centers. Much of that is attributable to the simple fact that cabinets in a data center may have different density requirements, meaning that some give off more heat than others (creating the hot spots) and have an outsized impact on the overall temperature. Other contributing factors include the design of the center itself, how well the flow of air is controlled, and even the location of the data center.
Some data centers are turning to more customizable cooling solutions, developing systems that sense temperature fluctuations on a cabinet-by-cabinet basis and adjust accordingly. Others are redesigning airflow paths and installing more efficient fittings to better redirect hot air. Given that the data center environment is always changing, and demand for green, cost-effective solutions is rising, the future undoubtedly holds many advances in the realm of temperature and airflow management.
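At its core, a cabinet-level system like that is a sense-and-adjust control loop. The sketch below is a hypothetical illustration of the idea, not any vendor’s product: read_temp and set_cooling are stand-ins for whatever sensor and actuator interfaces a real facility exposes, and the setpoint and deadband values are assumptions.

    # Hypothetical per-cabinet cooling loop with a hysteresis deadband,
    # ramping cooling up at hot spots and down in overcooled aisles.

    import random

    TARGET_F = 75.0      # assumed inlet-temperature setpoint
    DEADBAND_F = 2.0     # hysteresis band to avoid constant toggling

    def read_temp(cabinet_id: str) -> float:
        """Stand-in for a real inlet-temperature sensor."""
        return random.uniform(70.0, 82.0)

    def set_cooling(cabinet_id: str, level: float) -> None:
        """Stand-in for a real fan/damper actuator (level from 0.0 to 1.0)."""
        print(f"{cabinet_id}: cooling level -> {level:.1f}")

    def control_step(cabinet_id: str, level: float) -> float:
        temp = read_temp(cabinet_id)
        if temp > TARGET_F + DEADBAND_F:       # hot spot: ramp cooling up
            level = min(1.0, level + 0.1)
        elif temp < TARGET_F - DEADBAND_F:     # "meat locker": ramp it down
            level = max(0.0, level - 0.1)
        set_cooling(cabinet_id, level)
        return level

    # Run a few control iterations across three cabinets.
    levels = {f"cabinet-{i}": 0.5 for i in range(3)}
    for _ in range(5):
        for cab in levels:
            levels[cab] = control_step(cab, levels[cab])

The deadband matters: without it, a controller chasing an exact setpoint would oscillate, recreating the very temperature swings it is meant to smooth out.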