Chris Loeffler, data center applications manager, distributed power solutions at Eaton Corporation, says:

The server hardware that most cloud infrastructures use to host virtual machines is bigger and more robust than a typical single-function server. It’s also far more heavily utilized: while the average non-virtualized server operates at perhaps 5 to 15 percent of processing capacity, the average virtualization host server may be as much as 80 percent utilized at any given time. For both reasons, the virtualization host servers in most cloud data centers demand more power than conventional servers, and put greater strain on power distribution units (PDUs), panel boards and uninterruptible power systems (UPSs). This extra strain poses unique power, cooling and availability challenges to the IT infrastructure. Meeting those challenges is essential if businesses are to capture the benefits of cloud computing without compromising uptime or overwhelming their power and cooling systems. The following will explore strategies for powering and cooling cloud-based infrastructures.
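To make the utilization gap concrete, here is a rough back-of-the-envelope sketch of average power draw under a simple linear model, draw = idle + (max − idle) × utilization. The wattage figures are illustrative assumptions, not measured values, and real servers do not scale perfectly linearly:

```python
def avg_draw_watts(idle_w, max_w, utilization):
    """Estimate average draw from utilization (0.0-1.0).

    Simple linear model: assumed for illustration only.
    """
    return idle_w + (max_w - idle_w) * utilization

# Hypothetical 1U server: 150 W idle, 400 W at full load.
standalone = avg_draw_watts(idle_w=150, max_w=400, utilization=0.10)
virt_host = avg_draw_watts(idle_w=150, max_w=400, utilization=0.80)

print(f"Lightly used single-function server: ~{standalone:.0f} W")
print(f"Heavily utilized virtualization host: ~{virt_host:.0f} W")
```

Even under these assumed figures, the virtualization host draws roughly twice the power, which is why PDUs, panel boards and UPSs sized for conventional servers come under strain.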

Use modular power and cooling system components
Using modular power system components lets you add capacity quickly and incrementally as your needs increase. For example, a modular scalable UPS for a small cloud environment may provide up to 50 or 60 kW of capacity in 12 kW building blocks that fit in standard equipment racks. As your requirements increase, IT personnel can simply plug in another 12 kW unit, growing capacity (in this example) from as little as 12 kW up to 60 kW N+1. This scalable approach to keeping up with escalating power needs is far more economical than purchasing surplus capacity in advance. Moreover, rack-based modular power system components tend to be compact and easy to install, making them an ideal fit for fast-paced cloud data centers, in which technicians are constantly moving, changing and adding infrastructure resources.
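The sizing arithmetic behind this incremental approach can be sketched as follows. This is an illustrative example, not vendor guidance; it simply counts how many 12 kW building blocks are needed to carry a given IT load with N+1 redundancy:

```python
import math

def modules_needed(load_kw, module_kw=12, redundancy=1):
    """UPS building blocks required for load_kw with N+X redundancy.

    N modules carry the load; 'redundancy' extra modules cover a failure.
    """
    n = math.ceil(load_kw / module_kw)
    return n + redundancy

# Growing from a small initial load toward the frame's full capacity.
for load_kw in (10, 20, 36, 48):
    print(f"{load_kw} kW load -> {modules_needed(load_kw)} x 12 kW modules")
```

At a 48 kW load, five 12 kW modules (60 kW installed) provide N+1 redundancy, matching the 60 kW frame described above; at 10 kW, only two modules need to be purchased.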

Deploy a passive cooling system
Today, most organizations dissipate data center heat by placing computer room air conditioning (CRAC) units around the periphery of their server floor. Many companies also use “hot aisle-cold aisle” hardware configurations, in which only hot air exhausts or cool air intakes face each other in a given row of server racks. That produces convection currents that generate a continuous cooling airflow. However, while technologies such as these are usually more than sufficient for traditional data centers, they are often incapable of coping with the searing heat produced by cloud infrastructures. Thus, public and private cloud environments typically require newer and more robust cooling technologies.

Companies looking for even lower upfront costs and higher operating efficiencies can install passive cooling systems. These employ enclosures equipped with a sealed rear door and a chimney, which captures hot exhaust air from servers and vents it directly back into the return air ducts on CRAC units. The CRAC units then chill the exhaust air and re-circulate it. Passive systems typically require a strong airflow “seal” from the front of the cabinet to the rear so that only minimal hot server exhaust air mixes with incoming cool air from the CRAC units. By segregating hot air from cool air more thoroughly than ordinary hot aisle-cold aisle techniques, a properly designed passive cooling system can cost-effectively keep even a blazingly hot 30 kW server rack running at safe temperatures.
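One way to see why a 30 kW rack is so demanding is the common sea-level rule of thumb that required airflow is roughly CFM ≈ 3.16 × watts ÷ ΔT(°F). The sketch below applies that rule, assuming an illustrative 25°F temperature rise across the servers; actual requirements vary with altitude and equipment:

```python
def required_airflow_cfm(heat_load_w, delta_t_f):
    """Sea-level rule of thumb: CFM ~= 3.16 * watts / delta-T (deg F)."""
    return 3.16 * heat_load_w / delta_t_f

# 30 kW rack with an assumed 25 deg F rise from intake to exhaust.
cfm = required_airflow_cfm(30_000, 25)
print(f"Approximate airflow required: ~{cfm:.0f} CFM")
```

Nearly 3,800 CFM through a single cabinet is far beyond what loosely arranged hot aisle-cold aisle rows can reliably deliver, which is why tight front-to-rear sealing matters so much in these enclosures.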

Construct multiple facility rooms
Large data centers like those that supply public cloud services often house UPS equipment in a dedicated facility room adjacent to the server floor. Setting up two facility rooms, one for UPS and power system electrical components and the other for UPS batteries, can be an even more efficient arrangement. While UPS electronics can typically operate safely at 35°C/95°F, UPS batteries must usually be kept at 25°C/77°F. Separating the two lets the electronics room run at the higher temperature, reducing cooling costs without shortening battery life.

Cloud infrastructures make extensive use of virtualization and higher-powered servers, including blade servers, which dramatically increase rack-level power and cooling requirements. Moreover, cloud data centers tend to be dynamic environments in which virtualized workloads migrate freely among physical hosts. That increases IT agility but can also result in overloaded circuits and other electrical problems that lead to service outages. To master these challenges, organizations should adopt technologies and techniques that increase the reliability and redundancy of their physical and virtual environments, including power and cooling systems. Together, such tools and strategies can help any company enjoy the power of cloud computing reliably and cost-effectively.