Kevin O’Brien, President, Mission Critical Construction Services, LLC

My experience in the data center industry began nearly 30 years ago when I became the Facilities Manager for a large New York financial services company.  Throughout the years, it has been incredible to witness firsthand the constant evolution of the data center and the industry as a whole, and to learn and grow as a professional alongside it.

The 1980s and 1990s

Back in the ‘80s, it was fairly common for data center facilities and trading / office space to be located within the same building.  However, the financial services company I worked for began constructing our very first remote-site, mission-critical facility outside of NYC in 1988.  What made this facility unique was that it was completely dedicated to data and telecommunications, located on what was previously an ITT communications hub in New Jersey that housed the “Hot Line” link between Washington, DC and Moscow.  Back then, most functions were analog, so this remote site gave us the ability to increase the redundancy and reliability of the facility’s electrical and mechanical systems.

Although the Tier Certification System had not yet been conceived in the ‘80s and ‘90s, through our unique practices we were able to achieve what is now known as a Tier II standard on our electrical systems, along with 2N equivalent status on our Uninterruptible Power Supply (UPS). Back then, data center loads topped out at only 35-50 Watts per square foot, so to meet the growing demand for fiber and reliable computing, companies turned to remote sites as a favorable option during the 1990s. As this shift became common practice, the 7×24 Exchange began publishing articles that shared how data center owners and managers could improve the overall reliability of mission-critical facilities – a practice that eventually resulted in the development of “Tier Certifications” through the Uptime Institute.

The Dotcom Era

As the industry continued its evolution, the next major paradigm shift took form as the dotcom boom.  Able to build anywhere in the world thanks to an economic upswing and the rapid proliferation of global fiber, many companies began constructing facilities larger than 100,000 square feet, filled with racks drawing roughly 50-75 Watts per square foot at full capacity.  Unfortunately, this construction came to a screeching halt after the events of September 11, 2001, when the stock market plummeted.  It would take a few years, but demand for servers and data center space would slowly return.

After the dotcom collapse, the Sarbanes-Oxley Act (SOX) of 2002 brought requirements that trade-supporting data centers be located within a set number of fiber miles of Wall Street, and that firms construct a separate, synchronous data center for redundancy.  To remain in compliance, many of these additional facilities were built throughout New Jersey, causing a rise in construction across the state.  By this time, data centers were reaching 100 Watts per square foot and many achieved Tier III and IV status. Square footage and costs continued to rise as the need for a more robust and supportive infrastructure grew.

Demand for Density and Redundancy

Along with the growing need for density and high levels of redundancy, the ratio of raised floor to supporting infrastructure space within a building was also changing.  For example, 100,000 square feet of raised floor area (commonly referred to as white space) at 100 Watts per square foot in a Tier III configuration would produce a ratio of 1-to-1. If the density of the technical space then increased to 150 Watts per square foot, the ratio would grow to 1-to-1.5; in simpler terms, one would need 150,000 square feet of infrastructure space to support the same 100,000 square feet of raised floor.  This type of infrastructure was built to meet anticipated IT loads – loads that never fully materialized.  As density increased, the industry standard shifted from Watts per square foot to kilowatts (kW) per rack for a more accurate measurement.
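To make that arithmetic concrete, here is a minimal sketch of the ratio calculation, assuming the simplified rule of thumb above that supporting infrastructure space scales linearly with design density; the function name, baseline and figures are illustrative only, not an engineering model.

    # Illustrative only: assumes infrastructure space scales linearly with
    # design density, per the rule of thumb described in the text.
    def infrastructure_space(raised_floor_sqft, density_w_per_sqft, baseline_w_per_sqft=100):
        """Estimate supporting infrastructure space for a given white-space density."""
        ratio = density_w_per_sqft / baseline_w_per_sqft   # 1.0 at 100 W/sq ft, 1.5 at 150 W/sq ft
        return raised_floor_sqft * ratio

    print(infrastructure_space(100_000, 100))  # 100000.0 -> a 1-to-1 ratio
    print(infrastructure_space(100_000, 150))  # 150000.0 -> a 1-to-1.5 ratio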

More Power / More Efficiency

As industry leaders began to rate power by the kW, or cost per kW, it became easier to understand precisely how much power was being used and at what cost to the operator.  To keep this standardized, Power Usage Effectiveness (PUE) was developed as a universal metric for the efficiency or inefficiency of energy consumption throughout a data center.  Now that there were standardized ways to track energy usage, data center managers were held highly accountable for consumption, making it more desirable to seek free and/or highly efficient means of facility cooling, even as densities continued to increase.  In response to this need, many companies – including Yahoo! – eliminated the need for mechanical cooling (chillers) altogether, turning instead to “free cooling” through the use of fresh outside air.  Unfortunately, while this worked very well for large facilities like Yahoo!’s, it was not practical for most enterprise data centers, and so “hot aisle / cold aisle” became the norm.  This practice allowed loads to be isolated, producing higher levels of energy efficiency and serving as a catalyst for the trend of elevating temperatures inside the data hall.
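As a rough illustration, here is a minimal sketch of the PUE calculation using invented numbers, not figures from any facility mentioned above: PUE is simply total facility energy divided by the energy delivered to the IT equipment, so a value closer to 1.0 means less overhead going to cooling, power conversion and other support loads.

    # Minimal sketch of the PUE calculation; the kWh figures are invented for illustration.
    def pue(total_facility_kwh, it_equipment_kwh):
        """Power Usage Effectiveness: total facility energy / IT equipment energy."""
        return total_facility_kwh / it_equipment_kwh

    # A legacy enterprise site versus a free-cooled facility (hypothetical numbers):
    print(round(pue(total_facility_kwh=2_000_000, it_equipment_kwh=1_000_000), 2))  # 2.0
    print(round(pue(total_facility_kwh=1_150_000, it_equipment_kwh=1_000_000), 2))  # 1.15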

The growing popularity of “green” energy and higher levels of efficiency led to more innovative use of technologies such as high-efficiency chillers, adiabatic cooling, fuel cells, solar power and 380V Direct Current (DC) within data centers.  If the industry continues on this trend, I expect to see the total elimination of Alternating Current (AC)/DC conversion and mechanical compressors in favor of more distributed generation of clean energy.

After more than 26 years watching each of these changes occur, I’ve learned that rapid adaptation, evolution and innovation are critical to reaching the highest level of efficiency and success within the data center industry.  From Internet and cloud data centers, to big open box spaces, to modularized data halls and pods – change is inevitable and I’m excited to see what the future will hold!