David Schirmacher, chief strategy officer for FieldView Solutions (www.fieldviewsolutions.com), says:

While managing a data center operation has always been a complex proposition, over the past several years we have seen a real shift in the operational dynamic. Unprecedented growth in data processing requirements and evolving IT hardware technologies, along with updated environmental criteria, are forcing organizations to rethink their data center strategy. Rapidly increasing capital, operational, and energy expenses are driving the industry to create the metrics necessary to quantify, and ultimately drive, performance.

Industry organizations such as the 7×24 Exchange, ASHRAE, The Green Grid, and the Uptime Institute continue to both enhance and expand the metrics necessary to measure performance. Metrics such as PUE, DCiE, CUE, WUE, and RCI are becoming part of the vernacular. ASHRAE is about to release the third revision of its TC9.9 data center environmental standard. This revision broadens the permitted environmental criteria, allowing operators more flexibility in configuring their mechanical systems to achieve increased energy efficiency and, in some instances, increased capacity. It will also address the increasingly common use of 100% air-side economization. Without adequate environmental monitoring, it will be difficult to fully measure performance and validate the effectiveness of implementing many of these recently defined best practices.
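For readers less familiar with these metrics, the two most widely cited, PUE and DCiE, are simple ratios of total facility power to IT equipment power. The short Python sketch below illustrates the arithmetic; the sample readings are hypothetical.

```python
# Illustrative calculation of PUE and DCiE from hypothetical power readings.
# PUE  = total facility power / IT equipment power  (lower is better; 1.0 is the theoretical ideal)
# DCiE = IT equipment power / total facility power, expressed as a percentage (the reciprocal of PUE)

total_facility_kw = 1500.0   # hypothetical reading: IT load plus cooling, UPS losses, lighting, etc.
it_equipment_kw = 1000.0     # hypothetical reading: power delivered to servers, storage, and network gear

pue = total_facility_kw / it_equipment_kw
dcie = it_equipment_kw / total_facility_kw * 100

print(f"PUE:  {pue:.2f}")    # 1.50
print(f"DCiE: {dcie:.1f}%")  # 66.7%
```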

The government is also very engaged. In recent years, both the EPA and DOE have been hard at work creating the means to measure and quantify data center performance. The EPA has added a data center category to its very successful Energy Star Portfolio Manager, and the DOE has introduced the DCPro software tool to assist data center operators in quantifying performance. To date, these efforts have focused on a “carrot” approach designed to equip operators with the tools to define performance and encourage continuous improvement. It doesn’t take much of a leap to predict where the industry is heading. With the increased focus on efficiency and environmental responsibility, the road to regulation is unlikely to be very long. If you’re a global operator, you are already dealing with regulation. For example, the UK’s Carbon Reduction Commitment imposes steep penalties on operators of inefficient data centers.

The good news is that implementing a well-designed environmental monitoring program can be one of the best investments you can make. Not only can it help data center operators significantly reduce energy expenses, it can also help them deploy their IT hardware more effectively. Data center operators are finding that the right monitoring tool can help identify and release “stranded” capacity, i.e., capacity rendered unusable by the unbalanced deployment of IT hardware. Reduced risk of operational Service Level Agreement (SLA) violations and a greatly lowered risk of creating unintended hot spots are other key benefits.

Where should Environmental Monitoring rank in terms of overall priority in the data center?

If you adhere to the adage “you can’t manage what you don’t measure,” then implementing a comprehensive monitoring program should be high on your list of priorities. This is particularly true in the complex environment of the data center, where even small performance discrepancies can result in major operational inefficiencies that send energy expenses skyrocketing unnecessarily. In addition, you could well compromise equipment lifecycles and, worse, increase the probability of an unplanned outage.

What are the biggest challenges for data center and IT managers when it comes to Environmental Monitoring?

Lack of meaningful data is probably the single biggest obstacle to driving data center performance. Without good data it is virtually impossible to manage a data center to achieve the best possible performance.

A typical enterprise data center will have a number of monitoring systems installed. In almost every case you will find a Building Management System (BMS) that provides monitoring and control of the core mechanical systems. Often you will find an Electrical Power Management System (EPMS), along with dedicated monitoring systems for the Uninterruptible Power System (UPS), the Emergency Power System (EPS), and an assortment of other specialized monitoring systems. All of these systems provide very important functions, and most are considered essential to the operation of the facility.

So with all of these monitoring systems installed, why has the industry struggled to get the information necessary to drive data center performance? There are a number of reasons. First, the vast majority of these systems are not designed specifically for the unique requirements of a data center. In fact, if you were to compare a BMS installed at a data center with one installed at an office building or university, you would be hard-pressed to tell the difference. While these systems do a fantastic job of managing individual systems and components, they are rarely effective at providing end-to-end performance management data.

Second, while there may be numerous systems installed at a facility, there is typically little or no communication of data across these systems. Data is often stranded “off network” on complex and proprietary systems, with access restricted to the facilities team. Allowing access to personnel who are not trained in the operation of chillers, generators, UPS equipment, and the like can be very risky, as these systems have the ability to shut equipment down. Further, making sense of all of the raw data provided by these systems can be a tedious undertaking. While knowing the speed of a chilled water pump is important to a plant operator, this information would not be useful to an IT manager trying to decide where to best deploy equipment.

Third, traditional monitoring and control systems rarely store the data being monitored. They operate much like the thermostat in your house, which continuously monitors conditions and controls the cooling system to maintain a stable environment. While it does this job really well, if you wanted to see a trend of the temperature in your house over the past year, you would be out of luck. While it is certainly possible to set up and record data for specific conditions on a BMS, these systems are not designed to record the volume of information necessary to provide meaningful trends of end-to-end data center performance over the long term. A typical mid- to large-size data center might have upwards of 30,000 data points to record if you include monitoring of power to the rack level. If you were to capture this data at one-minute intervals and store it for a year, you would be processing upwards of 15 billion records. Even a modern BMS would stall under this kind of volume.
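To make that scale concrete, here is a back-of-the-envelope sketch of the record count in Python, using the point count and sampling interval from the example above:

```python
# Rough sizing of the history a monitoring program must retain.
# Assumptions taken from the example above: 30,000 monitored points,
# each sampled once per minute and retained for a full year.

points = 30_000
samples_per_point_per_year = 60 * 24 * 365   # one-minute sampling interval

records_per_year = points * samples_per_point_per_year
print(f"{records_per_year:,} records per year")   # 15,768,000,000 -- roughly 15 billion
```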

How can data center and IT managers overcome those challenges?

First you have to properly define the challenge. Managing data center environmental performance is not about wires, black-box devices, monitoring hardware, or the vast array of specialty devices that are increasingly being pitched as cure-alls. It is about managing data, all of the data: IT asset and system data; space utilization data; MEP system resiliency and performance data; and energy, financial, and environmental data. Unless you have access to all of this data, your performance picture will be incomplete. While much of this data may already be available, it is often stranded across multiple proprietary systems. Identifying what is available and where information gaps exist is core to establishing a best-practice performance management process.

Over the past year or so, a new class of software known as Data Center Infrastructure Management (DCIM) software has emerged. A number of established data center software vendors, along with a number of start-up firms, have aligned themselves with this designation. The intent of DCIM software is to fill the previously mentioned information gaps. As there is not yet a definitive specification for the segment, there is a wide range of features and capabilities across the various products. Unfortunately, there is also a lot of hype around many of the offerings. It is critically important for data center operators who are looking at deploying this type of solution to do proper due diligence to ensure they find a solution that fits their needs.

What advice do you have for IT and data center managers?

As previously stated, it is important to do your homework. As you begin the selection process, take the time to identify the issues that you want the solution to address. Who will be the primary user(s) of the tool? What are they trying to accomplish? It sounds simple; however, you would be surprised at how often choices are made based on how “cool” the graphics look. Conversely, you will find organizations that write a specification trying to include every last feature they can think of, whether they need it or not. They will spend a lot of time and money to deploy a solution, only to find that few in their organization can commit the time necessary to use all of the features.

If your focus is environmental performance, energy management, efficient IT equipment deployment, capacity management, and the like, then make sure the solution you choose can extract and manage the volume of data necessary to achieve this. It sounds simple, but very few products can actually do what they promise. When you’re dealing with power and cooling information that is constantly changing, the data collection and analysis challenge can be staggering. For example, take a small operation of about 1,000 servers. Assume that each server has two plugs to monitor. Add in rack-level temperature monitoring, plus the ability to monitor the core power and cooling infrastructure, and you could easily reach 3,000 measurement points. Since these points are changing continuously, assume that you are going to measure and store a value for each data point once per minute. That’s over 1.5 billion measurements per year! Few systems can manage this volume of data, let alone provide intelligent and meaningful analysis both in real time and when historical trending is required.
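Worked through in the same way, the example looks like this; note that the split of rack and plant points and the per-record storage size are my own illustrative assumptions, not figures from any particular product:

```python
# Sizing sketch for the 1,000-server example above.
servers = 1_000
plugs_per_server = 2
rack_and_plant_points = 1_000        # assumed rack temperature and core power/cooling points

points = servers * plugs_per_server + rack_and_plant_points   # ~3,000 measurement points
measurements_per_year = points * 60 * 24 * 365                # one sample per point per minute
print(f"{measurements_per_year:,} measurements per year")     # 1,576,800,000 -- over 1.5 billion

bytes_per_record = 32                # hypothetical: timestamp, point ID, value, index overhead
gigabytes_per_year = measurements_per_year * bytes_per_record / 1e9
print(f"~{gigabytes_per_year:.0f} GB of raw history per year")   # ~50 GB
```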

Accessibility of the information is another major hurdle for most solution providers. Is the solution a true browser-based application that allows full access to anyone on the network? Most are not, despite what their marketing material may say. Often you are forced either to install client software on each desktop or to remote into the application through a browser, which often results in limited functionality.

Lastly, how does the application grow with your firm? Can the solution handle the scale of your growth? When adding new data center facilities to the system, are you starting from scratch, or is it a simple matter of adding the new facility to the existing application? The ability to look at all of the data centers in your portfolio in a clear and consistent manner, regardless of scale or design, should be a core requirement.

When you look at recent industry surveys, it is not unusual to find that a significant percentage of respondents say they are not measuring their performance. All too often, decisions to build new facilities, or to upgrade existing ones for increased capacity or efficiency, are made based on incomplete and often inaccurate data. Even worse, they are often made under time pressure to deliver new capacity quickly. Having a right-sized environmental monitoring program in place not only helps you manage your existing capacity, it also ensures that you have access to the information necessary to make smart business decisions when planning for the future.