Jay Seaton, CMO, GlassHouse Technologies, says:

One of the biggest conundrums enterprises face in the current ‘IT jungle’ is how to properly house their enormous volumes of data. With 2.5 quintillion bytes of data created every day, organizations of all sizes are under pressure to manage data growth through strategic infrastructure deployment and more effective use of the infrastructure already in place. Yet when IT departments weigh their infrastructure options, namely in-house data centers, colocation facilities and cloud services, they often find that no single model fits their diverse business demands. That’s why, instead of a one-size-fits-all data center strategy, IT should adopt a holistic approach to data center management that leverages the strengths of all three options. This ensures IT provides exactly the services and infrastructure the organization needs, without paying for anything it doesn’t.

Before adopting a multipronged approach to data center deployment, it’s crucial to first understand the underlying infrastructure challenges that result from a data center’s evolution. As new technologies such as cloud and virtualization mature, they offer great promise but make it increasingly difficult to maintain data center service levels, cost control and security. Not only is the volume of data exploding, but data now comes from a variety of new sources, users have far greater agility in procuring and provisioning new services, and security is harder to enforce. At the same time, data center managers must continue to manage downtime, better utilize existing resources (infrastructure and people), balance power and space requirements, and implement proper security protocols, among other essential responsibilities. Once organizations have properly assessed these challenges, they will be far better armed to implement their data center strategies.

For instance, one of the first things a modern data center strategy must define is how much data and time the organization can afford to lose. Well-defined Recovery Point Objective (RPO, the maximum tolerable data loss measured in time) and Recovery Time Objective (RTO, the maximum tolerable downtime) targets make it possible to estimate the cost of each outage, and so help organizations manage their downtime and risk thresholds. Reducing resource consumption should also be a priority in data center management: server and storage virtualization can circumvent the need to buy additional hardware, while alternative energy sources such as solar, wind and geothermal can stretch existing resources and increase efficiency. Finally, many legacy systems are still highly effective and can be an alternative to investing in new technology, so IT should look to optimize existing systems, which keeps costs down and frees resources for the core business.
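To make the RPO/RTO point concrete, here is a minimal sketch of how those targets translate into a dollar figure for risk planning. The function name, incident frequency and per-hour costs are all hypothetical inputs, not figures from the article; a real model would use the organization’s own outage history and revenue data.

```python
# Minimal sketch: estimating annual downtime/data-loss exposure from
# RTO, RPO and incident frequency. All figures are illustrative.

def annual_downtime_exposure(rto_hours: float,
                             rpo_hours: float,
                             incidents_per_year: float,
                             revenue_per_hour: float,
                             data_loss_cost_per_hour: float) -> float:
    """Worst-case annual cost if every incident runs to the full RTO/RPO.

    rto_hours: maximum tolerable downtime per incident.
    rpo_hours: maximum tolerable data loss, measured in time.
    """
    cost_per_incident = (rto_hours * revenue_per_hour
                         + rpo_hours * data_loss_cost_per_hour)
    return incidents_per_year * cost_per_incident

# Hypothetical example: 4-hour RTO, 1-hour RPO, two incidents a year,
# $50k/hour of lost revenue and $20k/hour of unrecoverable transactions.
exposure = annual_downtime_exposure(rto_hours=4, rpo_hours=1,
                                    incidents_per_year=2,
                                    revenue_per_hour=50_000,
                                    data_loss_cost_per_hour=20_000)
print(f"Annual exposure: ${exposure:,.0f}")  # Annual exposure: $440,000
```

Running the numbers this way shows why tighter RPO/RTO targets justify spending more on the infrastructure tier that hosts a given workload.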

With a well-evaluated IT strategy underway, organizations can then take the time to evaluate the pros and cons of various data center models. For example, while in-house facilities provide total control over information and are potentially more secure, they are expensive to maintain and very labor intensive. Cloud services are scalable and inexpensive, but there’s a significant risk of downtime. And while colocation facilities provide increased physical security and infrastructure control, they come with CapEx and OpEx burdens and access restrictions. At this stage, IT will quickly realize that a tiered approach, implementing all three models according to their strengths, is the only way to reap the full benefits of in-house, cloud and colocation offerings without suffering their drawbacks.

As explained in a recent whitepaper, one effective approach to data center tiering places tier-1, or critical applications and data, in in-house and colocation facilities, which provide increased security and fast access to important information. To reap the benefits of the public cloud’s scalability and flexible pricing without taking on outsized security risks, IT can allot e-mail, back-office applications and other tier-3 priorities to a cloud services model. Tier-2, or custom applications built specifically for the business, can be reserved for the private cloud to achieve middle-ground cost savings and satisfy specialized security policies. This is by no means the only approach to tiering; the right mix depends on each organization’s unique business structure and IT objectives.
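One way to make such a tiering policy explicit is to encode it as a simple placement map, as in the sketch below. The tier names follow the scheme described above, but the application names and the idea of a lookup table are purely illustrative; a real policy would also capture per-tier RPO/RTO and compliance requirements.

```python
# Illustrative sketch of the tiering policy described above: map each
# application tier to its allowed deployment targets.

TIER_PLACEMENT = {
    "tier-1": ["in-house", "colocation"],  # critical applications and data
    "tier-2": ["private-cloud"],           # custom, business-specific apps
    "tier-3": ["public-cloud"],            # e-mail, back-office workloads
}

# Hypothetical application inventory.
APP_TIERS = {
    "order-processing": "tier-1",
    "custom-erp":       "tier-2",
    "e-mail":           "tier-3",
}

def placement_for(app: str) -> list:
    """Return the deployment targets permitted for an application."""
    return TIER_PLACEMENT[APP_TIERS[app]]

print(placement_for("order-processing"))  # ['in-house', 'colocation']
print(placement_for("e-mail"))            # ['public-cloud']
```

Keeping the policy in one declarative structure like this makes it easy to audit which workloads may land where, and to adjust the mix as business priorities shift.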

The bottom line: there’s no single tiering approach that will fit all business needs, just as there’s no one data center option that will meet IT’s varied demands. Given how rapidly data is generated, IT will often find that cloud or colocation environments become an extension of an existing in-house data center. Whatever the combination, time spent evaluating an organization’s infrastructure needs up front will ensure it is prepared to accommodate technology’s evolution and turn raw data into real business value.