Len Rosenthal, Vice President of Marketing with Virtual Instruments (www.virtualinstruments.com), says:

Why is Virtual Infrastructure Optimization useful in today’s enterprise data centers? Why should data center and IT managers care about it? How can they benefit from it?

Virtual Infrastructure Optimization is about optimizing the performance, availability and utilization of both the physical and virtual IT infrastructure. Virtualization, both for servers and for storage, is becoming widespread, and somewhere between 30% and 40% of IT applications are now virtualized within a typical data center. For many organizations, this is where “virtual stall” enters the picture. The low-hanging fruit of IT-controlled applications (e-mail, file services, test & dev, etc.) has nearly all been virtualized. Now comes the hard part: virtualizing the business-critical, revenue-impacting applications such as order processing, online commerce, CRM and ERP. These are I/O-intensive applications that usually have strict performance and availability service level agreements (SLAs). To date, most IT managers have been very reluctant to virtualize this class of application because they lack the tools to ensure they can meet those SLAs. Hence, “virtual stall” sets in and aggressive deployment of virtualization stops.

Where should Virtual Infrastructure Optimization rank in terms of overall priority in the data center?

Virtual Infrastructure Optimization typically is not the first thing on the minds of most IT managers. They are focused primarily on keeping their current infrastructure up and running, and secondarily on thinking about ways to expand the use of virtualization. As IT managers try to virtualize business-critical applications, they inevitably run into performance and availability challenges. What’s the easiest way to get around these problems? It’s to throw more hardware at the problem – to massively overprovision the infrastructure. There are two problems with this. First, it is very expensive to run your infrastructure at 3X to 5X its needed capacity and resources. Second, the more hardware and software infrastructure is deployed, the higher the probability of an outage. Common sense and the “laws of physics” tell us that the more devices that are deployed, the higher the likelihood of a device failure. IT managers need to think proactively about how they can cost-effectively optimize the performance and availability of their data centers. They need to deploy tools and processes to constantly monitor, measure, and analyze what is going on in the infrastructure while they are architecting their new virtualized infrastructure, not after it is deployed.
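
To make the availability point concrete, here is a minimal sketch, illustrative only: if each device has an independent probability p of failing in a given period (the 1% figure below is a hypothetical number, not a vendor statistic), the probability that at least one of N devices fails is 1 - (1 - p)^N, and it climbs quickly as the hardware footprint grows.

    # Illustrative only: chance that at least one of N independent devices
    # fails in a period, given a hypothetical per-device failure probability p.
    def prob_any_failure(n_devices: int, p_per_device: float) -> float:
        return 1.0 - (1.0 - p_per_device) ** n_devices

    for n in (10, 50, 200):
        print(n, round(prob_any_failure(n, 0.01), 3))
    # 10 -> 0.096, 50 -> 0.395, 200 -> 0.866: the bigger the overprovisioned
    # footprint, the more likely that something, somewhere, has failed.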

What are the biggest challenges for data center and IT managers when it comes to Virtual Infrastructure Optimization?

The biggest challenge is getting true real-time transaction visibility into the infrastructure. Without real-time visibility, IT managers can’t optimize performance, utilization and availability. According to VMware, over 80% of server virtualization deployments are installed with Fibre Channel SANs. The problem with FC SANs is that unless you have the right monitoring tools, you can’t see inside the SAN. Unlike IP networks, which were designed to be monitored (IP traffic analysis), FC SANs were not. Virtualization can wreak havoc on the SAN, creating massive SCSI reservation conflicts and exposing improperly set SCSI queue depths, so it is critical to see exactly what is happening inside it. Getting real-time transaction visibility is essential to resolving problems quickly and, more importantly, to proactively avoiding performance problems and outages.
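
As a rough illustration of what per-transaction SAN visibility enables, here is a minimal sketch in Python. It is not the VirtualWisdom API; the record fields, the 20 ms threshold, and the helper names are all assumptions made up for the example. It shows how per-exchange latency and SCSI reservation conflicts could be flagged as they occur rather than discovered after an outage.

    # Hypothetical per-exchange records from a SAN monitoring feed; field names
    # and the latency threshold are assumptions for illustration only.
    from dataclasses import dataclass
    from typing import Iterable, List, Tuple

    @dataclass
    class Exchange:
        host: str
        lun: str
        latency_ms: float
        reservation_conflict: bool

    def flag_problems(exchanges: Iterable[Exchange],
                      latency_slo_ms: float = 20.0) -> Tuple[List[Exchange], List[Exchange]]:
        """Split one monitoring interval into SLO violations and reservation conflicts."""
        slow = [e for e in exchanges if e.latency_ms > latency_slo_ms]
        conflicts = [e for e in exchanges if e.reservation_conflict]
        return slow, conflicts

    # Example interval: one slow exchange, one reservation conflict.
    interval = [Exchange("esx01", "lun7", 42.0, False),
                Exchange("esx02", "lun3", 4.0, True)]
    slow, conflicts = flag_problems(interval)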

How can data center and IT managers overcome those challenges?

Data center managers can overcome these problems by deploying real-time (sub-second) monitoring and analysis tools that track every transaction flowing through their SAN. Virtual Instruments’ VirtualWisdom is one such solution. These tools add the critical missing I/O performance and utilization data that VMware vCenter can’t provide, since vCenter only has access to CPU and memory utilization data. Without SAN I/O data on I/O-intensive applications, virtualization load-balancing decisions will typically cause performance problems and lead to poor resource utilization rather than solving existing problems.
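
The gap described here can be shown with a small, hypothetical placement check; the dictionary keys and thresholds below are assumptions for illustration, not anything vCenter or VirtualWisdom exposes under these names. The point is that a host that looks idle on CPU and memory may still be a poor target if its storage path is already saturated.

    # Illustrative only: a placement decision that considers SAN I/O latency
    # alongside CPU and memory. Thresholds and keys are hypothetical.
    def can_accept_vm(host_stats: dict,
                      cpu_limit: float = 0.70,
                      mem_limit: float = 0.80,
                      io_latency_limit_ms: float = 15.0) -> bool:
        return (host_stats["cpu_util"] < cpu_limit
                and host_stats["mem_util"] < mem_limit
                and host_stats["san_io_latency_ms"] < io_latency_limit_ms)

    # A CPU/memory-only view would accept this host; the I/O latency says otherwise.
    print(can_accept_vm({"cpu_util": 0.30, "mem_util": 0.40, "san_io_latency_ms": 42.0}))  # False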

Advice for IT and data center managers who have a plethora of similar solutions to choose from:

Unfortunately, there are lots of tools available from storage and server vendors, as well as from other independent companies, that claim to help solve some of these problems. For example, there are over a dozen virtual server performance monitoring tools, but nearly every one of them looks only at server metrics to assess performance; they have no visibility into SAN I/O. They are fine for CPU- and memory-intensive applications, but they can’t be effectively used for I/O-intensive applications such as those relying on databases like Oracle and DB2. There are also many tools from the storage vendors that look at storage performance and utilization. These are very valuable tools, but they are primarily vendor- or device-specific. They don’t have a real-time view of transactions from the server to the switch to the storage array. They can tell you that you have an I/O problem, but they can’t tell you the source of the problem unless it’s inside the arrays. They simply don’t have a comprehensive view of real-time performance and can’t track every transaction. IT managers need to look at tools that are vendor-independent, truly real-time, and that have a comprehensive cross-domain view of the physical and virtual I/O infrastructure.