– Brook Reams, Architect, Core Technologies, Brocade (www.brocade.com), says:

The first wave of virtualization, mainframe computing, is the grandfather of all virtualization techniques. It’s the ultimate virtualized computing platform and continues to be deployed today. For example, IBM hosts Linux virtual machines on its mainframe platforms, and this option has grown in popularity, extending the mainframe into the 21st century. The next wave, minicomputers in the 1970s and 1980s, also implemented virtualization techniques (anyone remember the VAX cluster?), but that technology wave is largely absent from most data centers today. Shortly after came the UNIX platforms of the 1980s and 1990s. HP, IBM and Oracle/Sun adapted virtualization technologies for their UNIX platforms, and these virtual platforms are running in just about every data center today.

The latest wave of virtualization adoption targets the x86 platforms from Intel and AMD. Microsoft, VMware and Xen provide virtualization software using a technique called a hypervisor. The hypervisor virtualizes the x86 hardware, creating multiple virtual machines, each of which hosts an operating system (Windows or Linux) and its associated application. As servers designed with x86 microprocessors started to appear in the data center, they often were used to run Microsoft’s Windows operating system and various compatible client/server applications. Soon thereafter, servers began to proliferate. Client/server means the application is broken up into components, each of which runs on its own server. For example, the client (your desktop PC) connects over the LAN (or, if you are at a hotel, over the internet) to the data center. That connection is hosted by servers called web servers. The application components run on another set of servers, application servers, and the data is stored in a database hosted on, you guessed it, database servers. Servers, servers everywhere.
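
To make the server sprawl concrete, here is a minimal sketch in Python that tallies how many physical boxes a handful of client/server applications consume when each tier (web, application, database) gets its own servers. The application names and per-tier server counts are illustrative assumptions, not figures from any particular data center:

    # Illustrative only: count the physical servers used when every tier of
    # every client/server application runs on its own dedicated hardware.
    applications = {
        "order-entry": {"web": 2, "app": 4, "db": 2},   # assumed sizes
        "reporting":   {"web": 1, "app": 2, "db": 2},
        "email":       {"web": 2, "app": 2, "db": 1},
        "crm":         {"web": 2, "app": 3, "db": 2},
    }

    total = 0
    for name, tiers in applications.items():
        count = sum(tiers.values())
        total += count
        print(f"{name}: {count} physical servers")

    print(f"Total: {total} servers for just {len(applications)} applications")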

When a new IT project was requested by a business unit, it was common to “add servers” to the data center to host the required application components. Compared to UNIX servers (or even mainframes), x86 servers were pretty inexpensive, so a project-centric “just add more servers” approach to data center infrastructure made economic sense, at least from a capital cost perspective. Fast forward to today. The result of the “add servers” approach is that each of those servers is lightly utilized, takes up space, requires power and cooling, and has to be connected to all the others (as in lots of cables and network switches).

Well, data centers are running out of space, power and cooling, and “spaghetti cabling” has become far too common, hindering routine maintenance and preventing efficient cooling of the servers, which adds even more to the cooling bill. Server virtualization can reduce the number of physical servers by a factor of 10 to 20 by running 10 to 20 applications on a single virtualized server. That’s an enormous reduction in space, power, cooling and cables. This is compelling, and it’s what initially made server virtualization attractive and drove the early adoption of this technology.
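
As a rough, back-of-the-envelope illustration of that claim (the server count, consolidation ratio and per-server power draw below are assumptions, not measurements, and the sketch ignores the fact that virtualization hosts are usually larger machines), the arithmetic looks like this:

    # Illustrative consolidation arithmetic; all inputs are assumed values.
    physical_servers = 200       # servers before virtualization
    consolidation_ratio = 15     # applications (VMs) per virtualized host
    watts_per_server = 400       # average draw per physical server

    hosts_after = -(-physical_servers // consolidation_ratio)   # ceiling division
    power_before_kw = physical_servers * watts_per_server / 1000
    power_after_kw = hosts_after * watts_per_server / 1000

    print(f"Hosts after consolidation: {hosts_after}")
    print(f"Power before: {power_before_kw:.0f} kW, after: {power_after_kw:.0f} kW")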

Virtualization “does more with less”: it improves hardware utilization, which lowers operating cost; it can dramatically reduce the time to deploy or extend the resources applications use; and it makes applications more available. Virtualization groups a few servers together into a cluster where each server can host 10, 20 or more virtual machines, so a small cluster of servers can replace 10 to 20 times as many physical servers. With virtualization, multiple applications securely run on the same server hardware, but each application thinks it has its own dedicated computer system. Hardware resources needed by the application are monitored and can be adjusted automatically as necessary, providing high utilization with corresponding reductions in data center operating cost and less intervention required by server administrators.
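
A minimal sketch of the clustering idea follows. It uses a simple first-fit placement rule and made-up host and virtual machine sizes purely to illustrate how a small cluster absorbs many virtual machines; it is not how any particular vendor’s placement or scheduling actually works:

    # Illustrative first-fit placement of VMs onto cluster hosts.
    # Host capacities (vCPUs, GB of memory) and VM sizes are made-up numbers.
    hosts = [{"name": f"host{i}", "cpu_free": 32, "mem_free": 256} for i in range(3)]
    vms = [{"name": f"vm{i:02d}", "cpu": 2, "mem": 16} for i in range(20)]

    placement = {}
    for vm in vms:
        for host in hosts:
            if host["cpu_free"] >= vm["cpu"] and host["mem_free"] >= vm["mem"]:
                host["cpu_free"] -= vm["cpu"]
                host["mem_free"] -= vm["mem"]
                placement[vm["name"]] = host["name"]
                break
        else:
            placement[vm["name"]] = "unplaced"   # the cluster is full

    for host in hosts:
        placed = sum(1 for h in placement.values() if h == host["name"])
        print(f"{host['name']}: {placed} VMs placed")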

There are unintended consequences for IT operations and application SLAs. Server virtualization software vendors built a LAN switch (in software) inside the hypervisor so they could manage virtual server movement between physical servers in the cluster. That solved the immediate problem of managing server mobility, but it isn’t scalable in the long run. As the number of virtual machines per physical server grows, the workload on the physical server when moving a virtual machine begins to negatively impact all the other applications, and their SLAs, on those servers. And having a LAN switch inside the server complicates IT operations, since the LAN no longer ends at the Ethernet access switch; it extends into the server cluster. That blurs the simple but effective separation of responsibilities between the server and network administration teams, making configuration changes more complicated and time-consuming and increasing the potential for configuration mistakes.
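
To see why migration traffic matters, a rough estimate helps. The virtual machine memory size and link speeds below are assumptions chosen for illustration, and the sketch ignores the re-copying of memory pages dirtied during the migration, which only makes things worse:

    # Rough estimate of how long a live migration saturates a network link.
    vm_memory_gb = 16                         # assumed VM memory footprint
    vm_memory_bits = vm_memory_gb * 8 * 1e9   # GB -> bits (decimal GB)

    for link_name, gbps in [("1 Gigabit Ethernet", 1), ("10 Gigabit Ethernet", 10)]:
        seconds = vm_memory_bits / (gbps * 1e9)
        print(f"{link_name}: roughly {seconds:.0f} seconds of sustained copy traffic")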

Virtualization vendors are working on solutions to these unintended consequences. They have plans to support pass-through switching so the LAN switch, and its configuration and security policies, are once again managed by the network administrator. When virtual machines move, the hypervisor isn’t required to handle that workload spike. Instead, the LAN switch would handle migration traffic, eliminating the negative impact of this high-volume traffic on the hypervisor and the application SLAs it supports. And storage access and management continue to require integration of virtualization with proven shared storage technologies, including Fibre Channel, Fibre Channel over Ethernet and Fibre Channel over IP. Many of the critical applications now being virtualized already use Fibre Channel storage, and moving them onto virtual servers should not require moving their storage off the SAN. As with the LAN, higher utilization of the server when combining 10 or more applications per server also increases the storage traffic load. Higher-speed storage links such as 8 Gigabit and, soon, 16 Gigabit Fibre Channel are able to stay ahead of the growing storage I/O.
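
A simple headroom check illustrates the point. The per-virtual-machine throughput figure is an assumption, and the usable link rates are the commonly quoted payload rates of roughly 800 MB/s for 8 Gigabit Fibre Channel and 1,600 MB/s for 16 Gigabit Fibre Channel:

    # Illustrative headroom check: aggregate VM storage traffic vs. FC link rates.
    vms_per_host = 20
    mb_per_sec_per_vm = 30    # assumed average storage throughput per VM

    aggregate = vms_per_host * mb_per_sec_per_vm
    for link, usable_mb_s in [("8 Gbit Fibre Channel", 800), ("16 Gbit Fibre Channel", 1600)]:
        headroom = usable_mb_s - aggregate
        print(f"{link}: {aggregate} MB/s offered load, {headroom} MB/s of headroom")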

When considering wide-scale virtualization projects, it is important to partner with a vendor that has expertise in data center networking solutions for the access, aggregation and core of the network that are optimized for server virtualization. Brocade has a complete line of data center networking products that allow application SLAs to extend across the fabric, ensuring high-priority applications are segregated from lower-priority ones. We are also working with leading virtualization vendors so their management and virtual server orchestration software can plug into our management framework, providing an in-depth, end-to-end understanding of the storage traffic from the virtual machine to the storage port. Brocade has pioneered innovations such as Adaptive Networking, Advanced Performance Monitoring, and virtual port technology to simplify operation and management of storage in highly virtualized data centers.