CIO

Ghosts in the machine

Virtual machines are changing the way IT thinks about and uses x86-based servers in the data centre.

What started simply as a way to consolidate older, out-of-warranty servers has quickly turned into a new infrastructure building block in Qualcomm's data centre. Virtual machines (VMs) have risen to become a corporate standard for deploying and managing x86-based servers at the semiconductor maker. "We saved in the seven-figure range by not buying servers. Going forward, we're continuing to consolidate, and we're pushing everything we can into the virtual space," says Norm Fjeldheim, Qualcomm's senior vice president and CIO.

Server virtualization software allows applications to sit side by side on the same physical server, yet remain completely isolated, both from one another and from the underlying hardware. Applications within a VM see a dedicated operating system and server. Under the hood, however, a VM monitor allocates a share of the physical server's processor, memory and I/O resources to each VM.
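The division of labour described above can be reduced to a toy model. The host size, VM names and fixed shares below are invented for illustration; real hypervisors schedule these resources dynamically rather than carving them up once.

```python
# Toy model of how a VM monitor divides a physical server among VMs.
# The host size, VM names and fixed shares are invented; real
# hypervisors schedule resources dynamically.

HOST = {"cpus": 4.0, "memory_mb": 8192}

def allocate(host, shares):
    """Split host resources among VMs by fractional share."""
    assert abs(sum(shares.values()) - 1.0) < 1e-9, "shares must sum to 1"
    return {
        vm: {resource: qty * frac for resource, qty in host.items()}
        for vm, frac in shares.items()
    }

vms = allocate(HOST, {"web01": 0.25, "db01": 0.50, "test01": 0.25})
# Each VM sees only its own slice: db01 gets 2.0 CPUs and 4096 MB,
# and remains isolated from web01 and test01.
```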

Virtualization breaks the link between applications and the underlying hardware, removing the common requirement that each application run on its own dedicated server. Adding a virtualization layer adds processing overhead that can range from a few percentage points into the double digits. However, most servers are significantly under-utilized, so the consolidation benefits are often dramatic.
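The trade-off works out as simple arithmetic. The utilization, overhead and headroom figures in this sketch are illustrative assumptions, not measurements from any vendor:

```python
import math

# Illustrative consolidation arithmetic. All figures are assumptions.
servers = 20          # physical servers, one application each
utilization = 0.10    # assumed average CPU utilization per server
overhead = 0.15      # assumed virtualization overhead per workload

# Effective demand per workload once overhead is added:
demand = utilization * (1 + overhead)         # about 0.115 of one host

# Workloads per host, keeping total load under an 80% ceiling:
per_host = int(0.80 // demand)                # 6 workloads per host

hosts_needed = math.ceil(servers / per_host)  # 4 hosts replace 20
```

Even with a 15 percent overhead penalty, the under-utilized servers collapse onto a fraction of the original hardware.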

At Qualcomm, which uses VMware's ESX Server virtualization software, the ratio of VMs to physical servers has been as high as 18-to-1. Some 384 servers now run in VMs that reside on just 35 dual- and quad-processor machines. In all, 40 percent of the x86-based server applications at Qualcomm run on VMs, and that will increase to 50 percent in the next six months, says Paul Poppleton, senior staff engineer at the company.

As application servers continue to scale out, the proliferation of x86-based servers has outstripped the ability of administrators to manage them, says Nigel Dessau, vice president of virtualization solutions at IBM. Businesses today have seven times more servers than they did just 10 years ago, but the cost of managing them is nine times higher, he says. "Virtualization can start tackling that problem," Dessau adds.

Once dismissed as a neat hack that in-house developers used to quickly test software within multiple virtual environments, virtualization technology has taken hold for tasks ranging from consolidation to business continuity and even virtualized symmetrical multiprocessing (SMP) systems.

Early concerns about application support are fading. A few years ago, software vendors baulked at supporting applications running within VMs. Today, bowing to user demand, larger software vendors such as Oracle and Computer Associates support products running within VMs, and vendors of smaller, niche-market programs are increasingly following suit. "We're pushing for all of our suppliers to support VMware," Fjeldheim says.

Three quarters of ESX Server deployments are in data centres, according to VMware, an EMC business unit. Market researcher IDC expects strong growth in VM software between 2004 and 2008, with sales growing 75 percent, to $US261 million, over the four-year period. Those numbers don't account for the expected growth in the adoption of Xen, a free, open-source virtualization program for Linux and BSD Unix servers that's supported by California-based start-up XenSource.

Disaster avoidance

Now that virtualization technology has proved itself as a consolidation tool for the data centre, organizations are pursuing new uses, such as VM portability. An entire VM can be encapsulated in a single disk-image file and quickly deployed on any hardware running the same virtualization software.

"All that's necessary is to copy the file to a disk or tape or send it down the network," says IDC analyst Dan Kusnetzky. "We've seen people use it as a software distribution mechanism." That portability aspect makes VM technology attractive for business continuity as well.
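"Copy the file" is almost literal. A minimal sketch, on the premise that the entire VM (operating system, application and configuration) lives in one disk-image file; the paths and image name here are hypothetical:

```python
# Minimal sketch of VM portability as a file copy. The image path and
# name are hypothetical; the premise is that the whole VM lives in
# one disk-image file.
import shutil
from pathlib import Path

def redeploy(src, dst):
    """Copy a VM disk image onto a standby host's storage."""
    src, dst = Path(src), Path(dst)
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dst)   # one file carries the whole machine
    return dst

# e.g. redeploy("/vmfs/images/webapp01.img",
#               "/mnt/standby/webapp01.img")
```

From there, any standby host running the same virtualization software boots the copied image as if it were the original machine.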

For example, travel consolidator Fun Sun Vacations Ltd first used Xen VMs to consolidate its Linux-based Web application servers. Now it uses VMs as a disaster recovery mechanism. Because a VM is abstracted from the underlying hardware, manager of information services Derek Larke says he can quickly move a critical VM that handles credit card transactions onto any available server in the colocation data centre.

"Usually, at the time of disaster, you are working with blank hardware with nothing on it. We imaged a Xen [VM] and brought it to a blank server, and we had it up and going in about 15 minutes," he says. Previously, Larke notes, "applications that originally would have taken too long to implement in the event of a disaster would have to be preconfigured and running at the colocation site on their own hardware". Now, a single machine can serve as a fail-over machine for multiple VMs and can be made available for other tasks until needed.

Qualcomm uses VMotion, a management utility from VMware that can slide running VMs onto a new physical server with minimal disruption. "We've been able to move processes onto a different physical environment in scenarios where previously we would have lost the processes. Our service levels are up," Poppleton says.

Robert Armstrong, director of technical services at hospitality services vendor Delaware North, says the ability to move VMs between physical systems is also critical for server maintenance in a virtualized environment. Armstrong used VMware to host both Windows and NetWare VMs, reducing the data centre footprint from 12 racks to three. "The maintenance windows shrink dramatically when you have eight or nine virtual machines on one physical device," he says.

Larke says VMware's management tools are the most advanced. "Hands down, VMware is the best out there, the way it manages, the way you can throw around virtual machines," he says. But Larke says that ESX Server, with management software and support for his 14 dual-processor servers, would have cost $US173,000 purchased through IBM. Xen requires more knowledge to run properly, but it's free. Given the cost difference, the tools that come with Xen were "enough for what we need to do", Larke says.

Scaling up

While the most common use of virtualization technology is to break down the resources of physical servers into a series of VMs, it's also possible to go the other way, aggregating server CPUs and even sub-CPU VMs into a single, virtualized SMP system.

Carmine Iannace, manager of IT architecture at Welch Foods, says the one thing he hasn't virtualized is his collection of Oracle database servers, which need at least four processors. VMware currently limits VMs to two processors each, so he is waiting for quad-processor support, which the vendor plans to ship later this year.

VFe, a product announced by start-up Virtual Iron Software in the US, will support up to 16 processors per VM. The system will initially support only Linux VMs; its 16-processor limit reflects the maximum SMP configuration currently supported by Linux. VFe uses high-speed, low-latency InfiniBand host bus adapters and switches to interconnect the physical processors. But Iannace worries that taking this approach would add too much expense for his application. InfiniBand "has to become a commodity item to be useful", he says.

Another product, Virtuozzo, from SWsoft, supports virtual SMPs as large as the physical host system. It can support Linux or Windows Server 2003 VMs -- but not both -- on the same physical hardware. Jack Henry & Associates, a developer of software for banks, is testing Virtuozzo to meet both scale-up and scale-out requirements. The company's system architecture includes several components and requires multiple servers. Since everything runs on Windows Server 2003, Jack Henry & Associates can leverage Virtuozzo VMs to consolidate the system onto fewer servers, including virtual SMPs that range from two to eight processors.

"In banks, real estate is at a premium, so the footprint of the hardware is a huge consideration," says Barry LaLone, server platform architect. Because Virtuozzo's technology doesn't replicate the entire operating system within each VM, the complete system -- 12 VMs in all -- can run using just two Windows Server 2003 licences. With VMware's scheme, LaLone says, he would have had to pay for all 12.
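The licensing arithmetic behind LaLone's point is straightforward. The VM and host counts below follow the article; the per-licence price is an assumption for illustration:

```python
# Licensing arithmetic for containers versus full guest operating
# systems. VM and host counts follow the article; the per-licence
# price is an assumed figure for illustration only.
vms = 12
hosts = 2
price = 1000   # assumed cost of one Windows Server 2003 licence

per_vm_licensing = vms * price      # full guest-OS model: 12 licences
per_host_licensing = hosts * price  # container model: 2 licences
saving = per_vm_licensing - per_host_licensing   # 10 licences avoided
```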

Virtual data centre

Ultimately, virtualization will become just a standard layer of the infrastructure stack, predicts Karthik Rau, director of product management at VMware. IBM has its own virtualization technology for its mid-range and mainframe systems, and Dessau says the company is building tools for a world where IT must manage a mix of VMs running on mainframe, mid-range and x86 processors, and where "islands of virtualization are interconnected across the enterprise". Tools such as Tivoli will manage these resources and dynamically configure and provision virtualized resources as needed, Dessau says.

But for most users, the immediate benefits are what matters. "Virtualization lends itself to virtual firewalls, application isolation, all kinds of neat things," says Welch's Iannace. "It's a very cost-effective, efficient and reproducible approach."

Under the Hood:

The soul of a virtual machine

Although virtualization tools have similar objectives and use a virtualization software layer, called a resource manager or hypervisor, to manage virtual machines, the basic architectures vary.

In software-based VMs, the resource manager sits on top of a host operating system and juggles the requests of multiple guest operating systems loaded on top of it (see diagram). Microsoft Virtual Server 2005 and VMware GSX Server follow this model.

Other products, such as Xen and VMware's ESX Server, run a hypervisor that sits between the guest operating systems and the hardware. Because the software layer sits on the "bare metal", these are sometimes referred to as hardware VMs. Direct contact with the system hardware allows the VMs to work more efficiently.

A third group of products, such as Solaris Containers in Sun Microsystems' Solaris 10 and SWsoft's Virtuozzo, also uses a software-based model but eliminates guest operating systems in favour of "virtualized operating systems", or application containers. Each application appears to have the operating system to itself, but in fact core elements, such as the kernel and system libraries, are shared. This approach is more efficient than running a full-blown guest operating system in each VM, and it saves on software costs because one operating system licence can cover all the VMs on a physical server. But there's a catch: virtual operating systems can support only applications that will run on the host operating system.

IDC analyst Dan Kusnetzky says each approach fits a different need. "Those who need power will want approaches that are very lightweight. Others are more concerned about optimizing resources," he says. "A single approach will not fit the need everywhere." - Robert Mitchell

Server Virtualization

THE GOOD

Consolidation: Users report consolidation efficiencies ranging from a few VMs per processor to as many as 18. Qualcomm consolidated 384 server applications onto 35 physical servers.

Server Deployment: Application servers deployed as VMs can be set up quickly. "It used to take eight hours to put a new application on the data centre floor. With virtual servers, it takes anywhere from 15 to 20 minutes," says Robert Armstrong of Delaware North.

Business Continuity: VMs are hardware-independent. Disk images of a VM can be quickly copied to another server in the event of a hardware failure or for routine maintenance -- without disrupting running processes.

Software Support: An increasing number of software vendors now support their products when running on VMs.

THE BAD

Single point of failure: A hardware failure on a single physical server can take down multiple virtual servers. Delaware North raised its hardware-support contract from a four-hour response to a one-hour response.

Licensing: Software vendors may charge per CPU -- and per VM. In some systems, users must license an operating system for the host and for each VM.

Scaling up: Current products don't work as well for processor-intensive applications or those requiring heavy I/O.

Overhead: Virtualization adds a software layer that can soak up processing cycles. Users and vendors say overhead can range from 2 or 3 percent to as high as 20 percent, depending on the product and application.