Virtualisation describes many different IT disciplines, but for John Holley, IS manager at the RNZFB (Royal New Zealand Foundation of the Blind), it’s “the poor man’s disaster recovery”. The RNZFB is the country’s primary provider of vision-related habilitation and rehabilitation services to the blind, deaf-blind and the vision impaired. It has 272 full-time-equivalent staff in 18 offices nationwide. But when Holley joined the foundation around three years ago, he found its IT infrastructure in need of TLC. “I had some pressing issues to address within the organisation as far as security and business continuity were concerned,” he says.
Initially Holley looked at installing fewer and more powerful servers, rather than replacing each one individually. “But then we ran into the normal problems: trying to get different vendor software to run on the same server.” He had used virtualisation technology before and decided to take the plunge. “As an IT manager you sometimes have to decide you have enough information and be confident enough in what you’ve seen to make a decision.”
It was around this time that he attended a VMware presentation, and its approach to virtualisation made sense to him, he says.
VMware’s VMotion technology allows IT managers to make dynamic changes without impacting users, providing rapid reconfiguration and optimisation of resources across the virtual infrastructure. Holley found he could get more bang for his few bucks by virtualising rather than simply swapping old physical servers for new ones: numerous physical servers were replaced with a consolidated environment of two IBM 445s running VMware virtualisation software. And so the RNZFB became one of VMware’s first major New Zealand VMotion installations.
Prior to implementation, Gen-i (Computerland, as it was) tested the servers in its lab, convincing Holley that the project would be a success. “I put a formal business case to the CEO and CFO. I already had all the capex sitting there, but it was in a lot of different, smaller projects. The advantage was that we could pull it all together and combine it into one project.”
Dubbed ‘Project Concord’ by the RNZFB’s Computerland account manager, the project has given Holley a robust server environment and significantly reduced downtime.
Although business continuity was the RNZFB’s original driver, virtualisation has also eased management and exception reporting, simplified the IT infrastructure and removed potential points of failure. And adding a large IBM tape library and a storage area network (SAN) means everything on the RNZFB’s network can now be backed up to one device, avoiding the need for multiple tape units on a variety of servers and the associated network bandwidth problems.
Not hot, but warm
A ‘hot site’ — allowing the RNZFB to continue its network operations in the event of an equipment disaster — would have been the best-case scenario, but Holley grants that’s hardly practical for an organisation of its size. “As a charity, we’re not rich enough to be able to afford a true hot site.”
Instead, Holley settled on dual-path fibre, dual Ethernet, dual uninterruptible power supplies and battery backup. “We’ve removed as many of the points of failure in our environment as we possibly could, so I know we can pretty much keep everything going as long as one of the physical servers is running.”
If both servers were to die it would be a different matter, Holley acknowledges, but he considers the likelihood of that happening to be low. “VMware VMotion technology lets us bring one of the servers up on the other server fairly easily — and, in fact, with VMware 3 that’ll happen automatically, which will be even better.”
Dead in the water
Some IT experts consider an ideal system to be one in which each application runs on a separate physical server, as this provides IT managers with maximum control, minimal interaction and allows them to run multiple operating systems.
For most organisations the cost of this is prohibitive, but all of the RNZFB’s systems were running on individual, physical servers, including Jade Community Care, an application mission critical to the organisation because it’s what allows staff to deal with foundation members. “If that server failed it would be fundamental to our field staff,” says Holley. “We have staff using it around the country and we really would be dead in the water.”
In order to mitigate this risk, he virtualised the Jade production system. Within the new environment, if one of the physical servers dies, Jade Community Care can quickly be called up on another physical server.
Upbeat about uptime
For Holley, the fact that the virtualisation is invisible to most staff is gratifying, if slightly frustrating. “Most of the foundation people didn’t even notice,” he says. But it’s a different matter when it comes to Holley’s IT team: “My staff love it. It’s a simple, straightforward environment — they’re not reactively managing servers, we’re into proactive management now.”
As a service unit to the foundation, Holley’s team measures its success chiefly by network uptime, which he describes as having been “phenomenal” since go-live. “I would say we’re well past 99.5% uptime, which, for an organisation like ours, is pretty good.”
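To put that figure in context, an uptime percentage translates directly into allowable downtime per year. The 99.5% figure is Holley’s; the conversion below is a purely illustrative back-of-the-envelope sketch:

```python
# Convert an uptime percentage into implied downtime per year.
HOURS_PER_YEAR = 365 * 24  # 8760 hours in a (non-leap) year

def downtime_hours(uptime_pct: float) -> float:
    """Hours of downtime per year implied by a given uptime percentage."""
    return HOURS_PER_YEAR * (1 - uptime_pct / 100)

print(round(downtime_hours(99.5), 1))  # 43.8 hours/year, roughly 3.6 hours/month
print(round(downtime_hours(99.9), 2))  # 8.76 hours/year
```

So “well past 99.5%” still allows for a day or two of cumulative outages a year, which is why the sidebar advises putting a measurement system in place before go-live: without a baseline, the improvement can’t be demonstrated.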
The RNZFB is now looking at software tools that will allow it to monitor its multiple virtual servers separately from its physical environment.
Fifteen steps to successful server virtualisation
PROJECT CONCORD’S DRIVER was business continuity; the enabler was virtualisation in the form of consolidated servers.
These are the main lessons learned by John Holley, IS manager at the Royal New Zealand Foundation of the Blind (RNZFB), in the course of his VMware virtualisation, Project Concord:
1 Research the market, but then trust your ability to make decisions based on what you’ve seen.
2 Don’t let unfamiliarity restrict you to using virtualisation only in your development and test environment. Get your mission-critical applications on there.
3 Use virtualisation to consolidate funds earmarked for capital expenditure on a variety of smaller technology projects into one major project.
4 Examine the risk of outages and downtime to your business through risk analysis. Virtualise what is mission critical first, so as to provide redundancy.
5 Don’t under-spend on your SAN, but …
6 Mix and match your storage. While you may need fast storage for your databases, do you really need it for file storage? Go for the lowest cost that delivers the performance you need.
7 Focus your IT team on day-to-day issues and providing legacy support. Have your integrator work on the migration, installation and configuration of your virtualised systems.
8 Have business analysts test the systems before your new virtualised environment goes live.
9 Treat your virtual servers as part of a processing pool, rather than as physical devices.
10 Think about capacity. Aim for at least two servers with the same RAM and processing power.
11 Budget for more storage than you currently need. Remember, provisioning new virtual machines is simple.
12 Have your integrator provide professional project management, to ensure your project comes in on time and budget.
13 Use uptime to measure the success of your project. Ensure you have a system in place for measuring uptime now, so you have a sound basis for comparison.
14 Look at software tools to monitor your multiple virtual servers separately from your physical environment.
15 Use virtualisation to isolate at-risk devices from the rest of your IT environment — if a virtual server crashes, it doesn’t have to stop anything else.
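Steps 10 and 11 imply a simple capacity rule for a two-host setup like the RNZFB’s: either host must be able to carry every virtual machine on its own if the other fails. A minimal sketch of that check follows; the host sizes, VM figures and headroom factor are invented for illustration, not taken from the RNZFB’s environment:

```python
def survives_single_host_failure(host_ram_gb: float,
                                 vm_ram_gb: list,
                                 headroom: float = 0.9) -> bool:
    """True if a single host can run every VM at once, reserving
    some headroom for the hypervisor and memory spikes."""
    return sum(vm_ram_gb) <= host_ram_gb * headroom

# Hypothetical figures: two identical 32 GB hosts, five VMs.
vms = [4, 4, 8, 6, 4]  # GB of RAM per virtual machine
print(survives_single_host_failure(32, vms))  # True: 26 GB fits in 28.8 GB
```

The same check, repeated whenever a new virtual machine is provisioned, keeps the “bring one server up on the other” failover promise honest.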
What is virtualisation?
SOME FIND ‘the abstraction of resources’ intangible as a definition of virtualisation, and who could blame them? After all, the word is used to describe everything from the partitioning of disk space (so-called hardware virtualisation) to the optimising of processing power and storage.
One UK IT manager CIO spoke to, Gareth Jones at Grampian Prepared Meats in Gwynedd, Wales, professed not to know a lot about virtualisation and expressed a degree of scepticism, likening it to the “revolution” of drive-mapping when Windows 95 was released and the hype around thin-client applications. “It seems to me to just be doing the whole load-balancing act that techies normally do.”
Kernelthread.com’s ‘Introduction to virtualisation’ provides perhaps the easiest-to-understand definition: “A framework or methodology of dividing the resources of a computer into multiple execution environments, by applying one or more concepts or technologies, such as hardware and software partitioning, time sharing, partial or complete machine simulation, emulation, quality of service and many others.”
However, virtualisation can also be used to do the opposite; for example, to make a number of hard disks appear as a single logical disk, through a virtualisation layer.
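That aggregating direction can be sketched in a few lines: a virtualisation layer presents several physical disks as one logical disk simply by translating a logical block number into a (disk, local block) pair. The simplified concatenation scheme below is purely illustrative — real volume managers add striping, mirroring and metadata on top:

```python
class LogicalDisk:
    """Present several physical disks as one logical disk by
    mapping a logical block number to (disk index, local block)."""

    def __init__(self, disk_sizes):
        self.disk_sizes = disk_sizes  # capacity of each physical disk, in blocks

    def locate(self, logical_block):
        for disk, size in enumerate(self.disk_sizes):
            if logical_block < size:
                return disk, logical_block
            logical_block -= size  # skip past this disk's blocks
        raise ValueError("block beyond end of logical disk")

# Three disks of 100, 200 and 50 blocks appear as one 350-block disk.
vdisk = LogicalDisk([100, 200, 50])
print(vdisk.locate(150))  # (1, 50): block 150 lives on the second disk
```

Everything above the virtualisation layer sees a single 350-block device; only the layer itself knows there are three disks underneath.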
Its main benefit is in optimising systems, and today there are three main areas where virtualisation has rapidly gained enterprise acceptance:
1 Network virtualisation, by which the available resources in a network are combined: the available bandwidth is split into independent channels that can be assigned to a particular server or device in real time, masking the complexity of the network by separating it into manageable parts (just as partitioning a hard drive makes it theoretically easier to manage your files).
2 Storage virtualisation, by which physical storage from multiple network storage devices is pooled into a virtual, single storage device managed from a central console (commonly in the form of a storage area network).
3 Server virtualisation, by which server resources (individual physical servers, processors and operating systems) are masked from users, increasing resource sharing and utilisation and providing the capacity to expand.
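The first of these — carving a link’s bandwidth into independently assignable channels — can be illustrated with a toy allocator. The device names and link speed below are hypothetical, and real network virtualisation works at the switch and driver level rather than in application code:

```python
class NetworkPool:
    """Toy model of network virtualisation: split one link's
    bandwidth into channels assigned to named devices."""

    def __init__(self, total_mbps):
        self.free_mbps = total_mbps
        self.channels = {}  # device name -> allocated Mbps

    def assign(self, device, mbps):
        if mbps > self.free_mbps:
            raise ValueError("not enough bandwidth left in the pool")
        self.free_mbps -= mbps
        self.channels[device] = self.channels.get(device, 0) + mbps

    def release(self, device):
        self.free_mbps += self.channels.pop(device, 0)

# Hypothetical 1000 Mbps link shared by two servers.
pool = NetworkPool(1000)
pool.assign("app-server", 400)
pool.assign("file-server", 300)
print(pool.free_mbps)  # 300 Mbps still unassigned
```

Each consumer sees only its own channel; the pool hides the fact that everyone shares one physical link.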
It has to be said that well-intentioned gobbledegook from IT suppliers and consultants with a vested interest in selling virtualisation technologies doesn’t help to make the discipline any easier to understand. Alessandro Perilli’s ‘What is virtualisation?’ webcast claims to be for both a “technical and non-technical audience”, but succeeds mainly in obfuscation, thanks to a curious English translation badly read by an overstretched text-to-speech simulator.
The analyst firms, too, are at cross purposes in their virtualisation advice. In 2004 Gartner predicted companies could make massive savings by using virtualised networking in place of a selection of hardware devices. Since then, it has also recommended virtualisation as a way to cut laptop lifecycle costs. But the analyst organisation cautions IT asset managers and IT procurement teams to work with software vendors to establish licensing models that take virtualisation into account.
Forrester Research, meanwhile, focuses on server virtualisation, which it says has quickly surpassed other forms of infrastructure virtualisation, including that for storage and networks. Globally, says Forrester, 26% of organisations surveyed have implemented server virtualisation technologies, and 8% more will pilot it this year.
The technology used by VMware to create virtual machines for running PC operating systems is complex and can incur performance overheads if not correctly implemented. However, a 2006 Goldman Sachs IT spending survey of 100 IT executives, half of them CIOs, confirmed VMware to be the most-purchased virtualisation technology in the US, and recent NZ roll-outs, such as the RNZFB’s server and storage virtualisation project, suggest it isn’t being ignored here, either.
Virtualisation definitions and resources
Virtual Strategy Magazine: news and information about virtualisation technologies
Wikipedia’s definition of virtualisation includes some helpful external links
Kernelthread’s ‘Introduction to virtualisation’
‘Making servers virtually better’: a ZDNet article on maximising