There's probably one humming away in the depths of your building right now, and chances are it is causing headaches. As the electronic heart of an organisation, a data centre underpins virtually every daily activity. From storing critical data to housing application servers, it is an expensive but necessary resource.
It is also a resource facing some big challenges. Burgeoning demand for computing power and rising tides of data are combining to leave existing facilities groaning under the pressure. Finding the most effective way to overcome the issue is topping the "to-do" lists of IT managers and chief information officers.
One problem is that most of the existing corporate data centres were not designed to house the type of hardware now being rolled out by organisations. The current generation of power-hungry blade servers and massive storage disk arrays need far more resources than the boxes they replace.
Gartner analyst Rakesh Kumar says most data centres built in the past 10 years are "functionally obsolete", relative to the power and cooling requirements of new equipment.
He says research has found that more than 50 per cent of Global 1000 organisations will have to modify their existing data centres by 2012. At the same time, the total energy consumed by the world's data centres will double between now and 2011.
"Putting aside vendor and press hype, the real issue is that during the past three years, users have increased their need for data centre space by 5 to 10 per cent a year," Kumar says. "This is in addition to about five years with relatively no investment in new data centre facilities."
It means many organisations will have to spend a significant amount of money on their data centres in the short term. Deciding how these investments should be made is no easy task.
Queensland University of Technology infrastructure services manager Warren Fraser is well aware of the challenges an ageing data centre can present.
To service the needs of about 40,000 students and 5000 staff, QUT has two data centres. The primary one is located in a 30-year-old building on the university's city campus in Brisbane. This is supported by a second, smaller facility in the inner-Brisbane suburb of Kelvin Grove.
"We have just commenced a project to design an upgrade to the power distribution and back-up capabilities in our primary data centre," Fraser says. "That is a significant and very important project for us."
The task of upgrading is both expensive and complex because of the age of the primary facility. Equipment is being installed to monitor power consumption, and cooling equipment has been updated to cope with the growing load.
The newer, secondary centre is designed both as a disaster-recovery site for the university and as a facility to support a planned server consolidation project. The project, which started six months ago, involves reducing the number of servers spread around the university and centralising them within the data centre.
QUT is also using Oracle Clusterware software to create a single virtual data centre from the two physical sites. The goal is to have all the key applications and data replicated in both sites, allowing seamless change should a problem strike either one.
A recurring theme for many organisations staring down the barrel of a data centre upgrade is energy. As they opt to replace existing servers with new-generation blade systems, organisations have to deal with feeding and cooling their new computer equipment.
One of the biggest selling points of blade computers is the reduced floor space they occupy. By sharing facilities in a chassis-based infrastructure, individual blades can provide the computing power of much larger boxes.
But compressing this power into smaller spaces brings with it the challenge of cooling. If servers are not kept below about 21 degrees at all times, they start to malfunction. The high-density nature of a fully laden chassis means existing cooling systems within data centres cannot cope with the amount of heat produced.
Organisations must choose between spending up big on air-conditioning systems (and paying higher power bills) or spreading their chassis and racks out, negating the space-saving attributes of having blades in the first place.
Data centre and storage services manager at Hitachi Data Systems, Tim Munn, says putting blade installations into an existing data centre can quickly lead to problems that cannot easily be controlled using existing cooling systems.
Munn is responsible for three data centres - a 550-square-metre facility in Brisbane and two 350-square-metre facilities in Canberra. These serve a range of corporate and government organisations.
"We've been seeing the impact of new kit, particularly in terms of density and weight, power consumption and heat, which a lot of existing data centres are struggling to cope with," Munn says. "You might be talking about the same number of servers but when you put them in a chassis as blades, the heat output means you are facing hot spotting issues."
Munn says while the classic data centre design of alternate hot and cold aisles is still valid, operators must now either disperse their equipment or add supplemental cooling. This can come at a significant cost.
Another limiting factor is supplying sufficient power to run the new, denser equipment. Traditionally, data centres would happily hum away using about 1000 watts of power per square metre; new server equipment and cooling can demand more than four times that amount.
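A quick back-of-the-envelope calculation shows how sharply that jump bites. The sketch below uses the per-square-metre figures quoted above; the 350-square-metre room size (borrowed from the Canberra facilities mentioned earlier) and the flat "four times" multiplier are illustrative assumptions, not measured values.

```python
# Rough power-budget sketch using the per-square-metre loads quoted above.
# Room size and the 4x blade-era multiplier are illustrative assumptions.

ROOM_AREA_M2 = 350        # hypothetical existing data centre floor area
LEGACY_DENSITY_W = 1000   # traditional load: ~1000 W per square metre
BLADE_DENSITY_W = 4000    # new kit plus cooling: more than four times that

legacy_load_kw = ROOM_AREA_M2 * LEGACY_DENSITY_W / 1000
blade_load_kw = ROOM_AREA_M2 * BLADE_DENSITY_W / 1000

print(f"Legacy load:    {legacy_load_kw:.0f} kW")  # 350 kW
print(f"Blade-era load: {blade_load_kw:.0f} kW")   # 1400 kW
```

On those assumptions, the same room goes from drawing roughly 350 kW to well over a megawatt - which is why Durkin's substation-upgrade figures, quoted later, run into the hundreds of thousands of dollars.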
Another challenge is the physical weight of equipment being deployed. Getting more kit into a room might be one thing, but being sure the floor can support it is another. With many data centres located within high-rise buildings, slab ratings and structural checks become important.
So, with such weighty and expensive challenges facing operators of in-house data centres, it comes as little surprise that there's increasing interest in outsourcing the whole operation.
Around Australia, and the globe, companies are investing millions of dollars in greenfield data centres. They are cashing in on rising demand by either renting floor space or providing on-demand computing facilities.
Simon Durkin, director of sales at data centre operator Interactive, says his company has seen a dramatic increase in demand for outsourced facilities.
"One of the main drivers is power," he says. "Companies might have the space but they don't have the power coming into the building. They might be looking at the upgrade of a substation, which can cost anywhere between $500,000 and $1 million."
Interactive recently opened a new data centre facility near Brisbane, and operates others in Melbourne and Sydney. The company expects customer demand to continue growing.
Durkin says organisations thinking about outsourcing their facilities can come up against timing issues. Many have to predict what their requirements will be 12 or 18 months out, then look for a facility that will have capacity coming online around that time frame.
He says many companies choosing to outsource want to rid themselves of the cost and complexity of managing a data centre. He points to one customer, a large international car company, that has handed over the running of its underlying IT infrastructure environment, but kept control and management of its key software applications.
"We do all the behind-the-scenes stuff," Durkin says. "They don't want to worry about generators, UPS, security and all those factors. They just want to focus on their applications."
Such an approach can be attractive for organisations that don't want the cost associated with infrastructure management but still need to get a competitive advantage from their applications.
Another key driver for many companies is a desire to move their data centre facilities away from central business district locations. Durkin says this could be a need to free up floor space, or a security issue.
While potential terrorist attacks in Australia's major cities are a concern, a more likely disruption is from cuts to power or telecommunications links. By moving data centres away from CBD areas, Durkin says, companies can be more confident their operations will continue should anything happen in the city centre.
Hitachi's Munn says another reason to outsource is a desire to have a physical separation between a core data centre and a company's disaster recovery facility. Previously, backing up to a different part of the same building or campus was acceptable, but this is no longer considered adequate. In these cases, an organisation may opt to keep its existing data centre in place but use outsourced resources to provide DR and back-up capabilities.
There is another major advantage; as electricity prices rise, being able to spread the costs of running a large centre across a number of users can lessen the pain.
Boxing clever on campus
For one educational facility, the decision about whether to keep its data centre facilities on campus or outsource them came down to one factor: cost.
The IT infrastructure at Melbourne's Box Hill TAFE college had been supported by a data centre that was more than 20 years old and lacked the physical features needed to support future growth.
For TAFE IT services manager, Chris Tayler, it was not a case of whether the facility needed to be replaced, but rather finding the most efficient, cost-effective way of doing it.
Tayler says the ageing centre had reached its capacity and there was no way of expanding it to cope with rising demands from staff and students. After looking at alternatives, it was decided to design and construct a new facility on the campus.
"It was a major challenge finding a suitable location for the new centre within our current buildings," Tayler says. "We had to work around what could be moved without removing existing classrooms or income-producing areas."
He says outsourcing was considered but rejected after close examination of the costs associated with facilities rental and data transmission.
"Had I been able to find a facility close enough that met my needs, it would have been a no-brainer," he says. "But given that the closest suitable facility was a few kilometres away, the cost of the fibre and the infrastructure in the ground as well as staff travel was going to kill me."
Instead, a floor within a campus building was earmarked and work began on fitting it out for the new centre. Power and cooling infrastructure was designed and installed, as were racks of new Dell blade servers and a new storage area network.
The new facility, which supports about 3500 desktops, has roughly 100 physical servers and some 15 terabytes of data storage capacity.
Tayler says he is hoping to get better performance from his hardware by using virtualisation software from VMware. "I was keen to have larger servers doing multiple things rather than the one-for-one situation [we had]," he says.
At present, the 100 physical servers are supporting about 160 virtual servers, and Tayler expects to extend this further in the future. But, rather than reducing the number of boxes, the plan is to allow the existing hardware to support growing user demands.
"We could look at reducing the number of servers by about 60 per cent, but [demand] will probably grow 60 per cent at the same time," he says.
"The number of actual boxes won't change, but we will just be doing 60 per cent more with them."
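Tayler's consolidation numbers can be sketched as back-of-the-envelope arithmetic. The figures below are derived only from the counts he quotes (100 physical hosts, 160 virtual servers, 60 per cent either way); the per-host workings are illustrative, not Box Hill measurements.

```python
# Sketch of the consolidation arithmetic Tayler describes.
physical_servers = 100
virtual_servers = 160   # current ratio: 1.6 virtual servers per physical box

# A 60 per cent reduction in physical servers would leave...
reduced_fleet = physical_servers * (100 - 60) // 100   # 40 boxes

# ...but demand is expected to grow 60 per cent over the same period:
grown_workload = virtual_servers * (100 + 60) // 100   # 256 virtual servers

# Keeping the full 100-box fleet instead absorbs that growth: each host
# runs about 2.56 virtual servers rather than 1.6 - "60 per cent more"
# work from the same number of boxes.
vms_per_host = grown_workload / physical_servers
print(reduced_fleet, grown_workload, vms_per_host)
```

In other words, virtualisation here is being used not to shrink the fleet but to hold it steady while the workload climbs.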
The cooling challenges faced in the old centre have been overcome through a combination of better air-conditioning design and more power-efficient blade hardware.
"I was aiming for a situation where, if something failed, I could easily pull it out and replace it. Blades now allow us to do this," Tayler says.
With the main centre now operational, he is turning his attention to a secondary facility about three blocks away. This centre is used as a warm recovery facility, but Tayler plans to make it a more integral IT resource.
Fairfax Business Media