Product Review: Bantam blade battle

The current generation of blade servers is finally ready to fulfill the promise of offering state-of-the-art computing power and connectivity at a fraction of the size of a conventional 1U or 2U, rack-mounted, general-purpose server. And it's about time, too. The first generation of blade servers was optimized to maximize rack density at the expense of performance and capacity. So until now, blades have been best-suited for niche applications. You could pack a lot of servers into a rack all right, as long as you didn't mind if they had slow Pentium III processors, limited storage space, and few networking options.

But that's all changed, as you can see in today's crop of high-horsepower blade servers, which offer high performance in a smaller, easier-to-deploy form factor.

A blade server, due to its smaller size, will never provide all the functionality of a 1U server, but today's blades come close. Yes, there are still trade-offs -- even the most modern blade servers lack the expandability and high-availability features of a standard server. However, they make up for it in convenience and density.

The best blade applications require that a large number of servers be packed into a tight area, and the servers themselves must be considered as replaceable units. Ideal applications include Web server farms, hosting environments, and database clusters.

They're also good for field offices. A single small rack can contain dozens of servers that can be connected or replaced by nontechnical staff.

When you add external storage, either with Ethernet-based NAS or FC (Fibre Channel)-based SANs, a dual-processor blade can be quite a powerful server. All of the servers we tested can be connected to FC-based SANs with the appropriate daughter cards installed in each blade -- another difference from previously tested blade servers.

Blades may also find a new home in SOA (service-oriented architecture). But to accommodate the move toward SOA, blade designers must continue to simplify their automated deployment and management tools, so individual server blades can become invisible, plug-and-play components of a datacenter.

Let the duel commence

InfoWorld invited four vendors -- Dell Computer Corp., Hewlett-Packard Co., IBM Corp., and RLX Technologies Inc. -- to participate in a comparative review of blade servers. This test focused on dual-processor Xeon blades, because that is the practical minimum hardware requirement for running current operating systems, such as Windows Server 2003, and enterprise-class applications, such as BEA Systems Inc.'s WebLogic, IBM's WebSphere, Oracle, or Microsoft Corp.'s Exchange.

Of the four, Dell was unable to participate because the company is currently shipping only a Pentium III-based blade, reviewed earlier this year.

RLX's ServerBlade 2800i squeaked into first place, due largely to the quality of its management software. IBM, with its eServer BladeCenter HS20, came in a close second, thanks to the highest density in this class and superior hardware design. HP's ProLiant BL20p G2 boasts the fastest processors, but its blades are larger and pricier than the competition's, negating many of the benefits of the blade architecture.

RLX ServerBlade 2800i and System 600ex

RLX's solution, the ServerBlade 2800i and System 600ex, delivers clever advantages: outstanding Web-based management of all physical components; a powerful deployment application that runs on its own dedicated blade server; and, a nice touch, small LCD panels on each blade that display system status.

The company's hardware has matured beyond the bare circuit-board design of its previous offerings. The new System 600ex enclosure (incompatible with previous RLX models) is 6U (10.5 inches) high and provides cooling for its 10 server blades. It holds three hot-swap power supplies, and three hot-swap fan modules in the rear of the enclosure (plus a few more at the front) push a lot of air through the system. The effective size of each server is 0.6 rack units, for a maximum of 70 servers in a 42U rack.

The 600ex enclosure also has a passive backplane that can be accessed not only by the blades but also by as many as four separate control modules that plug into the back. The test system contained two modules: one to actively manage the hardware and a second to provide pass-through from each server's network interfaces to a bank of RJ45 connectors. The company says it's working with partners to build a Gigabit Ethernet switch module for introduction later this year. That would be an improvement.

The RLX ServerBlade 2800i servers themselves have dual 2.8GHz Xeon processors and a 533MHz front-side bus, but they're short on storage, with only a single 60GB 2.5-inch ATA hard drive, and come with just 1GB of memory installed. RLX also offers 3.06GHz processors and 100GB drives.

The servers also offer dual Gigabit Ethernet interfaces, a Fast Ethernet dedicated management port, and a daughter-card slot that can handle an FC host-bus adapter. One of the ServerBlade 2800i cards in our test, which was preloaded with Windows 2000 Server, had KVM capabilities through a dedicated front-mounted VGA connector and a USB port for keyboard and mouse. The other blades, preloaded with Red Hat Inc. Linux, did not have the KVM hardware.

RLX's secret sauce is manageability. The management processor inside each blade and each 600ex enclosure gives solid Web-based control of every aspect of the hardware, down to power-cycling individual components.

The company's Control Tower XT app, which runs on a dedicated blade, automates OS and software deployments, turning a blade into a no-brains-needed, field-replaceable unit. When the system is properly configured, just pop out a bad server and slide in a new one; Control Tower will automatically deploy a software image and bring the new server online.
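
To make that workflow concrete, here's a minimal, hypothetical sketch of the swap-and-redeploy pattern a tool such as Control Tower automates. The function names, the inventory format, and the image names are illustrative assumptions for this sketch, not RLX's actual interface:

```python
# Hypothetical illustration of the swap-and-redeploy pattern described above.
# These functions are stand-ins, not RLX Control Tower APIs.

# Simulated enclosure inventory: slot number -> blade serial (None = empty bay).
inventory = {1: "BL-1001", 2: "BL-1002", 3: None}

def enclosure_slots():
    """Stand-in for querying the enclosure's management processor."""
    return dict(inventory)

def deploy_image(slot, image):
    """Stand-in for pushing a stored OS image to the blade in a slot."""
    print(f"slot {slot}: deploying {image} and bringing the server online")

known = enclosure_slots()   # snapshot the inventory we expect
inventory[2] = "BL-2044"    # simulate swapping a failed blade for a new one

for slot, serial in enclosure_slots().items():
    # A new serial number in a known slot means a blade was replaced;
    # redeploy that slot's assigned image so it comes up ready to serve.
    if serial and serial != known.get(slot):
        deploy_image(slot, image=f"slot-{slot}-golden-image")
```

The point of the pattern is that the slot, not the individual blade, owns the software image, which is what lets nontechnical staff perform the physical swap.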

The rest of the industry could learn a lot from RLX's approach to server management. By contrast, RLX still has a bit to learn about hardware design: the RLX blade is less field-serviceable than either the IBM blade, which opens with a touch, or HP's offering, with its hot-swap SCSI drives.

IBM eServer BladeCenter HS20 and BladeCenter

IBM, the newcomer to the blade-server market, has achieved a higher-density dual-Xeon solution than either HP or RLX. The IBM BladeCenter enclosure holds 14 server blades in a 12-inch (7U) box with internal power supplies and a built-in KVM. At a net size of 0.5U per server, IBM can pack as many as 84 servers into a standard 42U-high rack.

IBM's enclosure includes everything but the kitchen sink, equipped with four hot-swappable power supply bays and four switch module bays, all readily accessible from the rear of the server. The system IBM provided for testing came outfitted with two power supplies and a Gigabit Ethernet switch module, ample for the four server blades. IBM also offers FC and RJ45 pass-through modules.

The enclosure has two large, removable fan modules with louvered doors, so when the enclosure is powered down, you're less likely to get dust or bugs inside. There are no fans within the server blades themselves, which reduces the number of moving parts that might fail to one: the laptop-size hard drive.

Unique among those tested, the BladeCenter enclosure contains floppy and CD-ROM drives, which can be assigned to any of the servers at the press of a button. This provides the simplest, most direct way for administrators to install software onto a blade without having to use IBM's cumbersome Director 4.1 management software.

IBM offers only one blade model, the HS20, which, at a maximum clock speed of 2.8GHz, runs slightly behind competitors. Each server has two Gigabit Ethernet NICs (network interface cards), in addition to a separate Ethernet interface for its management processor.

The four HS20 servers IBM provided (model 867851X) had dual 2.6GHz Xeon processors with a 400MHz front-side bus, a 40GB nonswappable 2.5-inch ATA hard drive, and 768MB of RAM, and were preloaded with Windows 2000 Server. IBM also offers the blade with one or two non-hot-swappable SCSI drives, for a maximum internal storage of 146GB.

The HS20 servers are also the easiest to maintain. The blades have a swing-up top that provides unfettered access to all components, and nearly everything is removable without tools, including the hard drive. IBM's LightPath diagnostic system can identify bad components, such as memory chips, even when the server is removed from the enclosure. Each server also accepts a removable FC daughter card, or a second 2.5-inch ATA drive can be installed in its place.

IBM's management side is weak. Director 4.1 consists of several loosely integrated programs (preinstalled onto a ThinkPad notebook provided for this test) that administer the enclosures and blades. These Windows-based tools are functional and make it possible to capture and deploy server images, but the Web-based platform provided by RLX's Control Tower is more robust and easier to use.

HP ProLiant BL20p G2 and p-Class Server Blade Enclosure

Of all the blade systems in this roundup, Hewlett-Packard's system offers the fewest compromises. The flip side is that HP's server blades are larger than those of the competition, negating many of the high-density benefits of blades.

HP's p-Class Server Blade Enclosure is 10.5 inches (6U) high and contains spaces for eight of the BL20p G2 (or BL20p original flavor) blades. It also has two dedicated slots for an RJ45 patch panel or Gigabit Ethernet switches for the blade system.

Already, that's the lowest server density in the review -- but it gets lower. Unlike IBM and RLX, HP uses a separate rack-mount enclosure for its six hot-swap power supplies; the 3U BL p-Class power enclosure can drive three blade enclosures. That is, 24 server blades consume 21U of rack space, for only 48 servers in a standard 42U rack.
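
A quick back-of-the-envelope check of the density figures in this review, written as a short Python sketch; amortizing HP's 3U power enclosure across the three blade enclosures it drives is my own arithmetic, based on the numbers each vendor quotes:

```python
# Rack-density math from the figures in this review; a standard rack is 42U.
RACK_U = 42

# name: (enclosure height in U, blades per enclosure, amortized extra U per enclosure)
systems = {
    "RLX System 600ex": (6, 10, 0),  # power supplies and fans are internal
    "IBM BladeCenter":  (7, 14, 0),  # power supplies are internal
    "HP p-Class":       (6, 8, 1),   # one 3U power enclosure per three 6U enclosures
}

for name, (height, blades, extra) in systems.items():
    footprint = height + extra                    # effective U per enclosure
    per_server = footprint / blades               # effective U per server
    max_servers = (RACK_U // footprint) * blades  # whole enclosures per rack
    print(f"{name}: {per_server:.2f}U per server, up to {max_servers} per rack")
```

The output matches the article's figures: 0.60U and 70 servers for RLX, 0.50U and 84 for IBM, and roughly 0.88U and 48 for HP.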

The individual server blades sport dual 3.06GHz Xeon processors with a 533MHz front-side bus and 512MB of RAM. There are three usable Gigabit Ethernet ports per server, with another Fast Ethernet port for HP's excellent iLO (Integrated Lights-Out) management processor. There is no KVM capability in the HP blades, but iLO can provide a browser-based graphical console over the LAN. HP also offers a four-processor server blade that fits into the same enclosure.

Unique among the blades tested is HP's high-availability storage; there's an integrated Ultra320 RAID controller and two 3.5-inch hot-swap SCSI drive bays in each server. The three BL20p G2 servers were equipped with 36GB drives, set up in a mirrored configuration. A daughter card adds dual FC connections to each server.

Unlike IBM and RLX, the HP blades have active cooling, with four little fans located inside each server's chassis. So if a fan dies, you'll need to pull the server to make repairs, and due to the way the BL20p blades are constructed, it's difficult to replace the fans in the field.

The BL20p G2 blades are monitored and administered with Insight Manager 7 SP2, a fairly nonintuitive set of applications. This version of Insight Manager is slightly more blade-aware than the previous one, and now makes it possible to manage blades according to their physical position within an enclosure. HP preinstalled the Insight Manager server onto a separate non-blade server it shipped to us.

Scripted provisioning of the servers -- such as to push out a software image when a server is hot-swapped -- is handled by HP's ProLiant Essentials Rapid Deployment Pack, a separate application. This functionality should be incorporated into Insight Manager.

Drawing blade conclusions

There are two winners here. RLX is ahead by a nose, largely due to its outstanding management software, and its hardware, although not the best in this test, is very good. The company's innovative LCD management screens, on both the server blades and the enclosure, add to its management capabilities. Choose this system if you're looking for a high-density server-blade system that's extraordinarily easy to administer, whether remotely or by local staff not dedicated to the task.

IBM's hardware is tops for rack density at 84 servers per rack and would be the easiest to service in the field, and its enclosure's built-in KVM and CD-ROM/floppy drives make these servers act like, well, regular servers. This system would be best for companies with large datacenters that are seeking to save space. IBM would also work for small businesses that don't use any formal server-management tools but simply want the space savings a blade system can offer.

Although HP's blade system has the lowest blade density of the three, it does offer hot-swap SCSI drives; and HP has demonstrated that it can expand the blade concept to four-processor systems. If you're an HP shop, and if the primary advantage you seek is improved hardware deployment over a standard 1U server, HP's system is worth closer examination.

There's still room for improvement: blade developers must continue to add performance, manageability, and serviceability to their blades. Down the road, blade servers should become standardized, not only in form factor but also in interconnects, power, expandability, and manageability. But it's too soon for that now.

Just as storage virtualization and SANs changed the way we think about hard drives, server blades have the potential, some day, to alter how we define servers. Will everything eventually be a blade? No, because blades are designed to be disposable; if one breaks, swap it out. That doesn't fit all applications. There will always be a need for servers that contain a great deal of internal storage or that are designed with bulletproof, nonstop, high-availability features that are diametrically opposed to the blade concept. -- InfoWorld (US)
