Product Review: HP blade servers

We looked at two server blade offerings from Hewlett-Packard - the heavy-duty BL p-Class and the lower-end BL e-Class servers.

Let's face it - marketing hype surrounding new product offerings has lost its appeal. Most techies realize their jobs are to solve problems, not be the first to adopt a nifty gizmo into their IT infrastructures. So let's lose the hype and consider the basic benefits that server blade vendors pitch - smaller size, easier management and fewer cables. If these expectations are met and performance and usability don't suffer, server blade systems could be valuable tools in IT data centers. We looked at two server blade offerings from Hewlett-Packard - the heavy-duty BL p-Class and the lower-end BL e-Class servers. The p-Class blade system scored a 4.3, based on its great management and serviceability features, plus strong performance. The e-Class scored a 3.7, but we felt that HP cut some corners with the e-Class features.

In the HP blade system, each blade is a stand-alone server with its own processors, memory and hard drive(s). The blades connect to the chassis backplane for power and network connections (and, for the e-Class, chassis-oriented management).

The p-Class

The p-Class blade server, designed for compute-heavy applications such as database or dynamic Web applications, consists of a 6U-high (9U if you count the power supply unit) chassis that holds a maximum of eight two-processor blades.

HP also has a four-processor blade option that takes up two of the eight slots, for a maximum of 20 processors. Ours shipped with two dual-processor blades. The G1 version had two 1.4-GHz Pentium III processors and ran Windows 2000; the G2 had two 2.8-GHz Xeon processors and ran Linux.

On performance, the p-Class scored well (4 out of 5). The number of compute-heavy Secure Sockets Layer (SSL) and file I/O-heavy non-SSL Web transactions was in line with what we expected from the amount of computing power in the blades. Our file and computational performance tests showed good scalability between processor types.

The larger file-size tests showed that both blade types hit an 89M bit/sec network performance ceiling, even though their network interface cards (NIC) are configured for 100M bit/sec full-duplex operation. This limit is expected, considering the I/O and TCP processing each blade must service.
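That ceiling is consistent with protocol overhead alone. As a rough sketch (our own back-of-the-envelope arithmetic, not HP's figures), the best-case TCP payload rate on full-size Ethernet frames works out as follows:

```python
# Back-of-the-envelope TCP goodput ceiling on a 100M bit/sec Ethernet link.
# A full-size frame carries a 1,500-byte IP packet; 20 bytes of IP header
# and 20 bytes of TCP header leave 1,460 bytes of payload. On the wire,
# each frame also costs 14 bytes of Ethernet header, a 4-byte FCS, an
# 8-byte preamble and a 12-byte inter-frame gap.
LINK_MBPS = 100
MTU = 1500
IP_TCP_HEADERS = 20 + 20
WIRE_OVERHEAD = 14 + 4 + 8 + 12  # Ethernet header + FCS + preamble + gap

payload = MTU - IP_TCP_HEADERS   # 1,460 payload bytes per frame
on_wire = MTU + WIRE_OVERHEAD    # 1,538 bytes consumed on the wire

goodput = LINK_MBPS * payload / on_wire
print(f"Theoretical ceiling: {goodput:.1f}M bit/sec")  # about 94.9
```

ACK traffic, smaller frames and host processing push real-world numbers below that theoretical ceiling, so an observed 89M bit/sec is within the expected range.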

The p-Class earned a high mark for features and flexibility, offering the features needed for most enterprise applications. It also earned raves for its hot-swappable hard drives (something the e-Class lacks), which give users flexibility in choosing drive speed and size, along with the choice of one or two drives per blade. Another benefit is the ability to connect a p-Class to a storage-area network.

The G2 blades have three 10/100/1000M bit/sec Ethernet NICs, while the G1s have three 10/100 NICs. One power supply unit can feed up to three blade enclosures. It draws from two 208-volt AC circuits, which it converts into two 48-volt DC power feeds. The p-Class blade enclosure can run off a single 48-volt DC feed; the second feed is for redundancy. Each 48-volt DC feed has three hot-swappable power supply modules in a redundant, load-balancing configuration.

For management, each blade has its own set of HP's Integrated Lights-Out (ILO) hardware. ILO provides access to the blade even when the blade is powered down. The enclosure has two switches in a redundant configuration.

The p-Class was very easy to manage, requiring fewer tools than the e-Class. The tools were easy to use and worked as expected. However, they lacked a unified user interface, which caused some confusion about which tool to use for which function.

The jewel of the management tools for both server blade systems was HP's Rapid Deployment Pack. This software lets an administrator do a fresh installation of Windows 2000 or Linux, save and reload an operating-system partition image, or make configuration changes to multiple blades at once. The software was very simple to use, making easy work of deploying new blades or replacing failed ones. HP is expanding the Rapid Deployment Pack to all its server lines. The software is priced per server and is included with the e-Class and p-Class enclosures.

Its hot-swappable hard drives gave the p-Class a boost in its serviceability score. All blade components (on both systems) were easy to remove and replace. Both enclosure assemblies were well laid out, with easy access to components such as fans and power supplies.

The e-Class

The e-Class is a 3U chassis that can house a maximum of 20 single-processor blades. Our e-Class was shipped with four single-processor blades. Two of the blades (G1 version) had 800-MHz processors; the other two (G2 version) had 900-MHz processors. Windows 2000 and Linux were loaded on both blade versions. The e-Class blade is designed for front-end Web services and single-purpose, light-load services, such as DNS and Dynamic Host Configuration Protocol (DHCP).

Features of the e-Class include two built-in redundant, hot-swappable, load-balancing 120-volt AC power supplies. The chassis has a centralized management chipset instead of each blade having its own management hardware. Our e-Class chassis was shipped with two Gigabit Ethernet switches built into the enclosure. These switches aggregate the 10/100 Ethernet connections from the chassis blades to one Gigabit Ethernet connection from each switch to the network. All Ethernet interconnections are internal to the enclosure. The switches have cross-connect ports to provide redundant connections to the network from any blade. This redundancy feature worked fine. If a customer doesn't want a Gigabit Ethernet switch, HP offers an RJ-45 and an RJ-21 patch panel option for connecting individual blades to a network.

The hot-swappable e-Class blades have one Ultra-Low Voltage Pentium III processor. The 800-MHz blade has one fixed 40G-byte Ultra ATA/100 4,200-RPM hard drive. The 900-MHz model has a faster 5,400-RPM drive. Other than the processor and hard drive, the two blade models are identical. The blades come with two on-board 10/100 Ethernet NICs.

For management, the e-Class includes an additional tool, Integrated Administrator, which handles remote access to an e-Class blade's operating system. Insight Manager is used to monitor both e-Class and p-Class blades; it can detect and report hardware and operational faults. The Integrated Administrator application was easy to use, but it initially was unclear whether a task called for Integrated Administrator or Insight Manager. Insight Manager monitors both systems' hardware, including blades, switches and supporting services (such as Remote Desktop Protocol, DHCP and DNS servers). It also has a discovery feature that finds all components on the network.

Keyboard, video and mouse (KVM) functions of both blade systems are handled through remote access to the server. The e-Class blade uses a large dongle that plugs into the front of the blade for KVM connections. The limitation is that the dongle can plug into only one blade at a time and must be physically moved from blade to blade. Both blade systems can be powered on and off remotely through the management tools or from the front panel of the blade.

Conclusion

Both servers offered impressive performance and management features in well-designed enclosures. The big question is: How will a blade server help you? If rack space is hard to come by and you have lots of small, single-application servers, blade servers are worth considering for your company. Otherwise, the need for one might be questionable. But the future could be promising for blade servers if other network and data services can be integrated into the blade enclosures to offload work from the blades.

Bass, a senior technical staff member at North Carolina State University's Centennial Networking Labs (CNL) in Raleigh, N.C., and co-author of McGraw Hill's Building Cisco Multilayer Switched Networks, designs and leads the execution of the test suites. He can be reached at john_bass@ncsu.edu. Sangram Kadam, Piyush Raju and Aditya Shringarpure assisted with testing. Server testing is performed at CNL, which tests networking equipment and network-attached devices for interoperability and performance. -- Network World (US)