Using PCI Express and Non-transparent Bridging in Blade Servers
Blade servers are rapidly growing in popularity because of the large increases in packaging density and the cost reductions this technology makes possible. As blade servers transition from 1 and 2 Gbit/sec backplane technologies to the 10 Gbit/sec level, a race is on to determine which serial fabric protocol will be chosen for this system interconnect task. Although not the incumbent, PCI Express has the advantage of being PCI compatible and a native interconnect technology on the latest server chip sets. Non-transparent bridging gives PCI Express the ability to function in multi-host systems, making it potentially the lowest-cost and highest-performance blade server backplane alternative.
A blade server system is a relatively homogeneous collection of processor and I/O modules that may extend from a single shelf to multiple cabinets. One of the challenges in blade systems is to remove the network and storage I/O from the processor modules, for economy and increased packaging density, without requiring a complete revamping of the I/O software and device infrastructure or an I/O blade dedicated to each processor blade. This article shows how PCI Express combined with non-transparent bridging can provide the needed connectivity, and then outlines an approach to dealing with the associated software issues. It also shows fabric topologies that support typical blade server backplanes and are implementable with first-generation PCI Express switches.