Server Blades/Enclosures

Hi, just trying to understand more about blades and enclosures. As far as I understand it, you have enclosures which provide power and cooling to the blades, and the blades slide into the enclosure and connect via the midplane. Hardware-wise, the blades themselves have a motherboard/CPU/memory/SAS or SATA hard disks/optical drive? Do they have any other hardware, e.g. a NIC or RAID controller?

What confuses me is how networking works on blades/enclosures. Is there an RJ45 port on the back of the enclosure for each NIC on each blade, which a Cat5/6 cable then connects to the central switch?

I have also heard terms like Virtual Connect / Flex-10 mentioned. What are these?

I'm confused!!
 
I only know about IBM BladeCenters but they are probably similar to others.

Each blade has a motherboard, CPU, RAM, networking, hard drives, etc. The connectors on the back are non-standard; they carry all the signals on and off the blade, through the midplane.

On an IBM BladeCenter H you insert switches from the reverse of the midplane, which then provide all the network connectivity via the various "fabrics" on the midplane. I.e. if you put a 1G switch into I/O bay 1, then the blade's network card that corresponds to that I/O fabric gets a connection. These switches all tend to have external ports (with standard RJ45, SFP+ or XFP connectors) for external connectivity.
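
To make the "fabric" idea concrete, here's a minimal sketch in Python. The bay numbers and port names are invented purely for illustration; the real mapping depends on the chassis and the cards fitted.

```python
# Toy model of the BladeCenter "fabric" idea: each port on a
# blade belongs to a fabric, and each I/O bay serves one fabric.
# Bay numbers and port names here are invented for illustration.
FABRIC_OF_IO_BAY = {1: "onboard eth0", 2: "onboard eth1",
                    3: "mezz port 1", 4: "mezz port 2"}

def connected_ports(bays_with_switches):
    """Which blade ports get a link, given which I/O bays
    actually have a switch module installed."""
    return [FABRIC_OF_IO_BAY[b] for b in sorted(bays_with_switches)
            if b in FABRIC_OF_IO_BAY]

# With switches only in bays 1 and 2, only the onboard NICs link up.
print(connected_ports({1, 2}))  # ['onboard eth0', 'onboard eth1']
```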
 
In the case of HP blades, they have NICs built onto the system board, and extras can be added using PCIe mezzanine cards.

They patch through to the back either via a pass-through module (straight through, one external port per blade NIC) or via Virtual Connect modules, which give you control over the way the connections all work. Flex-10 is the 10GbE flavour of Virtual Connect: each 10Gb blade port can be carved up into as many as four "FlexNICs", each with an adjustable slice of the bandwidth.
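
Roughly, Virtual Connect sits between the blade ports and the uplinks, and you define a "server profile" per bay that decides which networks (and even which MAC addresses) each blade port presents. A hand-wavy Python sketch of the idea follows; this is not HP's actual API, and all the names and values are invented:

```python
# Toy model of the Virtual Connect idea: the enclosure, not the
# blade, owns the network identity. Moving a profile to another
# bay moves that identity with it. All names/values invented.
profile = {
    "name": "esx-host-01",
    "ports": [
        {"port": "LOM:1", "network": "Prod-VLAN10", "mac": "02:00:00:00:00:01"},
        {"port": "LOM:2", "network": "Prod-VLAN20", "mac": "02:00:00:00:00:02"},
    ],
}

def apply_profile(bay, profile):
    # In reality the Virtual Connect manager programs this into
    # the interconnect modules; here we just describe it.
    for p in profile["ports"]:
        print(f"Bay {bay} {p['port']} -> {p['network']} as {p['mac']}")

apply_profile(3, profile)
```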
 
http://vido.com.ua/upload/uploaded_images/28500/28757img_7677.jpg

This is the rear of a Gen8 HP blade. See that big rectangular connector dead centre with lots of little holes? That's where everything on the blade connects to the midplane of the chassis. The bigger holes directly to either side are for power.

http://vido.com.ua/upload/uploaded_images/28500/28758img_7679.jpg

This is the front. The only things you can hot-replace are the two hard drives (this particular photo has blanks instead of actual drives). For anything else, you have to pull the blade out of the chassis (instantly powering it off).

http://vido.com.ua/upload/uploaded_images/28500/28754img_7674.jpg

This is the inside of the blade; the front of the blade is on the left, the rear on the right. See on the right side (the rear), there are two dark-grey rectangles with lots of pins? Those are for plugging in expansion cards (called mezzanine cards). You can have two cards, one with two ports and one with four ports, and the motherboard has two ports on board. So there are a total of eight ports, which map to eight interconnect bays at the rear of the chassis, like this:

1 2
3 4
5 6
7 8

http://h30499.www3.hp.com/t5/image/...9D160DAB86/image-size/original?v=mpbl-1&px=-1

This enclosure was damaged in a tornado, but it's a nice big picture to give an idea of what's at the rear. From top to bottom:

5 Fans
2 Gigabit switches (Bays 1 and 2)
2 Fibre Channel switches (Bays 3 and 4)
2 Gigabit switches (Bays 5 and 6)
2 Gigabit switches (Bays 7 and 8)
2 Onboard Administrators (the thing that manages the enclosure, one active and one redundant)
5 Fans
6 Power Supply connectors (the actual power supplies are at the front).

So...

The onboard card of each blade maps to Bays 1 and 2, and is always network, but can be e.g. GigE or 10G.
Mezzanine slot 1 has two ports and maps to Bays 3 and 4, and can be anything you want, but by convention it is usually reserved for Fibre Channel.
Mezzanine slot 2 has up to four ports and maps to Bays 5-8, and can be anything you want, but again would generally be network (GigE, 10G, InfiniBand, etc.). The sketch below spells the whole mapping out.
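
Expressed as a little Python lookup (purely illustrative, just encoding the convention described above):

```python
# HP c-Class port-to-interconnect-bay convention as described
# above, expressed as a lookup table. Purely illustrative.
PORT_TO_BAY = {
    ("onboard", 1): 1, ("onboard", 2): 2,  # always network
    ("mezz1", 1): 3, ("mezz1", 2): 4,      # usually Fibre Channel
    ("mezz2", 1): 5, ("mezz2", 2): 6,      # network, IB, etc.
    ("mezz2", 3): 7, ("mezz2", 4): 8,
}

for (slot, port), bay in sorted(PORT_TO_BAY.items(), key=lambda kv: kv[1]):
    print(f"{slot} port {port} -> interconnect bay {bay}")
```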

I hope that explains things a bit better.

Some of the benefits of blades: simplicity of cabling (very few cables coming out of the back of the cabinet, compared to the many dozens you would have for the same number of rack-mount servers), reduced power and cooling (because both are shared across numerous blades, it is a lot more efficient), ease of management, and ease of provisioning (once an enclosure is racked and cabled, it is dead easy to slide a blade into an empty slot).
 