iSCSI using HBAs or standard Gbit NICs?

Hi

I'm wondering if there's any real benefit to purchasing dedicated Gbit HBAs for iSCSI connections, or whether to just use the free onboard/riser Gbit NICs that come with the server?

Cheers
 
I tested it a couple of years ago - basically there's maybe a tiny benefit if the server is running flat out at 100% CPU all the time. Otherwise forget the HBA and use a software initiator. And if it is that busy, forget iSCSI and get Fibre Channel.
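For what it's worth, the software initiator route on a Linux host is just open-iscsi. A minimal sketch of discovery and login is below - the portal address is a made-up placeholder, and picking the first discovered target is purely for illustration:

```python
# Minimal sketch of logging into an iSCSI target with the open-iscsi
# software initiator on Linux. The portal address is a placeholder,
# and logging into the first discovered target is just for illustration.
import subprocess

PORTAL = "192.168.0.10:3260"  # hypothetical SAN portal address

def discover_targets(portal: str) -> list[str]:
    """Run sendtargets discovery and return the IQNs the portal advertises."""
    out = subprocess.run(
        ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal],
        capture_output=True, text=True, check=True,
    ).stdout
    # Each line looks like: "192.168.0.10:3260,1 iqn.2001-05.com.example:storage"
    return [line.split()[1] for line in out.splitlines() if line.strip()]

def login(portal: str, iqn: str) -> None:
    """Log the software initiator into one target."""
    subprocess.run(
        ["iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--login"],
        check=True,
    )

if __name__ == "__main__":
    targets = discover_targets(PORTAL)
    print("Discovered:", targets)
    if targets:
        login(PORTAL, targets[0])  # the new LUN then shows up as /dev/sdX
```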
 
bigredshark is spot on, apart from the last bit - just because your CPU is maxed out doesn't mean you need FC; you need more CPU capacity or an HBA to alleviate it. In ESX the overhead of the software initiator is about 2-4%, so negligible really.

A hardware HBA gives you the benefit of better multipathing and the ability to boot from the SAN
 
I've been pondering using iSCSI at work for a while, using the VMware software initiator. Does it scale OK beyond 1Gb? I was thinking of teaming 2-4 network adapters as we have some fairly intensive I/O (but sadly not the budget for FC).

akakjs
 
ESX 3 didn't allow you to multipath iSCSI; not sure if this has changed with vSphere 4.

What sort of throughput do you need? I think when I was benchmarking it you could get around 100-200 megabytes/second using jumbo frames.
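If anyone wants to sanity-check their own numbers, a rough sequential-read test is easy enough. This is just a sketch - /dev/sdb is a placeholder for your own LUN, and page-cache effects will inflate the result, so treat it as indicative only:

```python
# Rough sequential-read throughput test against an iSCSI-backed block
# device. /dev/sdb is a placeholder; point it at your own LUN (read-only!).
# Page-cache effects will inflate results - treat the numbers as indicative.
import time

DEVICE = "/dev/sdb"       # hypothetical iSCSI LUN
BLOCK_SIZE = 1024 * 1024  # 1 MiB reads
TOTAL = 512 * BLOCK_SIZE  # read 512 MiB in total

def sequential_read_mbps(path: str) -> float:
    read = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as dev:
        while read < TOTAL:
            chunk = dev.read(BLOCK_SIZE)
            if not chunk:  # hit end of device
                break
            read += len(chunk)
    elapsed = time.perf_counter() - start
    return (read / (1024 * 1024)) / elapsed

if __name__ == "__main__":
    print(f"~{sequential_read_mbps(DEVICE):.0f} MB/s sequential read")
```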
 
ESX4 has pretty poor multipathing on iSCSI from what we've tested at work - it doesn't always seem to do much. It does work, just not amazingly!

ESX is also a bit iffy with some SANs too.
 
The latest Intel Gigabit ET NICs have some support for the newer virtual I/O features, can offload work from the CPU, and looking online are cheaper to buy than the older Pro/1000 cards.

They aren't dedicated iSCSI HBAs, but probably not a bad compromise.
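If you want to check which offloads a given card actually supports on a Linux box, ethtool will list them. A quick sketch (eth0 is just a placeholder interface name):

```python
# List which offload features a NIC reports via "ethtool -k" on Linux.
# "eth0" below is a placeholder; substitute your own interface name.
import subprocess

def offload_features(iface: str) -> dict[str, str]:
    out = subprocess.run(
        ["ethtool", "-k", iface],
        capture_output=True, text=True, check=True,
    ).stdout
    feats = {}
    for line in out.splitlines():
        # Skip the "Features for eth0:" header; keep "name: state" pairs.
        if ":" in line and not line.endswith(":"):
            name, state = line.split(":", 1)
            feats[name.strip()] = state.strip()
    return feats

if __name__ == "__main__":
    for name, state in offload_features("eth0").items():
        print(f"{name}: {state}")
```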
 
You can't multipath for increased throughput, but you can assign multiple NICs to the vSwitch for resilience - works fine for our needs as we're nowhere near using 1Gbit of bandwidth.
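To illustrate why teaming doesn't speed up a single iSCSI session: typical NIC-teaming policies hash each flow onto one uplink, so one initiator-target pair always rides a single 1Gbit link. A toy sketch of the idea - the hash below is a simplification, not VMware's actual IP-hash policy:

```python
# Toy illustration of why NIC teaming doesn't speed up one iSCSI session:
# a per-flow hash pins each source/destination pair to a single uplink,
# so a lone initiator-target flow never uses more than one link's bandwidth.
# The hash here is a simplification, not VMware's actual IP-hash algorithm.
import zlib

UPLINKS = ["vmnic0", "vmnic1", "vmnic2", "vmnic3"]  # a hypothetical 4-NIC team

def uplink_for_flow(src_ip: str, dst_ip: str) -> str:
    key = f"{src_ip}->{dst_ip}".encode()
    return UPLINKS[zlib.crc32(key) % len(UPLINKS)]

if __name__ == "__main__":
    # One host talking to one SAN portal: every packet takes the same uplink.
    print(uplink_for_flow("192.168.0.21", "192.168.0.10"))
    print(uplink_for_flow("192.168.0.21", "192.168.0.10"))  # same link again
    # Different hosts (or different portals) can land on different uplinks,
    # which is where teaming's aggregate benefit and resilience come from.
    print(uplink_for_flow("192.168.0.22", "192.168.0.10"))
```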
 