I'm wondering if there's any real benefit to purchasing dedicated Gbit HBAs for iSCSI connections, or whether to just use the free onboard/riser Gbit NICs that come with the server?
I tested it a couple of years ago - basically there's maybe a tiny benefit if the server is running flat out at 100% CPU all the time. Otherwise forget the HBA and use a software initiator. And if it is that busy, forget iSCSI and get Fibre Channel.
bigredshark is spot on, apart from the last bit - just because your CPU is maxed out doesn't mean you need FC; you need more CPU capacity or an HBA to alleviate it. In ESX the overhead of the software initiator is about 2-4%, so negligible really.
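Just to put that 2-4% figure in context, here's a rough back-of-the-envelope in Python. Every number in it (cycles per packet, CPU speed, core count) is an assumption for illustration, not a measurement:

```python
# Rough back-of-the-envelope: CPU cost of a software iSCSI initiator at line rate.
# All figures below are illustrative assumptions, not measurements.

LINK_BPS = 1_000_000_000          # 1 Gbit/s link running flat out
MTU_BYTES = 1500                  # standard Ethernet frames (no jumbo frames)
CYCLES_PER_PACKET = 2_000         # assumed per-packet iSCSI/TCP processing cost
CPU_HZ = 2 * 2_600_000_000        # assumed 2 cores' worth of a 2.6 GHz CPU

packets_per_sec = LINK_BPS / 8 / MTU_BYTES
cycles_needed = packets_per_sec * CYCLES_PER_PACKET
overhead = cycles_needed / CPU_HZ

print(f"{packets_per_sec:,.0f} packets/s -> roughly {overhead:.1%} of available CPU")
```

With those assumed numbers it works out to around 3% of the host CPU even with the link saturated, which is why the overhead only really matters on a box that is already maxed out. Jumbo frames or NIC offloads would push it lower still.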
A hardware HBA gives you the benefit of better multipathing and the ability to boot from the SAN
I've been pondering using iSCSI at work for a while, with the VMware software initiator. Does it scale OK beyond 1Gb? I was thinking of teaming 2-4 network adapters as we have some fairly intensive I/O (but sadly not the budget for FC)
The latest Intel Gigabit ET NICs have some capability for working with the newer virtual I/O features, offload work from the CPU, and looking online are cheaper to buy than the older Pro/1000 cards.
They aren't dedicated iSCSI HBAs, but they're probably not a bad compromise.
You can't multipath for increased throughput, but you can assign multiple NICs to the vSwitch for resilience - works fine for our needs as we're nowhere near using 1Gbit of bandwidth.
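To illustrate why teaming NICs doesn't raise the ceiling for a single session: the vSwitch teaming policies choose one uplink per flow (or per port), so one initiator-to-target session always rides a single 1Gbit link. A minimal sketch of the idea, with hypothetical NIC names and a stand-in hash (not ESX's actual algorithm):

```python
# Minimal sketch of "route based on IP hash" style NIC teaming, to show why a
# single iSCSI session sticks to one uplink and never exceeds one link's speed.
import ipaddress

UPLINKS = ["vmnic2", "vmnic3", "vmnic4", "vmnic5"]   # hypothetical teamed NICs


def pick_uplink(src_ip: str, dst_ip: str) -> str:
    """Pin a source/destination IP pair to one uplink, IP-hash style."""
    key = int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
    return UPLINKS[key % len(UPLINKS)]


# One initiator talking to one target: every frame takes the same uplink,
# so that session is capped at a single link's bandwidth.
print(pick_uplink("10.0.0.10", "10.0.0.50"))
print(pick_uplink("10.0.0.10", "10.0.0.50"))

# A second target IP may hash to a different uplink - the extra NICs help
# aggregate throughput across sessions, not throughput within one session.
print(pick_uplink("10.0.0.10", "10.0.0.51"))
```

So teaming 2-4 adapters buys you resilience and more headroom across multiple VMs or targets, but any one datastore session still tops out at 1Gbit.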