Fibre kits for a SAN

So we're looking at getting a low-end SAN for a VM host cluster, and we're considering the option of fibre over copper. I've not specced this sort of thing up before. Can someone give me some advice on what to look for?
 
I've not really done fibre for a SAN before, but I do use fibre on the networking side, so you'll need a fibre channel switch, SFPs (single-mode or multi-mode, I'm not sure which you'd need), fibre patch leads, and an HBA card for each host/controller.
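To put that into rough numbers, here's a back-of-the-envelope parts tally (purely illustrative; the port counts and what ships with what are assumptions, so check your actual kit):

```python
# Rough parts tally for a dual-path FC setup. All numbers are assumptions
# for illustration only -- check what your array and HBAs actually ship with.

def fc_parts(hosts, array_controllers=2, ports_per_controller=2, paths_per_host=2):
    host_ports = hosts * paths_per_host              # e.g. one dual-port HBA per host
    array_ports = array_controllers * ports_per_controller
    switch_ports = host_ports + array_ports          # every device port patches into the fabric
    return {
        "HBA ports": host_ports,
        "array FC ports": array_ports,
        "switch ports needed": switch_ports,
        "patch leads": switch_ports,                 # one lead per device-to-switch link
        "switch SFPs": switch_ports,                 # assuming switch ports need optics; HBAs often include theirs
    }

if __name__ == "__main__":
    for item, qty in fc_parts(hosts=4).items():
        print(f"{item}: {qty}")
```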

Sorry for the useless help, but have you tried talking to a reseller about your requirements? They would be able to advise you better; even if you don't buy from them, you'll have an idea of what you need.
 
Doing that as well, but was hoping to harness the hive mind too :)
 
I would think an iSCSI SAN would be more cost-effective?

How many hosts/VMs are you looking at?

GJUK
 
Nunzio has pretty much summed it up.
You may find, depending on the SAN, that the fibre switch is built into it.

On higher-end stuff the fibre switch is a separate unit, which gives you more scope for expansion and allows for resiliency.
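As a very rough illustration of the expansion and resiliency point (the port counts below are made up):

```python
# Hypothetical port budget: a built-in switch module vs a pair of external switches.
# With dual fabrics each host burns one HBA port on each switch, so losing a whole
# switch still leaves every host with a path.

def hosts_per_fabric(switch_ports, array_ports=2):
    # Ports left for hosts once the array controllers have taken theirs.
    return switch_ports - array_ports

print("built-in 8-port module (single fabric):", hosts_per_fabric(8), "hosts, no switch redundancy")
print("two external 24-port switches (dual fabric):", hosts_per_fabric(24), "hosts, survives a switch failure")
```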
 
When you say FC over copper, do you mean "TwinAx" style cable or FCoE? (which can run over 10GBASE-T copper, TwinAx copper or fibre!)

This is fairly fundamental stuff for SAN connectivity choices.

If you want to do this fairly cost-effectively as a "my first SAN" then I'd go for 4Gb FC (fibre), as this is the most established method out there at the moment and you'll be able to get the kit fairly inexpensively.
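For a feel of where 4Gb FC sits against the Ethernet options, here's a rough per-link comparison (nominal line rates and encodings only; real-world throughput will be lower, especially for iSCSI once TCP/IP overhead is added):

```python
# Nominal per-link, per-direction payload rates from line rate x encoding efficiency.
LINKS = {
    "1GbE iSCSI": (1.25,    8 / 10),
    "4Gb FC":     (4.25,    8 / 10),
    "8Gb FC":     (8.50,    8 / 10),
    "10GbE":      (10.3125, 64 / 66),
}

for name, (gbaud, efficiency) in LINKS.items():
    mb_per_s = gbaud * efficiency * 1000 / 8   # payload Gbit/s -> MB/s
    print(f"{name:>11}: ~{mb_per_s:.0f} MB/s")
```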

If I were deploying a new SAN for VMware now I'd be doing it all over converged networking with 10Gbit CNAs, using either FCoE or iSCSI for boot-from-SAN and NFS for the VM datastores. Multi-protocol storage architecture across converged networking infrastructure is the future.
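As a sketch of what "converged" means in practice: one 10Gb CNA port carrying storage and VM traffic, with DCB/ETS guaranteeing each traffic class a minimum share of the link. The percentages here are illustrative assumptions, not a recommended design:

```python
LINK_GBPS = 10

# 802.1Qaz ETS minimum-bandwidth shares per traffic class (illustrative only).
ets_shares = {
    "FCoE / iSCSI boot-from-SAN": 40,
    "NFS datastore traffic":      30,
    "VM and management traffic":  30,
}

for traffic_class, pct in ets_shares.items():
    guaranteed = LINK_GBPS * pct / 100
    print(f"{traffic_class:<28} guaranteed >= {guaranteed:.0f} Gbit/s, can burst to {LINK_GBPS} when the link is idle")
```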
 
I'd go for a Dell EqualLogic system: it's 10Gb iSCSI over fibre and has all the plugins for VMware. Prices start pretty low and it can scale up well. We use one of the low-end ones for our low-intensity test environment with about 20 VMs, and it's very reliable. Plus, in future we can scale it up to 16 shelves with SSD drives if necessary. You'll need a 10Gb switch for the storage though; on all my clusters we have dedicated storage and data networks, and we've not had any problems.
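To give a feel for the scale-up headroom, a quick back-of-the-envelope on group capacity (the drive counts, sizes and RAID overhead below are made-up assumptions):

```python
# Rough usable-capacity sketch for a scale-out iSCSI group of up to 16 members.
def usable_tb(members, drives_per_member=24, drive_tb=0.6, raid_overhead=0.25, spares_per_member=2):
    data_drives = drives_per_member - spares_per_member
    raw_tb = members * data_drives * drive_tb
    return raw_tb * (1 - raid_overhead)      # knock off parity/mirroring overhead

for members in (1, 4, 16):
    print(f"{members:>2} member(s): ~{usable_tb(members):.0f} TB usable")
```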
 
My view remains that iSCSI still randomly sucks a bit too much to be used for critical or high-performance systems. I'm (blowing my own trumpet) a very good network architect and I work with a very good systems team, and there are times we just can't get iSCSI to perform as we want. It's not about bandwidth: we're using 10GigE LAGs and we're still not seeing the disk access numbers we would like. Days spent tweaking with the SAN vendor and it doesn't cut it. Throw in fibre channel and all is well.

That's not to say iSCSI doesn't work just fine sometimes, and in small businesses it's a serious value proposition, but I've seen it fall short too many times, the causes aren't obvious, and it's not cheap hardware we're using. Fibre Channel *always* works, which makes it worth the money in my book.

That said, with about 39 PB of storage deployed worldwide now and some serious IO requirements, we're not what you'd call average...
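To show why "it's not about bandwidth": small random I/O is bound by per-request latency and queue depth rather than link speed. The latencies and queue depth below are hypothetical, just to illustrate the shape of it:

```python
def iops(outstanding_ios, latency_ms):
    # Little's law: throughput = concurrency / response time.
    return outstanding_ios / (latency_ms / 1000)

BLOCK_KB = 8        # typical small random I/O
QUEUE_DEPTH = 32    # outstanding I/Os the host keeps in flight

for name, latency_ms in [("higher-latency path, 1.0 ms", 1.0), ("lower-latency path, 0.5 ms", 0.5)]:
    io_per_s = iops(QUEUE_DEPTH, latency_ms)
    mb_per_s = io_per_s * BLOCK_KB / 1024
    print(f"{name}: ~{io_per_s:,.0f} IOPS, ~{mb_per_s:.0f} MB/s -- nowhere near filling a 10GigE LAG")
```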
 
What scalability do you need? If it's small (i.e. 4 or 8 hosts) you could look at a SAN with SAS connectivity. The HP MSA P2000 G3 and Dell MD3200 (aka IBM DS3500) have 8 x 6Gbps SAS ports, so 4 hosts with redundancy or 8 with one link per host.

There's some useful info on using the MSA P2000 G3 here
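The port arithmetic behind the 4-or-8 hosts figure, for what it's worth (assuming the 8 host ports are split 4 per controller):

```python
ARRAY_HOST_PORTS = 8   # 8 x 6Gbps SAS host ports on the array

def max_hosts(ports_per_host):
    return ARRAY_HOST_PORTS // ports_per_host

print("dual-path hosts (one link to each controller):", max_hosts(2))   # 4 hosts, redundant
print("single-path hosts:                            ", max_hosts(1))   # 8 hosts, no redundancy
```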
 
I agree that if you want balls-out performance then fibre is the way to go, but I think that's out of the OP's budget! FCoE is a bit of a PITA to set up from what I've seen, whereas iSCSI is easier and cheaper.
 
Probably. The suggestion of SAS for small-scale solutions is a good one that hadn't occurred to me. I'd actually say that those SAS solutions make iSCSI a little niche: small solutions will be done more cheaply with SAS, and for anything bigger, well, FC is significantly better for not that much of a price premium.

FCoE is, IMO, entirely pointless. It's only much use combined with converged switch fabrics like the Cisco Nexus or Juniper Q-Fabric. Those are still really specialist products, and most people are better off with separate fabrics for FC and IP right now, even at the high end. That will probably change in the future as they get cheaper and catch on a bit more (though doubtless the Nexus will continue to suck).
 