SAN question

Hi all,

I'm a newbie when it comes to SANs, but what I'm trying to get my head around is the bandwidth that runs between the SAN and the server to which it's attached.

If we were to have an iSCSI SAN with 2 x 1Gb/s Ethernet leads going from the SAN to the server, I'll get a maximum theoretical bandwidth between the two of 2Gb/s = 250MB/s.

Now I've read that SAS drives give a throughput of 300MB/s, so what I can't understand is this: if one of the many SAS drives was being hammered at 100%, surely the link between the SAN and the server won't be fast enough to realise the disk's max throughput?
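Just to put the raw numbers side by side, here's a throwaway Python sketch of the theoretical line rates (no TCP/IP or iSCSI protocol overhead accounted for, so real-world figures will be lower still):

[code]
# Theoretical line rates only: protocol and encoding overhead are ignored.
def gbit_to_mbyte(gbits_per_sec):
    """Convert a line rate in Gb/s to MB/s (1 Gb/s = 1000 Mb/s = 125 MB/s)."""
    return gbits_per_sec * 1000 / 8

for name, gbps in [("1 x 1GbE", 1), ("2 x 1GbE", 2), ("10GbE", 10)]:
    print(f"{name}: {gbit_to_mbyte(gbps):.0f} MB/s vs the 300 MB/s SAS figure")
[/code]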

Can someone advise please?
 
Basically, you'll never get 300MB/s from a single drive, but even if you could, it just underscores that if you want performance you don't use Ethernet, you use Fibre Channel for your SAN...
 
See, that's what I've always just automatically assumed, on the basis that Fibre @ 8Gb/s > any Ethernet speed that is affordable... I'm getting some quotes at the moment to see just how much I'd save with iSCSI over FC.

I still can't quite satisfy my own mind though, because if there are for example 12 hard discs, the maximum bandwidth from them is surely 12 x 300MB/s, which is far more than can be handled by even FC?
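The same arithmetic for the 12-disc case, again pure interface maths rather than anything the drives could actually sustain:

[code]
# Aggregate SAS interface bandwidth of 12 spindles vs. the front-end link.
# Illustrative only: encoding/protocol overhead is ignored and no single
# drive sustains 300 MB/s in practice.
disks = 12
per_disk_mb_s = 300                      # the SAS interface figure quoted above
aggregate_mb_s = disks * per_disk_mb_s   # 3600 MB/s

for name, gbps in [("2 x 1GbE iSCSI", 2), ("4Gb FC", 4), ("8Gb FC", 8), ("10GbE iSCSI", 10)]:
    link_mb_s = gbps * 1000 / 8
    print(f"{name}: {link_mb_s:.0f} MB/s, so {aggregate_mb_s / link_mb_s:.1f}x oversubscribed")
[/code]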
 
Anything is possible if you have the money. Data storage normally runs into large sums of it.

I wouldn't get too hung up on sequential reads and writes, as most of the data that's read and written to the SAN will be random. There are a number of factors to take into account, like the RAID config, FC vs iSCSI, filer make and software etc....

You can get 10Gb over iSCSI now. However, two trunked Ethernet connections running iSCSI would be able to deliver over 300MB/s with ease.
 
You also need to remember that enterprise SANs are rarely about throughput, IOPS is the limitation usually, not raw throughput.

Another point is that 12 disks x 300MB/s (you'd be lucky to get 200MB/s sustained really) is also far more data than any single system could realistically process in real time.

I've run big Exchange installations (1,000+ users) on 2 x 1Gbit iSCSI and throughput was never a problem (IOPS was, even with 120-odd SAS disks).

Fibre is the way to go for really high end installations (not least because FC is designed for storage traffic and ethernet really isn't) but most people don't *need* it.
 
We recently went through a virtualization process which included the purchase of a SAN.

After some back and forth with the vendors we were able to get a fibre EVA4400 from HP cheaper than we could get an iSCSI EqualLogic from Dell.

Performance has been fantastic.
 
On the subject of IOPS, on average you can expect around 160 IOPS per 15k disk and 140 IOPS per 10k disk. The vendors also normally supply their equipment with cached IOPS ratings as well, but in the real world these are pretty much useless. The RAID level can also have a big impact on the IOPS; for example, RAID 5 would effectively halve the IOPS and RAID 6 has even more of an impact.
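As a rough sketch of that RAID effect, here's the usual write-penalty rule of thumb in Python; the penalties (2 for RAID 10, 4 for RAID 5, 6 for RAID 6) and the 70/30 read/write mix are illustrative assumptions, not figures from any particular array:

[code]
# Rough host-visible IOPS for a disk group using the standard write-penalty
# rule of thumb. Real arrays cache and coalesce writes, so treat as a sketch.
WRITE_PENALTY = {"RAID10": 2, "RAID5": 4, "RAID6": 6}

def effective_iops(disks, iops_per_disk, raid_level, read_fraction=0.7):
    """Host-visible IOPS for a given spindle count, RAID level and read/write mix."""
    raw = disks * iops_per_disk
    write_fraction = 1.0 - read_fraction
    # Each host write costs WRITE_PENALTY back-end I/Os; each read costs one.
    return raw / (read_fraction + write_fraction * WRITE_PENALTY[raid_level])

for level in ("RAID10", "RAID5", "RAID6"):
    print(level, round(effective_iops(12, 160, level)))   # 12 x 15k disks
[/code]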

Then you have to think about response times as well. If you're going to be using LUNs for Exchange, for example, you're going to need a sub-20ms response time, otherwise Exchange gets a little bit upset, and I have seen that happen :(

Andy
 
On the subject of IOPS, on average you can expect around 160 IOPS per 15k disk and 140 IOPS per 10k disk.

Your numbers don't sound right; there should be roughly a 50% increase in IOPS from 10k to 15k. Max/average IOPS is basically linearly proportional to the RPM of the disk.

Numbers I've seen are:-

15K 3.5" SAS ~150 IOPS
10K 3.5" SAS ~105 IOPS
7.2K 3.5" SATA ~70 IOPS
10K 2.5" SAS ~120 IOPS
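Those figures are in the same ballpark as a simple service-time estimate of average seek plus half a rotation; the seek times below are assumed typical values, purely for illustration:

[code]
# Back-of-the-envelope random IOPS for a single disk: one I/O per
# (average seek + half a revolution). Seek times are assumed, not measured.
def disk_iops(rpm, avg_seek_ms):
    rotational_latency_ms = (60_000 / rpm) / 2   # half a revolution, in ms
    return 1000 / (avg_seek_ms + rotational_latency_ms)

for label, rpm, seek_ms in [("15K SAS", 15_000, 4.0),
                            ("10K SAS", 10_000, 5.5),
                            ("7.2K SATA", 7_200, 9.0)]:
    print(f"{label}: ~{disk_iops(rpm, seek_ms):.0f} IOPS")
[/code]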
 
Really, there aren't going to be too many benefits to FC over iSCSI. At work we have tried both and never got any real differences. iSCSI just makes life easier because of cabling etc. It's probably personal opinion though! 10Gbit iSCSI will kick FC though :P

You can get a read of 300MB/s on a 4-disk SAS RAID 10. Not sure how this would work in the real world though; I have hit the 300MB/s but not tested for long periods of time.
You want more spindles; more spindles give better speeds. You could even use SATA disks if you can get enough of them :)

Depends what you need it for.

:)
 
That might be true for small business, but it's just not true in the enterprise world; FC has proven time and time again to be more reliable. When we moved our overnight replication jobs from iSCSI to Fibre Channel (after 3 months of campaigning) we got an order of magnitude fewer failures straight off. We were seeing maybe 40-50 jobs (out of maybe 1,100) generate errors on an average day; these days it's unusual to see more than 10, and no errors at all isn't unheard of. The fact is that until Data Center Ethernet turns up, Ethernet networks just aren't designed to handle storage traffic, whereas FC was designed from the ground up for it.

I fully agree it's likely to be irrelevant for a small business but that doesn't mean there's no difference, enterprises don't use FC for new installations because they enjoy spending money!
 
bigredshark hit the nail on the head; how can you compare FC with iSCSI and say that FC doesn't perform any better?
iSCSI is just cheap and ideal for small/medium-sized businesses, but for enterprise class I would not go iSCSI for a central SAN. Where I work it's FC all the way, even on new installations; it might be costly but you benefit in the long run.
 
I see thanks.

A few more if I may: if I was to get dual controllers, can one controller only see one set of discs and the other controller see another set, or can dual controllers both see all available discs?
 
Depends on how you set it up and on the requirements. You can have one controller 'see' one chunk of disks/storage, or you can have both controllers see all the disks. I have also managed setups where the disks are split across the controllers and each controller has its own disks.

We have our NetApps set up so that both controllers can see all the disks, so if the primary controller fails the secondary kicks in.
 
FC isn't much better than iSCSI when you're talking SAS or SATA drives. It does come into its own on the higher-end HP and IBM kit where you start putting native FC disks in; you then in theory get a 4 or 8Gb/s fibre pathway from the server HBA to the disks themselves. But this is only needed where latency is a major must, and obviously costs skyrocket. For 300GB 4Gb FC IBM-branded disks you can pay upwards of £1,000 per disk. But FC is losing its place in the less intense enterprise SAN as 10GbE is now getting quite affordable, and latency is getting very low on iSCSI SANs as well as the hardware and firmware mature.
 
Thanks for that, that's interesting. Did you find a solution to that?

More disks! I think we ended up with 18 disk enclosures for Exchange, 12 disks each for 216 disks total; some of those were for log volumes, but it's still a lot of disks and IOPS.

The main problem with that company was its love of BlackBerrys, and BES consumes IOPS: a heavy Outlook user will need about 1 IOPS, while a BlackBerry could need 3 IOPS, so suddenly your solution needs to do the equivalent of supporting triple the number of users.
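As a very rough sizing sketch along those lines (the 160 IOPS per 15k disk comes from earlier in the thread, the 70% utilisation ceiling and the user counts are made up for illustration, and RAID write penalties are ignored entirely):

[code]
# Rough spindle count from per-user IOPS figures (1 IOPS per heavy Outlook
# user, 3 IOPS per BES-attached BlackBerry, as quoted above). Ignores RAID
# write penalty, caching and log volumes, so it's a lower bound at best.
def required_spindles(outlook_users, bes_users, iops_per_disk=160, headroom=0.7):
    workload_iops = outlook_users * 1 + bes_users * 3
    return workload_iops / (iops_per_disk * headroom)

print(round(required_spindles(outlook_users=1000, bes_users=300)))   # ~17 spindles
[/code]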
 
FC isn't much better than iSCSI when you're talking SAS or SATA drives. It does come into its own on the higher-end HP and IBM kit where you start putting native FC disks in; you then in theory get a 4 or 8Gb/s fibre pathway from the server HBA to the disks themselves. But this is only needed where latency is a major must, and obviously costs skyrocket. For 300GB 4Gb FC IBM-branded disks you can pay upwards of £1,000 per disk. But FC is losing its place in the less intense enterprise SAN as 10GbE is now getting quite affordable, and latency is getting very low on iSCSI SANs as well as the hardware and firmware mature.

I strongly disagree with this; you're thinking about it in too limited a way. Bear in mind that Ethernet will basically drop packets whenever it feels the need, and that's a problem for storage traffic; FC will only ever drop traffic as a last resort. If you're running it beyond a few local racks that's an issue (I appreciate that's an enterprise-scale concern, but it's still relevant).
 
This is where SSDs are coming onto the scene. Although their capacity isn't as great, lots of the newer SSDs are doing massively more IOPS per disk than mechanical drives. I don't think they're a proven tech yet, but where you need IOPS over throughput and capacity they might take off!
 