SAN decision help!!

So he is; I completely misread the convo. Ignore that then.

On topic: 80MB/s seems a bit slow for a 12TB SAN; even on 7,200rpm drives in RAID5 I'd expect 120+MB/s.
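
Rough numbers, purely as a sanity check. The per-disk rate and spindle count below are my guesses for a ~12TB RAID5 set, not your actual config:

[CODE]
# Back-of-envelope sequential throughput, illustrative only.
# Assumes ~100 MB/s streaming per 7.2k SATA spindle and a 7 x 2TB
# RAID5 set (6 data disks = 12TB usable). Real arrays vary a lot.

PER_DISK_MBPS = 100        # rough sequential rate of one 7.2k SATA drive
DISKS = 7                  # assumed drive count for a 12TB RAID5 set
DATA_DISKS = DISKS - 1     # RAID5 loses one disk's worth to parity

stripe_estimate = DATA_DISKS * PER_DISK_MBPS
GIGABIT_LIMIT = 125        # 1Gbps is ~125 MB/s before protocol overhead

print(f"Rough aggregate stripe rate: ~{stripe_estimate} MB/s")
print(f"Single GbE link ceiling:     ~{GIGABIT_LIMIT} MB/s")
print("So the spindles shouldn't be what holds a gigabit SAN to 80MB/s.")
[/CODE]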

Are you using 10GbE for the switch fabric, or have you got jumbo frames enabled on gigabit infrastructure?
I've got vSphere 5 on an R710 connecting to a 2TB EqualLogic SAN over gigabit using the software iSCSI initiator, and that shifts hundreds of MB/s over CIFS.
Granted, it's using 15k drives, but it's got a lot fewer of them (RAID10, so it's only actually striping over 7 of the 14 disks); I wouldn't have expected such a huge difference, especially as the EMC backplane is probably better than the EQL one.

If I were you I'd give iSCSI another go. It's a bit of a bugger to set up all the bindings and the MPIO, but it should perform very well. As you're a .ac.uk it's probably a lot less of a pain, because you're likely to be far more at home in a Linux CLI than I am :)
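
For what it's worth, this is roughly the shape of the port binding and round-robin setup on the ESXi 5 software initiator as I understand it. It's only a sketch: the vmk/vmhba names and the naa device ID are placeholders, and it assumes two VMkernel ports already exist, each pinned to a single physical NIC.

[CODE]
# Sketch of ESXi 5 software iSCSI port binding + round-robin MPIO via esxcli.
# vmk1/vmk2, vmhba33 and the naa ID are placeholders for your own values;
# it assumes each vmk port is already pinned to a single physical NIC.
import subprocess

def esxcli(*args):
    """Run an esxcli command in the ESXi shell and fail loudly if it errors."""
    subprocess.run(["esxcli", *args], check=True)

# Bind each VMkernel port to the software iSCSI adapter so it becomes
# a separate storage path.
for vmk in ("vmk1", "vmk2"):
    esxcli("iscsi", "networkportal", "add", "--nic", vmk, "--adapter", "vmhba33")

# Set the LUN to round-robin so I/O rotates across the bound paths.
esxcli("storage", "nmp", "device", "set",
       "--device", "naa.60000000000000000000000000000001",
       "--psp", "VMW_PSP_RR")
[/CODE]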
 
80MB/s was on the storage chassis attached to it, using 7.2k disks in RAID6 (only 6 x 2TB disks, so nowhere near enough spindles for high performance).

This is the worst case in our network; I didn't see the point in exaggerating the figures to help someone out.

Our VMs are running on 15k RAID10 over 12 spindles in the top chassis. It's dedicated to the VMs on NFS, so I can't give you a number for CIFS.

The box has a 4Gbps EtherChannel for CIFS and another 4Gbps for NFS, uplinking to two Cisco 3750Gs.

We are only using 1GbE switches here. Plenty fast. People bragging about 10GbE in their network are usually very misled individuals, normally followed by "The Netgear switches are really quick...." :rolleyes:

EDIT: The whole reason for a box such as this doing CIFS is to get rid of the fileserver in the network. It's just another thing to fail, which is precisely why we are not using iSCSI to a fileserver and then CIFS to the clients. I also don't understand what you mean by .ac.uk; we don't have a domain with that TLD.
 
[Darkend]Viper;21676181 said:
We've got 700 users, 440ish machines, and high expectations!! They expect sims to load instantly!!!

SIMS and "instantly" also don't belong in the same sentence, though in my experience this is more down to the client machines than the SQL server.
 
I wasn't saying you should use 10G or that 10G would fix anything (though it does make sense in a lot of places despite people's misgivings about it). Quite the opposite: I was pointing out that I'd expect an array like that to max out gigabit with ease, not top out at 80MB/s. I have an HDS SMS100 NAS/SAN unit (which I openly concede is a pile of cack in every sense) running 12 7,200rpm nearline disks in two RAID6 arrays, and it saturates 1GbE at ~120MB/s. In fact I'd fully expect two enterprise SATA drives in RAID1 to equal, if not better, 80MB/s.

Also, don't write off a fileserver as a pointless middleman without actually benchmarking it. Server 2008 does a lot of caching on shares that the SAN won't do, which reduces the load on the SAN and increases end-user performance. In a VM this is highly resilient, having been freed from dependence on any one piece of hardware. Though resilience is a moot point if you're running a single SAN array for it, as that is in itself a single point of failure. I have apps that insist on dishing their binaries up out of shares across the network; I can download 130MB binaries to several clients and only see one 130MB transfer from server to SAN. If you have 200 users all firing apps up at 9am on a Monday, this really does make a difference.
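
The back-of-the-envelope version of that, with the 130MB binary and a 200-user Monday morning as purely illustrative numbers:

[CODE]
# Illustrative numbers only: what the share cache saves the SAN when a
# crowd of clients all pull the same binary.
BINARY_MB = 130      # size of the application binary on the share
CLIENTS = 200        # users launching it at 9am on a Monday

without_cache_gb = BINARY_MB * CLIENTS / 1024   # every client read hits the SAN
with_cache_mb = BINARY_MB                       # fileserver cache absorbs repeats

print(f"SAN traffic with no caching fileserver: ~{without_cache_gb:.1f} GB")
print(f"SAN traffic with the share cached:      ~{with_cache_mb} MB")
[/CODE]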

Also, iSCSI affords benefits of its own, such as multipath I/O. EtherChannels are fine for aggregated bandwidth but don't offer multiple kernel I/O queues, and those are much more useful than raw bandwidth for getting the most out of your IOPS. I've benched both, and 4 discrete NICs running multipath iSCSI perform a lot better than teaming at layer 2, be it static, EtherChannel or LACP on an L3/4 hash. In networked storage, aggregating network bandwidth at the lower layers can actually hurt performance rather than improve it.
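
A toy illustration of the difference, if it helps (a sketch only, not any particular vendor's hash): one iSCSI session presents a single src/dst/port tuple, so an L3/4 hash pins it to one LAG member, whereas round-robin MPIO rotates commands across every path.

[CODE]
# Toy model: why one iSCSI session over a LAG uses a single member link,
# while round-robin MPIO spreads commands over every path. Sketch only.
LINKS = 4

def lag_member(src_ip, dst_ip, src_port, dst_port):
    """LACP/EtherChannel-style L3/4 hash: one flow always lands on one link."""
    return hash((src_ip, dst_ip, src_port, dst_port)) % LINKS

# A single iSCSI session keeps the same 4-tuple for every command.
flow = ("10.0.0.10", "10.0.0.20", 51000, 3260)
lag_links_used = {lag_member(*flow) for _ in range(1000)}

# MPIO round-robin: each path has its own session and I/O queue,
# and commands rotate across all of them.
mpio_paths_used = {cmd % LINKS for cmd in range(1000)}

print(f"LAG member links used by one iSCSI session: {len(lag_links_used)}")  # 1
print(f"Paths used by round-robin MPIO:             {len(mpio_paths_used)}")  # 4
[/CODE]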
 
Fair enough, I tried not to sound like I took it to heart ;)

TBH I think the bottleneck in my test would be my workstation... Yes, it has a gigabit link, but it still only has a single 7.2k SATA II disk in it, running Windows 7 at the same time. I wouldn't expect much more from it on a large sequential write.

I agree that MPIO does work well for increasing iSCSI performance, but during testing NFS came out on top.

Still think for the cost it is a very good piece of kit.
 