Clustering performance

I'm in the middle of a storage upgrade project, removing legacy SAN kit and replacing it with Dell PowerVault 220S shelves. The PV220s are loaded with 14 15k 300GB SCSI drives and put into a cluster with two Dell PowerVault 6450s running Windows Server 2003 Enterprise. The controller cards are PERC 3/DCs.

I've got everything running fine, but the performance doesn't seem to be that great. Clustering forces you to use WRITETHRU within the PERC BIOS, which I know is slower than WRITEBACK, but even so I'm only getting speeds of 5-8MB/s, which seems really slow to me.

Does this sound right to you guys, or could I be missing something?

EDIT: Forgot to say, I'm using RAID 5 and have updated to the latest BIOSes for the servers/PERCs etc. Also, I've done two sites (with identical kit) so far and this is the same at both, so I don't think it's a fault.
 
Hi, I don't really have much experience in clustering, but I would agree that is very low; our 15k SAS Seagates in RAID 5 (non-clustered) can sustain around 90MB/sec.

I wouldn't have thought WRITETHRU would slow it down by that much.
 
This probably belongs in the storage section of the forum. How many spindles do you have in total, and how are you measuring the speed? RAID 5 is also one of the slowest RAID levels around, with RAID 1+0 being much faster, but obviously at the cost of disk space.
 
Noxis said:
This probably belongs in the storage section of the forum. How many spindles do you have in total, and how are you measuring the speed? RAID 5 is also one of the slowest RAID levels around, with RAID 1+0 being much faster, but obviously at the cost of disk space.

Yeah I did think about putting it in storage, but I've noticed the sys admin types tend to hang out more in here.

Got 14 300GB disks in there (13 in the RAID 5 arrays and a hot spare). Details of the PV220 here: http://www.dell.com/content/products/productdetails.aspx/pvaul_22xs?c=us&cs=28&l=en&s=dfb

I'm using an app called ViceVersa, which mirrors data, and that's what's reporting the transfer rate, but I can check MS Performance Monitor to make sure it's not being misreported, I guess.

Thanks for the help so far.
 
starscream said:
Yeah I did think about putting it in storage, but I've noticed the sys admin types tend to hang out more in here.

Got 14 300GB disks in there (13 in the RAID 5 arrays and a hot spare). Details of the PV220 here: http://www.dell.com/content/products/productdetails.aspx/pvaul_22xs?c=us&cs=28&l=en&s=dfb

I'm using an app called ViceVersa, which mirrors data, and that's what's reporting the transfer rate, but I can check MS Performance Monitor to make sure it's not being misreported, I guess.

Thanks for the help so far.

Unfortunately I am a Unix sysadmin so can't really help that much with testing it under Windows; however, the theory would be the same.

What I would do in your case for sheer throughput is to dd /dev/zero into a really large file on the storage mount, as this incurs nearly zero CPU time but maximum IO. You'd only be testing write speed, but it would be at the maximum the disks can manage, as it's just a stream of zeros. This would give you a speed indication of the interface/bus.

How you do that in Windows, though, I don't know. A simple and possibly not *that* reliable program (never tried it on enterprise-size solutions, only my RAID 0 at home), but worth trying, is one called HDTach.
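If you've got Python (or anything similar) handy on those boxes, a rough sketch of the same zero-write test works on Windows too. The path, block size and total size below are just examples, so adjust them for your volume:

# Rough Windows-friendly equivalent of dd'ing /dev/zero into a big file:
# write a few GB of zeros sequentially to the clustered volume and report
# the average throughput. Path and sizes are examples only.
import os
import time

TEST_FILE = r"E:\throughput_test.bin"   # change to a path on the PV220 volume
BLOCK = b"\0" * (1024 * 1024)           # 1 MB of zeros per write
BLOCKS = 4096                           # 4 GB total, enough to swamp any cache

start = time.time()
f = open(TEST_FILE, "wb", 0)            # unbuffered binary writes
for _ in range(BLOCKS):
    f.write(BLOCK)
f.flush()
os.fsync(f.fileno())                    # make sure it has actually hit the disks
f.close()
elapsed = time.time() - start

print("Wrote %d MB in %.1f s = %.1f MB/s" % (BLOCKS, elapsed, BLOCKS / elapsed))
os.remove(TEST_FILE)                    # tidy up afterwards

If that reports a sensible sequential figure, the 5-8MB/s is probably down to how ViceVersa is copying (lots of small files and metadata overhead) rather than the array itself.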

If this is not in service yet, I'd also test it with RAID 1+0. You could also look into what size files are going to be stored on the array and adjust the stripe/block size accordingly.
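To put rough numbers on the RAID 5 versus RAID 1+0 trade-off (back-of-envelope only, using the 14 x 300GB disks mentioned above and the textbook write penalties):

# Back-of-envelope RAID 5 vs RAID 1+0 comparison for the 14 x 300GB shelf.
# Textbook small-write penalties: RAID 5 turns each random write into
# 2 reads + 2 writes (penalty 4), RAID 1+0 into 2 mirrored writes (penalty 2).
DISK_GB = 300
DISKS = 14
HOT_SPARES = 1

usable_disks = DISKS - HOT_SPARES                     # 13 spindles in the array

raid5_capacity_gb = (usable_disks - 1) * DISK_GB      # one disk's worth lost to parity
raid10_capacity_gb = (usable_disks // 2) * DISK_GB    # disks paired into mirrors, odd disk left over

print("RAID 5  : %d GB usable, small-write penalty 4" % raid5_capacity_gb)
print("RAID 1+0: %d GB usable, small-write penalty 2" % raid10_capacity_gb)

So you'd roughly halve the usable space going to RAID 1+0, but every small write costs half as many disk operations, which is where the extra speed comes from.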
 