Need some help with Slow RAID5 Performance

Any suggestions on the following?

Hardware:
Dell PERC5/i SAS RAID Card
4x Samsung F3 500GB Single Platter Drives.

I'm getting (what I consider) slow Read speeds on this 4 Disk array when configured as RAID 5.

I did a series of single Disk and RAID0 tests and got the scaling performance I'd expect to see - with a 4 Disk RAID0, stripe size set to 128KB and a 256KB block test under HDTune, I was seeing massive Read speeds (450MB/s+ average read, and write speeds were very good too).

However, in RAID-5 I'm seeing Read performance significantly less than expected. The configuration was:

Adaptive Read-Ahead (Tried set to Always Read ahead as well)
Write-Back method (Write through is a no-no)
128 Stripe Size
Direct-IO (Tried set to Cached IO as well)

At first, average Read speeds were low, down in the 140MB/s region. After doing a bit of error checking and event log reading, I figured out the 'fast' parity initialisation wasn't much cop, so I rebooted into the Card BIOS, recreated the RAID-5 array and set off a full parity initialisation. After this completed, HDTune reported around a 210MB/s average read speed.

Take into consideration that on Write tests I'm getting around 340MB/s average write speed.

I'd half expect it to be the other way round, and would be happy with it that way :)

Any suggestions as to what the issue could be?
 
The write speeds could be higher due to the battery-backed cache telling the OS it has written to disk when really it hasn't. You can force this cache on without a battery, although you risk data loss in the event of a power failure/crash. If this is on, try turning it off and seeing what speeds you get.
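
If you want to see roughly how much a write cache flatters the numbers from the OS side, here's a minimal Python sketch. The file path and sizes are placeholders I've picked, and a write-back controller may still acknowledge the fsync from its own RAM, so treat the result as indicative only.

Code:
import os, time

# Compare apparent write speed with and without forcing data to disk.
# PATH and sizes are arbitrary placeholders; a write-back controller
# may still acknowledge fsync from cache, so this is only indicative.
PATH = "cache_test.bin"
CHUNK = b"\0" * (1024 * 1024)      # 1 MiB per write
TOTAL_MB = 256

def bench(force_sync):
    start = time.time()
    with open(PATH, "wb") as f:
        for _ in range(TOTAL_MB):
            f.write(CHUNK)
        if force_sync:
            f.flush()
            os.fsync(f.fileno())   # ask the OS (and drive) to commit
    rate = TOTAL_MB / (time.time() - start)
    print("synced" if force_sync else "cached", f"~{rate:.0f} MB/s")

bench(False)   # inflated by caching
bench(True)    # closer to what actually hit the platters
os.remove(PATH)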
 
That's a fair comment, which is why I would have expected the write performance to be a lot lower (I tested with Write Through and saw performance drop to sub-100MB/s speeds).

However, the Read speeds are just... weird. I'd expect something like 3 Drive RAID-0 Read speeds, i.e. somewhere around the 300MB/s mark for average reads.
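
As a back-of-envelope check, here's a quick Python sketch of idealised sequential-read ceilings. The per-disk figure is my own assumption, implied by the ~450MB/s RAID0 run rather than measured directly.

Code:
# Idealised sequential-read ceilings per RAID level; real controllers
# rarely hit these. Per-disk speed is assumed from the RAID 0 result.
DISKS = 4
PER_DISK = 450 / DISKS    # ~112 MB/s assumed per drive

print(f"per disk: ~{PER_DISK:.0f} MB/s (assumed)")
print(f"RAID 0:   ~{PER_DISK * DISKS:.0f} MB/s (striped over all {DISKS} drives)")
print(f"RAID 5:   ~{PER_DISK * (DISKS - 1):.0f} MB/s ({DISKS - 1} data drives per stripe row)")
print(f"RAID 10:  ~{PER_DISK * 2:.0f} MB/s (2 mirror sets; more if the card reads both halves)")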
 
I've never had what I considered great speeds out of PERC cards when using RAID 5. I don't have any equivalent RAID arrays configured to compare your figures against; any RAID 5 arrays I had were converted to RAID 10, and besides, they use 10/15K SAS drives.
 
Well, I could go RAID 10 instead, but then I'd probably get the same Read speeds as I get now, as it'd be striped over 2 virtual disk mirror sets.

Ho-hum.

Hope someone else can shed some light :/
 
with a 4 Disk RAID0, stripe size set to 128 and a 256KB block test under HDTune

Put the block size up on your benchmark. 256KB isn't an efficient size for a 4 Disk RAID5 - it can only ever read from three data drives at a time, and at a 128KB stripe a 256KB block doesn't even span all three.

Running an ATTO benchmark will show you performance on different block sizes.
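
To put rough numbers on that, here's a simple Python sketch of how many data drives a single read touches at each block size with a 128KB stripe. It's simplified - it ignores alignment, parity rotation and the card's read-ahead.

Code:
# How many of the 3 data drives in a 4-disk RAID5 one sequential read
# can keep busy, given a 128KB stripe size. Simplified model only.
STRIPE_KB = 128
DATA_DRIVES = 3    # 4 disks, minus one stripe of parity per row

for block_kb in (64, 128, 256, 384, 512, 1024):
    stripes = -(-block_kb // STRIPE_KB)    # ceiling division
    busy = min(stripes, DATA_DRIVES)
    print(f"{block_kb:>4}KB block -> spans {stripes} stripe(s), ~{busy} of {DATA_DRIVES} data drives busy")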
 
Thanks, I was using the 256KB Block size test as I've seen quite a few other RAID5 tests on HDTune using that setting, so I had something to compare to.

Anyways, the RAID 5 is no more, lol :D

And I agree on using ATTO; I've been testing it on the missus' PC and seen results consistent with what she has set up, so I feel I can trust it a bit better than HDTune.

The PC is back on my 320GB Drive with Vista on it, so I can bench freely on the RAID card without the OS getting in the way.

Will post back when I have some results and further thoughts.
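
In the meantime, for anyone following along without ATTO, here's a crude Python sketch of the same idea - timing sequential reads at increasing block sizes. The path and file size are placeholders, and unlike ATTO it doesn't bypass the OS file cache, so repeat runs will read hot and look too fast.

Code:
import os, time

# Crude ATTO-style sweep: sequential reads at increasing block sizes.
# PATH/FILE_MB are placeholders; this doesn't bypass the OS file
# cache the way ATTO does, so only the first cold run is meaningful.
PATH = "testfile.bin"
FILE_MB = 512

if not os.path.exists(PATH):               # build a test file once
    chunk = os.urandom(1024 * 1024)
    with open(PATH, "wb") as f:
        for _ in range(FILE_MB):
            f.write(chunk)

for block_kb in (64, 128, 256, 512, 1024):
    start = time.time()
    done = 0
    with open(PATH, "rb", buffering=0) as f:
        while True:
            data = f.read(block_kb * 1024)
            if not data:
                break
            done += len(data)
    rate = done / (time.time() - start) / (1024 * 1024)
    print(f"{block_kb:>4}KB blocks: ~{rate:.0f} MB/s")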
 
Ok, done some testing and here are the results:

Test Settings used across all tests -

MegaRAID Virtual Disk Options:
Stripe Size: 128KB
Read: Adaptive Read Ahead
Write: Write Back
IO Policy: Direct IO
Access Policy: Read Write
Disk Cache Policy: Enabled
Init State: Fast Initialization

Windows Disk Management Options:
Disk Initialised - MBR Partitions
Set as Basic Disk
Simple volume spanning whole disk
Quick Format: NTFS, Default Allocation Unit size

For RAID 5 and 10, a full Disk initialisation was performed prior to testing. All tests were run at least 5 times to ensure they were consistent with each other.

RAID Zero (RAID 0)

[Image: raidzero.jpg - ATTO results for RAID 0]


RAID Five (RAID 5)

[Image: raidfivepostini.jpg - ATTO results for RAID 5, after full initialisation]


RAID Ten (RAID 10)

[Image: raidtenpostini.jpg - ATTO results for RAID 10, after full initialisation]


It seems to me that RAID 10 gives the best consistency, averaging 300MB/s read at 128KB Block sizes and above. The 64KB result is a bit of an anomaly, and it showed up across the several tests made.

For best top-end speed, however, RAID 5 gives the better results, hitting 400MB/s at 1MB Block sizes and above, although both the 128KB and 256KB block tests show low results.

Obviously, RAID 0 trumps both of them, but doesn't offer the redundancy that I want..
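
A quick bit of arithmetic on those figures in Python (the per-disk rate is inferred from the RAID 0 run, not measured):

Code:
# Rough arithmetic on the results above; per-disk speed is inferred
# from the ~450MB/s RAID 0 figure rather than measured directly.
per_disk = 450 / 4    # ~112 MB/s implied per drive
print(f"RAID 5 peak: 400 MB/s = ~{400 / per_disk:.1f}x one disk")
print(f"RAID 10 avg: 300 MB/s = ~{300 / per_disk:.1f}x one disk")
# RAID 10 coming in above 2x a single disk hints the card is reading
# from both halves of each mirror, not just one side.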

Comments and thoughts ??
 
Hi Arthalen, interesting results - I nearly went for a PERC5/i card a while ago, and if I had, I would have been doing this sort of thing. TBH I steered away from it as it would have been something I wanted to play with more than something I really needed.
There's an obvious trade-off between speed, capacity and fault tolerance here, and I get the impression you're more after speed. So, as RAID isn't a substitute for backups, why not go RAID0 and get yourself a 2TB disk or two to back up to for peace of mind?
 
Thanks for the comments, Wonko (Great name btw ;)

I've had a really good think about this, and at the moment I'm swinging towards a RAID 10 setup....
 