I have finally managed to get my 4-drive array running again and have spent the afternoon benching it, with some surprising results. Some good, some bad, but certainly an education.
I'm running:
4x 72GB 15K.4 Seagate Cheetahs
Dell PERC 4e/DC (LSI MegaRAID 320-2E) - Dell boot drivers / LSI Windows drivers
The drives are split between the 2 channels with 2 drives each.
There are also 3x 18GB 15K.3s and a single 36GB 15K.4 spread across the 2 channels.
I spent about 3 hours ripping out the old config and installing the new drives. (I got an enclosure for the 4 new ones, but not only did it turn out to be U160 only, it also corrupted my Windows install, so I had to reinstall everything.)
Eventually got everything in - some in my tower, some in my home-built enclosure (channel 0 is in the tower, channel 1 in the other) - fired it up, ran the default settings and got the shock of my life:
Using HDTach - I'm only really interested in the sustained read:
64KB - Write Back - Read Ahead - Direct IO (RAID 0)
127 MB/s was a massive disappointment, as I was getting more out of my 2-disk array with the older 15K.3s! So I began playing:
128KB - Write Back - Read Ahead - Cached IO (RAID 0)
Better - but still only just faster than my old setup. This was the only one of these runs I did with Cached IO (going through the card's onboard 256MB of cache instead of transferring straight to system RAM). On most of the setups it gave a slower average speed, but as you can see it produces a far 'flatter' graph and a much more consistent speed - I'd be interested to see what that does under heavy real-world loads!
32KB - Write Back - Read Ahead - Direct IO (RAID 0)
Nice...
16KB - Write Back - Read Ahead - Direct IO (RAID 0)
>170MB/s - now we're talking - this is a lot closer to what I was hoping for! It's the fastest speed I was able to achieve with the 4 drives in RAID 0, but I still think something is holding them back - the drives should be able to sustain well over 200MB/s. And although all of these runs top out at about 220MB/s max read, at 32KB the peak went all the way up to 260MB/s!! Now that would be awesome.
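For what it's worth, here's the back-of-envelope maths behind that "well over 200MB/s" claim (just a rough sketch - the per-drive figures are assumed from the published 15K.4 sustained-transfer range, not measured on my own drives):

```python
# Rough back-of-envelope for what 4x Cheetah 15K.4s "should" manage in RAID 0.
# NOTE: the per-drive figures are assumptions taken from the published 15K.4
# sustained-transfer range, not measurements off my own drives.

drives = 4
per_drive_outer = 96   # MB/s, assumed outer-zone sustained read
per_drive_inner = 58   # MB/s, assumed inner-zone sustained read

print("Outer-zone ceiling:", drives * per_drive_outer, "MB/s")  # 384 MB/s
print("Inner-zone floor:  ", drives * per_drive_inner, "MB/s")  # 232 MB/s

# Each U320 channel is good for ~320 MB/s, and with only 2 of these drives per
# channel (~192 MB/s worst case) the SCSI bus shouldn't be the bottleneck -
# which is why a ~170 MB/s average makes me suspect the card/drivers.
```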
Obviously I played around with each of the variables, and I have to say they made little difference:
Write Back/Write Through: about 2-3% faster with Write Back.
Read Ahead: again, forcing it on rather than leaving it on Adaptive was worth about 2-3%.
Direct/Cached IO: depended on the stripe size - some took a large hit, others didn't, but the graph was always 'flattened' by using the onboard cache.
Burst speeds: HDTach is notoriously unreliable for SCSI drives, so I tend to ignore burst speeds, but it was interesting to see the effect the onboard cache had - nearly doubling the burst speed. Although for some reason this wasn't true when I tried it at 16KB.
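If anyone wants to sanity-check HDTach's figures, something crude like this would do the job (a minimal sketch in Python - the test-file path is made up, and you'd want the file much bigger than your RAM so the Windows file cache doesn't skew the numbers):

```python
# A minimal sketch (not HDTach) for cross-checking sequential reads at the same
# block sizes as the stripes I tried. Assumes a large test file already sits on
# the array - the path below is made up, and the file needs to be much bigger
# than system RAM so the Windows file cache doesn't inflate the numbers.
import time

TEST_FILE = r"D:\bench\testfile.bin"   # hypothetical file on the RAID 0 volume
READ_TOTAL = 1 * 1024 ** 3             # read 1 GB per pass

for block_kb in (16, 32, 64, 128):
    block = block_kb * 1024
    done = 0
    # buffering=0 keeps Python's own buffering out of the measurement
    with open(TEST_FILE, "rb", buffering=0) as f:
        start = time.perf_counter()
        while done < READ_TOTAL:
            chunk = f.read(block)
            if not chunk:
                break
            done += len(chunk)
        elapsed = time.perf_counter() - start
    print(f"{block_kb:>3} KB reads: {done / elapsed / 1024 ** 2:.1f} MB/s")
```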