Post your hard drive benchmarks!

That is outrageously quick - I know the new Intel controllers are good, but 230MB/s average from 4x500GB WDs? Awesome - I guess my SCSI days really are nearing their end. Any idea why these are so much quicker than dedicated PCI-e controller-based solutions? I know I have an advantage for RAID levels other than 1 and 0, but still - hardly seems worth it if the onboards are this good :eek:

I got 205MB/s average on the RAID 5 I've got, which I posted earlier in the thread: http://forums.overclockers.co.uk/showpost.php?p=9461104&postcount=183
 
my damn 32-bit PCI bus :(

[attachment: hdd.jpg - benchmark screenshot]


Should be a bit quicker with a fresh installation of Vista and a defrag soon though :)

2x 15K 18GB Seagate Cheetahs in RAID 0 on an Adaptec 39320 :)
 
I got 205MB/s average on the RAID 5 I've got, which I posted earlier in the thread: http://forums.overclockers.co.uk/showpost.php?p=9461104&postcount=183

Very nice - but the problem is the CPU hit you take. 15% isn't bad - I thought it would be harder - but the advantage of my card is that it does all the XOR calcs onboard, freeing up the CPU for more important tasks (read: gaming!). Still an excellent score though. I suppose I can take some comfort from my seek times, but I still think it's time to consider moving!
 
Wouldn't that CPU utilisation be completely inconsequential on a quad-core gaming rig, though? I doubt a single game in the next two years will actually be CPU-limited on a quad core of any kind, so keeping 15% of one core free shouldn't be hard.

I'm kinda thinking of maybe going for a 4-drive RAID 0 for the speed. I'm not so worried about drive failure, but with, say, 4x250GBs, if one did die it's 1TB of data lost, hmmm.
 
But when one drive crashes, will I be locked out of all/most of my data, or how does it work?

The system continues to function: the data is rebuilt from parity on the fly, so even if Windows runs off the RAID, it will not crash. Performance is massively reduced while it's rebuilding, though.
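
To picture what "rebuilt from parity" means: RAID 5 parity is just the XOR of the data blocks in a stripe, so any one lost block can be recomputed from the survivors. A minimal Python sketch (toy block values, not how any particular controller lays things out):

```python
# RAID 5 rebuild in miniature: parity is the XOR of the data blocks in a
# stripe, so any single lost block can be recomputed from the rest.

def xor_blocks(blocks):
    """XOR equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

# One stripe on a 4-disk array: three data blocks plus their parity.
d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks([d0, d1, d2])

# The disk holding d1 dies: XOR the surviving blocks with the parity.
rebuilt = xor_blocks([d0, d2, parity])
assert rebuilt == d1  # the "on the fly" rebuild described above
```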
 
That is outrageously quick - I know the new Intel controllers are good, but 230MB/s average from 4x500GB WDs? Awesome - I guess my SCSI days really are nearing their end. Any idea why these are so much quicker than dedicated PCI-e controller-based solutions? I know I have an advantage for RAID levels other than 1 and 0, but still - hardly seems worth it if the onboards are this good :eek:


The reason onboard is so much quicker is that it's tied directly to the PCI-e bus at whatever speed the bus runs natively (16x or 32x, I believe). When you go to an external controller you're generally capped at 1x PCI-e, which is generally not so cool unless you're running an expensive card. There are some 4x PCI-e cards which help improve performance.

Edit: Meant you're running at 1x unless you have an expensive card, not that an expensive 1x PCI-e card makes a difference. More expensive = 4x or better with many ports (e.g. Promise SuperTrack EX8350 8-Channel SATA II RAID 5 Controller, PCIe 4x).

I'd REALLY like to see a load of flash drives in a RAID 0 (say a good 8-ish), as they're pushing a decent amount (say 60MB/sec) and give a flat transfer line, since there's no rotational latency. The seek times will also be grouped VERY low (in the region of a 4-5ms band). It would be a VERY expensive setup (with current prices well into the 3-4k mark), but I doubt much any of us have seen would touch it (we'd be talking around 450MB/sec with a 4-5ms access time).

A SLIGHTLY cheaper sweet spot at the moment would be 3-4 of them on a board with native RAID 5 for Windows, important docs and saves, plus 4 cheap 7200.10/AAKS drives on a cheap external controller for bulk storage (downloads/games/movies/etc.).
 
Been talking about it in the Fusion thread; essentially there's no reason they can't RAID 0 internally within an SSD. A normal drive has platters and moving parts, and there's a limit on heads due to friction, heat and space. But SSDs basically have a few tiny chips; they're small, so space is no issue. A tiny controller could handle RAID 0-style access within the drive - 8x4GB chips raided inside the card. As of today, all SSDs should be able to max out the 300MB/s cable. It's utterly ridiculous to bring out a single, quite poor-performing chip when multiple cheaper chips raided within the drive would be massively better. It would also spread the usage pattern across the chips, which could help, as they do have a fairly limited read/write lifespan.

Basically anyone can RAID 8 normal SATA drives now, but space-wise it's a pain; likewise 8 SSDs need 8 ports and are just a pain. The goal would be to RAID 0 multiple smaller chips within the drive - for me that was 99% of the reason to make SSDs at all, but they've screwed the pooch. Even if they were 100% better than other drives, they'd still sell normal drives: TV, films, pron all need big cheap storage, and speed is basically not needed.
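
As a quick back-of-envelope for the internal-striping idea (the per-chip figure below is a pure assumption, not a spec for any real SSD):

```python
# If writes are interleaved across N flash chips, their bandwidth adds up
# until the SATA link itself becomes the ceiling.

CHIP_MBPS = 40       # assumed sustained throughput of a single flash chip
SATA2_MBPS = 300     # SATA II link limit

for chips in (1, 4, 8):
    aggregate = chips * CHIP_MBPS
    print(chips, "chips ->", min(aggregate, SATA2_MBPS), "MB/s")
# 1 chips -> 40 MB/s, 4 chips -> 160 MB/s, 8 chips -> 300 MB/s (link-bound)
```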
 
The reason onboard is so much quicker is that it's tied directly to the PCI-e bus at whatever speed the bus runs natively (16x or 32x, I believe). When you go to an external controller you're generally capped at 1x PCI-e, which is generally not so cool unless you're running an expensive card. There are some 4x PCI-e cards which help improve performance.

Edit: Meant you're running at 1x unless you have an expensive card, not that an expensive 1x PCI-e card makes a difference. More expensive = 4x or better with many ports (e.g. Promise SuperTrack EX8350 8-Channel SATA II RAID 5 Controller, PCIe 4x).

Hmmm - that still doesn't quite explain it, as you get roughly the following:

PCI Express 1x = 250 [500]* MB/s
PCI Express 2x = 500 MB/s
PCI Express 4x = 1000 MB/s
PCI Express 8x = 2000 MB/s
PCI Express 16x = 4000 MB/s
PCI Express 32x = 8000 MB/s

So while you would start to bump into limits, just, on a 1x PCI-e controller, you should still be able to get over 200MB/s. And my controller is in the 1000MB/s range, so it's not the controller bus bandwidth that's slowing things down.
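
Putting rough numbers on it (the per-drive figure is an assumption for a typical 7200rpm SATA disk of the era, not a measurement):

```python
# Compare an array's aggregate sequential throughput against the PCI-e
# link bandwidth to see which side is the bottleneck.

PCIE_LANE_MBPS = 250    # PCIe 1.x, per lane, per direction
DRIVE_SEQ_MBPS = 75     # assumed average sequential read per drive

def bottleneck(drives, lanes):
    array_bw = drives * DRIVE_SEQ_MBPS
    bus_bw = lanes * PCIE_LANE_MBPS
    limit = "bus" if bus_bw < array_bw else "drives"
    return min(array_bw, bus_bw), limit

print(bottleneck(4, 1))   # (250, 'bus')    - a 1x card caps a 4-drive array
print(bottleneck(4, 4))   # (300, 'drives') - a 4x card leaves headroom
```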
 
Can someone just explain this for me, as I'm trying to get my head around stripe size? For example, for my array which is based on 3 disks, if I was to select a 32K stripe, does that mean it would divide a 32K file by 3 (as there are 3 HDs)? So basically more or less 11K of data on each hard drive?

Or does it work differently to that?
 
You're nearly there. Basically, the data is divided into 32K blocks, so anything smaller than 32K sits on only one disk, whereas larger files are split into however many blocks are required and distributed across the disks. It should be noted, though, that a block can contain multiple files, so you don't waste space in the way that large FAT clusters can.
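
If it helps, here's that layout in miniature, assuming a plain RAID 0-style distribution (RAID 5 also rotates a parity block between the disks, which this ignores):

```python
# Striping in miniature: data is cut into stripe-sized blocks and dealt
# round-robin across the disks.

STRIPE_KB = 32
DISKS = 3

def block_location(offset_kb):
    """Return (disk index, stripe row) for a given file offset."""
    block = offset_kb // STRIPE_KB
    return block % DISKS, block // DISKS

# A 32K file occupies a single block, so it lives on one disk...
print(block_location(0))            # (0, 0)

# ...while a 96K file spans three blocks, one per disk.
for offset in (0, 32, 64):
    print(block_location(offset))   # (0, 0) (1, 0) (2, 0)
```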
 