Anyone running 4x SSD in RAID 0?

I'm looking at running 4x ~256GB SSDs in a RAID 0 array, but I'm curious if anyone else has done this and what kind of numbers you get. I'll be limited to SATA2 speeds using a PERC 6 controller, but since that should give in excess of 250MB/s sequential read for a single drive, am I being overly optimistic to expect around 800MB/s read for 4 drives?

What I do know so far is that on my AMD workstation I run a pair of 500GB mechanical drives in RAID 0 and get 200MB/s read/write, and on the server at the moment there are 4x 250GB mechanical drives in RAID 0 (three are the same, one isn't, all quite old) and that gets about 230MB/s read/write, but using a more basic SAS 6 card. I would expect that 4 decent modern mechanical drives of that capacity would see 350-400MB/s - right? Or about half the performance of the SSDs, in theory.

Unless I'm way off with this :D
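For what it's worth, here's the back-of-the-envelope version of that estimate as a quick sketch. The 250MB/s per-drive figure and the scaling efficiencies are assumptions, not measured numbers - RAID 0 rarely scales perfectly due to controller overhead:

```python
# Rough RAID 0 sequential throughput estimate.
# per_drive_mb_s and the efficiency factors are assumptions,
# not benchmarks - real scaling depends on the controller.

def raid0_estimate(per_drive_mb_s, num_drives, efficiency):
    """Estimated array sequential throughput in MB/s."""
    return per_drive_mb_s * num_drives * efficiency

low = raid0_estimate(250, 4, 0.75)   # pessimistic scaling: 750 MB/s
high = raid0_estimate(250, 4, 0.90)  # optimistic scaling: roughly 900 MB/s
print(low, high)
```

On those assumptions the ~800MB/s hope sits inside the plausible range, which matches the 3-3.5x rule of thumb mentioned later in the thread.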
 
Your mechanical drives are hobbled by their access times; even a single SSD will be faster. I've run SSDs singly, in RAID 0 and as Z68 flash cache, and I can honestly say I've only noticed a performance difference between them in benchmarks. In real terms you're looking at differences of a couple of seconds maybe, which is hardly noticeable.

It's a case of diminishing returns. Going from mechanical to SSD will be a hugely noticeable improvement. Going from a single SSD to RAID 0 SSDs will not really make much difference day to day. But if you can spend that much on getting amazing benchmark results (with no TRIM) then go for it!

e: Meant to say I've got 4 SSD drives which at one stage were all in my main system, with 1 as OS drive, 1 as app drive and 2 in RAID 0 for games. I have changed that setup so that 1 is still my OS drive and 1 is flash caching a 1TB mechanical drive for all apps and games. Of the other two, 1 has moved into my spare laptop, which has given it a new lease of life, and 1 has gone into my media centre PC, which has improved that immensely as well. I'm much happier with this setup as overall it has given me far more benefits than opening my games a couple of seconds quicker on the main system.
 
Completely understand, I just fitted an SSD in my netbook and it went from barely usable to almost as fast as my workstation.

The purpose of this exercise is to make the maximum use of a 10Gb network link, which is primarily going to be used for video editing (storing source footage and cache files).

Does my rough math stack up? I'm guesstimating 3-3.5x the performance of one SSD, which seems reasonable based on what I've learned from RAID 0 so far.
 
Ah right, I see what you are doing. Is the PERC card PCI? PCI-e? PCI-X? That's more likely to be your bottleneck.
 
It's PCI-e, x8, and each SATA2 port is good for 3.0Gb/s I believe, but the LSI SAS1078 chip wasn't designed with SSDs in mind so it won't be completely optimal. It should still be much faster than mechanical drives as long as I get the caching options right.
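A quick sanity check suggests the slot itself shouldn't be the bottleneck. This assumes the PERC 6 sits on a PCIe 1.x x8 link; PCIe 1.x signals 2.5 GT/s per lane with 8b/10b encoding, leaving roughly 250MB/s of payload per lane per direction:

```python
# Rough PCIe 1.x x8 bandwidth check (assumed link generation).
# 2.5 GT/s per lane with 8b/10b encoding leaves ~250 MB/s of
# payload per lane, per direction.

lanes = 8
per_lane_mb_s = 250
link_mb_s = lanes * per_lane_mb_s  # ~2000 MB/s each direction

target_mb_s = 800  # the hoped-for 4x SSD sequential read
print(link_mb_s, link_mb_s > target_mb_s)
```

So even a gen-1 x8 link has headroom over an 800MB/s target; the controller chip is the more likely limit.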
 
Yes, then I reckon your original math is about right; 3-3.5x performance is a good point to aim for. The only problem I foresee is that there will be no TRIM or GC on that controller, which will degrade performance over time, in some cases quite drastically. This is what you need to look into properly before deciding to go with this. In the worst case scenario you'll need to break the array, zero-write each disk individually and then rebuild the array every so often.

e: also you'll need to look into partition alignment. I believe Server 2008 R2 will optimise SSDs automatically like Win 7 does, but anything earlier and you'll have to manually align partitions using GParted or something similar. Hopefully the PERC drivers are not too hard to get sorted in Linux.
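The alignment check itself is just modular arithmetic. A minimal sketch, assuming a 4 KiB SSD page and a 128 KiB stripe boundary (check your actual drive and array settings - these figures are illustrative):

```python
# Partition-alignment check. The 4 KiB page and 128 KiB stripe
# boundaries below are assumptions - substitute your SSD's real
# page/erase-block size and the array's stripe size.

def is_aligned(start_offset_bytes, boundary_bytes):
    return start_offset_bytes % boundary_bytes == 0

legacy_start = 63 * 512     # old XP-era default: partition at sector 63
modern_start = 1024 * 1024  # Vista/Win7/2008 R2 default: 1 MiB offset

print(is_aligned(legacy_start, 4096))    # False - misaligned
print(is_aligned(modern_start, 131072))  # True - 128 KiB aligned
```

The 1 MiB default is handy because it's a multiple of every common page and stripe size, which is why the newer Windows versions get this right automatically.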
 
Yeah, that's been a concern in the back of my mind. I believe some drives have their own built-in GC that works independently of the controller/OS - the Crucial M4s possibly?

All the data will be backed up so it's definitely an option to periodically zero out the drives :)

When I've got the kit together I'll also test out RAID 5 since that ought to make drive cleaning a bit simpler, removing the need to restore from backup (although doing that is of course a good test of the backup!).

Yup, will be running 2008 R2, but I can always create a 1024/2048MB partition manually. Thinking about it though... if blocks are going to be spread around the disks, is it such a big deal on an array?
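On the striping question: a sketch of a simple RAID 0 layout (stripe size and mapping here are assumptions) suggests alignment still matters, because a misaligned partition start shifts the per-disk offset by the same amount on every member, so each SSD individually still sees misaligned I/O:

```python
# Simple RAID 0 mapping sketch: logical byte offset -> (member
# disk, offset on that disk). Stripe size is an assumed 64 KiB.

STRIPE = 64 * 1024
DISKS = 4

def raid0_map(logical_offset):
    stripe_index = logical_offset // STRIPE
    disk = stripe_index % DISKS
    offset_on_disk = (stripe_index // DISKS) * STRIPE + logical_offset % STRIPE
    return disk, offset_on_disk

# An old sector-63 partition start lands at byte 32256 on disk 0,
# which is not a multiple of a 4 KiB SSD page - still misaligned.
disk, off = raid0_map(63 * 512)
print(disk, off, off % 4096)
```

So the array spreads blocks around, but it doesn't fix a bad starting offset - every member disk inherits it.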
 
I once ran 4 OCZ Vertex 256GB SSDs in RAID 0 using SATA2. The speed I got was just over 700MB/s but it was not very reliable. I think the lack of reliability was down to the Vertex SSDs.

Instead of using SSDs, have you thought about using something like a RevoDrive? If you use the right version in a PCIe 2 slot you can get much higher speeds.
 