RAID 0 and RAID 10

I currently have 4 x 1TB hard drives in RAID 10, partitioned so that the software is separate from my data. I'm thinking of starting over by having two RAID arrays set up across all 4 disks: RAID 0 for the operating system, and RAID 10 again for the storage partition. Can I do this? Do you think I'll see a significant performance improvement?
 
Intel's Matrix RAID can do some wacky things with mixed RAID arrays but I've never heard of it mixing RAID0 & 10.
 
When you say "wacky", do you mean bad things? Or just that it has lots of weird features?

If I can't do RAID0 and RAID 10, then I might consider doing RAID0 and RAID 5 - have you heard of this being done?
 
Yeah, wacky as in bizarre features. Mixing RAID0 and 5 is certainly possible but don't expect stellar write performance on the RAID5 array.
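To illustrate why RAID5 writes take a hit: every small write is a read-modify-write of the parity block. A minimal sketch with single-byte "blocks" and made-up values (not from any real controller):

```python
# Why RAID5 small writes cost extra: updating one data block also means
# reading the old data and old parity, then writing new data and new parity.

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

old_data, old_parity, new_data = b"\x0f", b"\xf0", b"\xaa"

# new parity = old parity XOR old data XOR new data (no need to read other disks)
new_parity = xor(xor(old_parity, old_data), new_data)

# Cross-check: recompute parity from scratch for a 2-data-disk stripe
other_data = xor(old_parity, old_data)     # what the other data disk holds
assert new_parity == xor(new_data, other_data)
print(new_parity)   # b'U' (0x55)
```

So a single logical write turns into four disk operations, which is where the poor RAID5 write performance comes from.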
 
Yep, Matrix RAID can use the same disks in multiple different arrays, but the usual rule is that a disk can only belong to one array.
 
btw

never rely on your raid setup as a backup solution

I'm not, I also use an external backup drive to store a second copy of my most important and critical files. I'm employing a raid setup to reduce the probability of losing critical and non-critical data in case of a drive failure. However I'm now looking at changing the system partition so that it's in raid 0 to improve system performance.
 
cool cool, sounds like you know what you're doing

, just didn't want you to be another one of those "my raid died, and i lost 1000000000000 photos, wtf" people :)
 
In short, yes you can do it, as I just tested it on my Areca card, but performance will all depend on the drives and your RAID card/chip. It is sometimes referred to as Matrix RAID as Intel got there first :)

It's splitting hairs, but you are actually talking about RAID sets and volume sets.

So 4x1TB drives will form your RAID drive set.

Then you can create a Volume set of 100GB in RAID0.

And then create a second Volume set of the remainder in RAID0+1.

The theory of short-stroking the drives in RAID0 for maximum performance is pretty sound, and depending on your setup it should work fine.

There's a better explanation here: http://www.intel.com/design/chipsets/matrixstorage_sb.htm
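If it helps, here's a rough back-of-envelope sketch of how those two volume sets carve up the raw space on each disk. The numbers are idealised (a flat 1000 GB per drive, no metadata overhead), so treat it as the principle rather than what your controller will report:

```python
# Rough sketch of how Matrix-style volume sets carve up each disk in the
# 4-drive RAID set (idealised 1000 GB drives, metadata overhead ignored).

def raw_per_disk(usable_gb, level, disks=4):
    """Raw GB consumed on each disk by a volume set of the given usable size."""
    if level == "raid0":
        return usable_gb / disks          # striped, no redundancy
    if level in ("raid10", "raid0+1"):
        return usable_gb * 2 / disks      # mirrored stripes: 50% efficient
    if level == "raid5":
        return usable_gb / (disks - 1)    # one disk's worth of parity
    raise ValueError(level)

disk_gb = 1000
os_raw = raw_per_disk(100, "raid0")        # 25 GB taken from each disk
remaining = disk_gb - os_raw               # 975 GB left per disk
storage_usable = remaining * 4 / 2         # RAID0+1 over the rest: 1950 GB usable
print(os_raw, remaining, storage_usable)
```

The key point is that the first (outer, fastest) region of every disk goes to the RAID0 volume set, and the second volume set is built from whatever is left.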
 

That is a great explanation! In fact, and I'm not just saying it, your post has been the most helpful I have received here on Overclockers - especially considering you have gone out of your way to test it on your RAID controller. Thank you! :D



Now anyways, I think I have come to a conclusion. I will have a RAID drive set of 4 x 1TB drives. I have been thinking of which would be better; RAID 5 or RAID 10 for my second volume set. The two options are:

RAID 10 option:
Volume set 1: RAID 0, 320GB
Volume set 2: RAID 10, 1700GB

RAID 5 option:
Volume set 1: RAID 0, 1000GB
Volume set 2: RAID 5, 2000GB

Even though the write speed of RAID 5 is slower than that of RAID 10, I think it's a good compromise for an extra 1TB of disk space. If I needed space with a faster write speed - for example when working with video - I could use the extra space on the system volume while working with the files, and transfer the finished files to the storage volume afterwards. Also, the majority of the files on the storage volume are not being edited or changed, only read from disk - and I am led to believe that the read speed of RAID 5 is comparable to RAID 10.
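Running the same per-disk arithmetic over both options bears this out. This is a sketch, assuming each "1 TB" drive gives roughly 930 GB of real capacity and ignoring metadata overhead:

```python
# Sanity-check of the two layouts (assumes ~930 GB of real capacity per
# "1 TB" drive, metadata overhead ignored).

def raw_per_disk(usable_gb, level, disks=4):
    """Raw GB each disk contributes to a volume set of the given usable size."""
    if level == "raid0":
        return usable_gb / disks               # striped, no redundancy
    if level == "raid10":
        return usable_gb * 2 / disks           # mirrored stripes: 50% efficient
    if level == "raid5":
        return usable_gb / (disks - 1)         # one disk's worth of parity
    raise ValueError(level)

# RAID 10 option: 320 GB RAID0 + 1700 GB RAID10
opt10_raw = raw_per_disk(320, "raid0") + raw_per_disk(1700, "raid10")   # 930.0 GB/disk
# RAID 5 option: 1000 GB RAID0 + 2000 GB RAID5
opt5_raw = raw_per_disk(1000, "raid0") + raw_per_disk(2000, "raid5")    # ~916.7 GB/disk

print(opt10_raw, opt5_raw)
print((1000 + 2000) - (320 + 1700))   # extra usable GB with the RAID5 layout
```

Both fit on the disks, and the RAID5 option yields roughly 980 GB more usable space in total, which matches the "extra 1TB" above.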
 

You are welcome :)

It's an interest of mine through work, so I would rather educate people to make their own decisions than say "Do this, cos I said so, like some around here!" :)

If you fancy reading more, try this: http://en.wikipedia.org/wiki/RAID

And if you are still awake, this takes the principles to the next level: http://en.wikipedia.org/wiki/Nested_RAID_levels

Have fun :)
 
I did some playing about last night and this morning - testing different stripe sizes, and today made a decision to use the following set up:

System volume set: 300GB in RAID 0 with 32KB stripe size
Storage volume set: 2.5TB in RAID 5 with 128KB stripe size

I discovered that a 64KB stripe used 5% more CPU time than 128KB, and 32KB used 5% more than 64KB. I chose 32KB because I noticed that Windows would boot quicker and several programs would load quicker. I had already decided against 128KB for my system drive, and figured that spending 5% more CPU time so that programs loaded quicker was justifiable.
However, for the storage drive I decided to use 128KB stripes, as 90%+ of the files on this volume are very large, and using a smaller stripe size would reduce performance when reading from this volume.

I also decided to resize my volume sets from 1TB in RAID 0 and 2TB in RAID 5 to 300GB in RAID 0 and 2.5TB in RAID 5, so that I could store more on my storage volume. I had tested the write speed of the RAID 5 volume last night and found it sufficient.

I also found that resizing the RAID 0 volume from 1000GB to 300GB improved the seek time in HDTach and HDTune by 2-3ms. However, I do know that these are just benchmarks, and that my RAID setup isn't really any faster - the benchmarks are only seeking across 30% of the span they covered before.
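As a toy model of why your large files favour the bigger stripe: the fewer stripe units a sequential read has to be split into, the less per-request overhead each disk (and the CPU) handles. The 700 MB file size and the one-request-per-stripe-unit simplification below are my assumptions, not measurements:

```python
# Toy model: stripe units each disk handles for one sequential read,
# under the simplification that every stripe unit is one request.
import math

def requests_per_disk(file_kb, stripe_kb, data_disks):
    stripes = math.ceil(file_kb / stripe_kb)   # stripe units the file spans
    return math.ceil(stripes / data_disks)     # units handled by each disk

# A 700 MB file on the 3 data disks of a 4-disk RAID5:
for stripe_kb in (32, 64, 128):
    print(stripe_kb, requests_per_disk(700 * 1024, stripe_kb, 3))
```

Each doubling of the stripe size halves the number of units, which lines up qualitatively with the roughly 5% CPU difference you measured per step.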
 