Review my RAID plans please

Hi guys, any chance you could give my proposed new RAID array plans a once-over? I would appreciate any input you could provide on my decision.

I plan to get:

1 x AMCC 3Ware 9690SA-8I-KIT PCI Express SAS Raid Controller

1 x Battery backup unit for the above controller for write caching (not sold on OCUK)

4 x Seagate Barracuda 7200.11 1.5TB SATA-II 32MB Cache

I intend to run those drives in RAID 10. But I also have a Raptor-X and another drive which I would like to run independently. Would it be wise to use these on the same controller as independent disks (for performance gains)? Or should I just keep them where they are on my motherboard?

All comments welcome.
 
Oh sorry I should have explained that. I needed a RAID solution that would transcend multiple operating systems on separate partitions. Basically I will be using this array as my storage partition which my different OS's can use. True hardware RAID is my only real choice as far as I can see, and I didn't want to compromise on performance.
 
Onboard RAID would still do the same job. You can set up onboard RAID before you load any OS onto the machine, and the partitions will be seen by all OS's. Any mobo with an ICH8 or newer southbridge should be able to do RAID 10.
No need for hardware RAID unless you're doing parity calculations.
 
With respect, I don't believe that there is any onboard RAID solution (on an enthusiast-level motherboard) that offers that kind of functionality. I mean to use this volume between Windows, various Linux distributions and other operating systems that won't work with software RAID due to driver limitations. If you could provide an example where this is not the case, I would be interested to see it. My own motherboard is in my sig, btw.
 
I would do some research on those 7200.11s; there was a bad batch of the Seagate 7200.11 drives that would randomly die... (AFAIK the Seagate 7200.10 and 7200.12 are entirely free of this problem.)
 
No need to buy that controller. Even if you needed it to transcend systems (i.e. motherboards), a basic software RAID card (4-port) that supports RAID 10 would be sufficient. If not, you can still get a proper RAID card, but if you never intend to do RAID 5, some cards do away with the XOR processor, which will save you a fair bit (~£200-300).

On top of that, I would recommend the WD Black 1TB drives. The Seagates have been riddled with problems of late, firmware issues etc. Plus, the Blacks give you a 5-year warranty. I know it sacrifices space, but it's worth it for a better disk.
 
Thanks for the heads-up regarding the 7200.11s. I had no idea about those issues, but after reading up on them I have very much lost my appetite for those drives...

As it happens, the Western Digital RE3 would have been my second choice, though it is a shame to lose the extra disk space. I understand what you're saying about the lack of requirement for the RAID 5 functionality of the card. It's something I can do without, but I just can't seem to find a cheaper RAID card that has the driver support I need. If you know of one, could any of you link me to a manufacturer's website, please? (I don't think that's against the rules.)
 
With respect, I don't believe that there is any onboard RAID solution (on an enthusiast-level motherboard) that offers that kind of functionality. I mean to use this volume between Windows, various Linux distributions and other operating systems that won't work with software RAID due to driver limitations. If you could provide an example where this is not the case, I would be interested to see it. My own motherboard is in my sig, btw.

It can be done with some fiddling - https://help.ubuntu.com/community/FakeRaidHowto
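For reference, getting a fakeraid set visible under Linux boils down to a couple of commands (a rough sketch assuming dmraid is installed; the set names it reports vary by chipset):

    # list the BIOS RAID sets dmraid can see on the controller
    sudo dmraid -r
    # activate all discovered sets; block devices then appear under /dev/mapper/
    sudo dmraid -ay
    ls /dev/mapper/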

However, your board doesn't support RAID 10, only the inferior RAID 0+1 (which mirrors two stripes rather than striping mirrors, so it survives fewer two-disk failure combinations), so it's a moot point.
Still, 400 quid is a lot if you just want to run a few SATA drives. You can get a Perc 5/i for less than a hundred quid on a popular auction site, which will do the same job.
 
I think I can contribute usefully here, as I use linux, windows, and raid.

First off, if you use the onboard controller, you cannot run a RAID array and a normal OS drive at once. I bought a Raptor and 4 storage drives planning on RAID 10 with the Raptor alongside, and it just can't be done on a single controller. If you get a RAID card, that's a different story. OS drive on the motherboard and storage on the RAID card will outperform everything on one controller, especially as the onboard simply won't run both, and I'm not convinced the card can either. This is based on my Intel ICH9 and ICH10R, and on a long time on Google.

Next, why RAID 10? If you were running the OS from it I would understand, but with the OS on a separate drive, RAID 5 starts to look a lot more appealing. I appreciate the comments that the parity calculations take time and will benefit from a good card. However, if you use the onboard system, it offloads the parity calculations to the main processor anyway. This performance hit is difficult to measure, and definitely doesn't justify a 400 quid card.
Data moves to and from RAID 5 more slowly than RAID 10, and the redundancy suffers. However, the capacity is far better: as soon as you have four or more disks, it rapidly outshines RAID 10 economically. Further, expanding a RAID 5 with more disks in the future is frequently achievable, while I've not yet heard of it working with RAID 10.
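To put rough numbers on that, with the four 1.5TB drives proposed above: RAID 10 gives (4/2) x 1.5TB = 3TB usable, whereas RAID 5 gives (4-1) x 1.5TB = 4.5TB usable, and every further disk added to the RAID 5 contributes its full capacity rather than half of it.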

I do not believe a RAID 10 will actually show better performance on your system for most uses; if you're putting swap files and so forth on it, then perhaps it is more justified. Your call, I went with RAID 5.

I feel I should summarise: you'll need a RAID card if you want an OS drive that is not in the array, but it can be far cheaper than 400 quid and it'll behave fine. The battery backup you mention leads me to wonder if this is an enterprise solution, but the Raptor rules that out, I think. A separate card should also let you move the array to a new motherboard when the time comes.

Finally, after writing what feels like far too much: how important is it that Windows can access the array? The Linux software RAID (mdadm) is exceptionally good and is definitely cross-distribution. I assume OS X can do this too, since it looks for all the world like a custom Unix. In particular, when things go wrong, troubleshooting within Linux beats the hell out of doing it within the hardware RAID config screen. Windows cannot normally see the array, however.
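For anyone curious, building such an array is only a few commands (a minimal sketch; the device names are examples, not the OP's actual disks):

    # create a 4-disk RAID 10 array (this destroys any existing data on those disks)
    sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    # watch the initial sync progress
    cat /proc/mdstat
    # record the array so it assembles itself at boot (config path as on Ubuntu/Debian)
    sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf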

I cope with this by running Windows in VirtualBox most of the time, where it can quite happily read and write to the software RAID. On the rare occasions when I need Windows to access the hardware directly, there's enough space on the Raptor to copy whatever I want from the RAID anyway.
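If you'd rather the VM saw the array as a disk instead of going through a shared folder, one route (a sketch only; the filename and device here are examples) is a raw-disk VMDK:

    # wrap the raw md device in a VMDK, then attach that file to the Windows VM
    VBoxManage internalcommands createrawvmdk -filename ~/raid.vmdk -rawdisk /dev/md0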

With luck the above makes sense. It would help if we knew exactly what you're doing with your system, since a lot of the above won't apply to uses other than a standard desktop. Do get back to us :)

p.s. Ubuntu read my onboard hardware RAID out of the box without issue. I think the issues arise when you want an operating system installed onto the RAID.
 

I have done a little reading on dmraid after what you said, and it looks like people have been succeeding with it since kernel 2.6.24. I have tried this method in the past without success, which is why I discounted it as an option in the first place. I might have to give it another go! Also, how come those Perc 5/i controllers are so cheap? I am amazed, actually; they seem to have all the functionality I need, yet they are a fifth of the price of the 3ware RAID controllers.


Thanks very much for your contribution. I will go into more detail about how I want to use my system so it's a little clearer:

  • I mean to use multiple OS's on my computer, as I enjoy using Linux day-to-day, but I also occasionally play games, so a Windows partition is needed. I keep all my data on my RAID array and very little is actually on the Raptor itself.
  • I do regular backups onto a separate drive which is then kept off-site, so I didn't see that the redundancy of RAID 5 would bring any benefit. Plus, the performance difference between that and the RAID 0 I am used to looked awful on paper.
  • I have no more free SATA ports on my motherboard; they're all used up. I want more hard drive space, and at a push I could run to 4 free 3.5" bays.
 
Ok, I have looked further into an LSI Perc 5/i controller and I have to say I am quite taken by the idea. The price for the convenience and performance of the unit certainly seems worth it imo. But I still have a couple of questions:

  1. Is it worth running my single, independent disks off the LSI controller alongside the RAID, as opposed to using the SATA controller on the motherboard?
  2. Would I see any performance gains by doing this?
  3. Would I have to re-install the operating systems on those in order to get this to work, or could I just move them over straight away?
  4. Should I format the new RAID partition as NTFS and mount it in Linux, or should I format it in ext2 and use an ext2 driver for Windows?

Thanks for your help.
 
Ok, scrap the Perc 5/i RAID controller. It doesn't have LBA64 and so cannot support partitions over 2TB. It now seems I am back to square one with the controller; the 3ware 9690SA is looking viable again. But the questions in my previous post still stand.
 
Ok, scrap the Perc 5/i RAID controller. It doesn't have LBA64 and so cannot support partitions over 2TB. It now seems I am back to square one with the controller; the 3ware 9690SA is looking viable again. But the questions in my previous post still stand.

Funny, I'm using a Perc5/i and my partition is currently sat at 5TB, no problems at all.
Just use GPT.
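Roughly, from a Linux live CD or wherever (the device name is an example):

    # MBR tops out at 2TiB; a GPT label removes that limit
    sudo parted -s /dev/sdb mklabel gpt
    # one big partition spanning the whole array
    sudo parted -s /dev/sdb mkpart primary 0% 100%

One caveat: 32-bit Windows XP can't read GPT data disks, so check which Windows you'll be booting before committing.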

1. Is it worth running my single, independent disks off the LSI controller alongside the RAID, as opposed to using the SATA controller on the motherboard?
2. Would I see any performance gains by doing this?
3. Would I have to re-install the operating systems on those in order to get this to work, or could I just move them over straight away?
4. Should I format the new RAID partition as NTFS and mount it in Linux, or should I format it in ext2 and use an ext2 driver for Windows?
1-2. Not really any point in doing so; there's very, very minimal performance to be gained.
3. If you plug your card in before moving the boot disks over, you can load the drivers for the RAID card onto your OS; then you should be able to move the disks over and boot, after changing the BIOS to boot from your RAID card.
You might find your RAID card needs to reinitialise the disks, so it's probably best to take a ghost image first just in case (see the sketch below).
4. Use NTFS; Linux support is solid these days (mount example below too).
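For 3 and 4, the Linux side of both is straightforward (the paths and device names here are examples only):

    # take a raw image of the boot disk before the card touches it
    sudo dd if=/dev/sda of=/mnt/backup/bootdisk.img bs=4M
    # mount the NTFS-formatted array read/write via ntfs-3g
    sudo mount -t ntfs-3g /dev/sdb1 /mnt/storage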
 
First off, if you use the onboard controller, you cannot run a RAID array and a normal OS drive at once. I bought a Raptor and 4 storage drives planning on RAID 10 with the Raptor alongside, and it just can't be done on a single controller.

I am hoping to do a similar thing on my new build: my new 750GB SATA2 drive as the boot drive and two old 160GB SATA1 drives in a RAID 0 array. The RAID array will be useful as a backup area for the main disk and a scratch disk for Photoshop and the like.

As far as I can tell, the onboard RAID controller on my Gigabyte nVidia-based motherboard can support some disks in an array and some not. It specifically mentions having a non-bootable RAID array and another, non-RAID drive that hosts the OS. Admittedly it only offers RAID 0, 1, 5 and 0+1, not the RAID 10 the poster was after.
 
I believe what happens is that you put your controller in RAID mode and create a RAID 0 or RAID 1 array with only one disk in it.
 