RAID Recommendations

I currently have 2 x 3TB WD Reds in RAID 1, but I'm slowly approaching max capacity, so I'm looking to double that to at least 6TB. I'm thinking of buying another 2 x 3TB drives and combining them all into RAID 10 or RAID 6. I'm leaning towards RAID 6, as I like the idea of adding a 5th drive to take the capacity up to 9TB when needed. Are there any major disadvantages of RAID 6 for home use?

My RAID 1 setup is currently hosted by my server's motherboard (Gigabyte GA-Z77X-UD5H). It obviously doesn't support RAID 6, but if I decided to go RAID 10, would you recommend I get a dedicated RAID controller over using motherboard RAID? If so, can anyone recommend a reasonable budget one? I'm looking at around £150 for 8 ports to allow for future expansion.

My alternative is to just buy 2 x 6TB Reds, but looking at the price of them I'm majorly put off; they've gone up nearly £50 in 6 months. I was hoping they would be about £150 by now.
 
RAID 6 requires double parity calculations. These can be offloaded with the right RAID card (difficult within your budget).

RAID 10 is fast and secure but, some may say, a little wasteful, as the usable space is half the total storage of the drives.
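To put numbers on the trade-off, here's a quick Python sketch (my own back-of-envelope, not from any RAID tool) of usable capacity for the layouts being discussed:

```python
def usable_tb(level, drives, size_tb):
    """Rough usable capacity for common RAID levels.

    RAID 1:  mirrored pair, capacity of one drive.
    RAID 10: striped mirrors, half the total.
    RAID 6:  double parity, loses two drives' worth.
    """
    if level == "raid1":
        return size_tb
    if level == "raid10":
        return drives * size_tb / 2
    if level == "raid6":
        return (drives - 2) * size_tb
    raise ValueError(level)

print(usable_tb("raid10", 4, 3))  # 4 x 3TB in RAID 10 -> 6.0 TB
print(usable_tb("raid6", 4, 3))   # 4 x 3TB in RAID 6  -> 6 TB
print(usable_tb("raid6", 5, 3))   # add a 5th drive    -> 9 TB
```

So with four drives the two layouts give the same space; RAID 6 only pulls ahead once you add that 5th drive.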

I would suggest getting a RAID card (HBA) rather than using motherboard RAID, to allow portability. BIOS updates / motherboard upgrades have been known to kill motherboard-based arrays.

For entry level, go for a
  • LSI 9211-8i - Pretty much the de-facto entry level card. Used in many a home storage server (FreeNAS / Solaris etc).
  • LSI 9240-8i - Essentially a 9211-8i with different firmware (RAID 0 / 5 / 10 / 50). The RAID is software-based (on the card), though, so I would not use RAID 5 or 50 on it. I also believe a 'hardware key' is needed for the 9240 to do RAID 5 / 50.
  • Dell Perc H200 (rebadged 9211-8i)
  • IBM M1015 (rebadged 9211-8i - needs cross-flashing to work in non-IBM servers - check the internet for how-tos).

I shop on the US auction site for used server parts when needed. They range from $50 to $130 plus shipping. I personally buy from Hong Kong rather than China; there is talk of QC-failed cards finding their way onto eBay from Chinese sellers (same with network cards). I've not had a dud yet, but shipping can take a bit of time.

Cables are SFF-8087 to 4 x SATA breakout cables. They are one-way, so not interchangeable: you must get them with the SFF-8087 at the host end, like the Supermicro CBL-0188L. These are often called 'forward breakout' cables. Reverse breakout cables are used to connect a server backplane with an SFF-8087 connector to a motherboard's SATA ports.

Hope this helps.
 
Thanks for the info :)

Sounds like you're in the RAID 6 camp, but I just need to get the right RAID card?

The cheapest I can find from a reputable source is the Highpoint RocketRAID RR2720SGL (http://www.highpoint-tech.com/USA_new/series_rr272x.htm), which "seems" to support RAID 6. Are Highpoint any good?

The next cheapest RAID 6 alternative from a UK store is the Silverstone SST-ECS01, but that bumps nearly £100 onto my budget :/

The last option is, like you say, auction sites. I'm always a little unsure of buying hardware off them due to questionable quality and sources, but it seems I can get an LSI MegaRAID 9261-8i for around £110. Worth a punt, you think?
 
I use ZFS, which is software-based RAID. It is one of the core features of the Solaris OS, and it also has implementations in Linux and in storage server appliances like FreeNAS.

The disadvantage is that you usually run it on a separate machine / virtual machine and share the storage over your network to your desktop machine.
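For a flavour of what that looks like, here's a sketch of creating a double-parity pool (raidz2, ZFS's equivalent of RAID 6) on a ZFS box. The pool name and device paths are placeholders; substitute your own:

```shell
# Create a double-parity (raidz2) pool named "tank" from four disks.
# /dev/sdb..sde are example device names - use your own, ideally the
# /dev/disk/by-id/ paths so the pool survives drive reordering.
zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Check pool health and layout.
zpool status tank
```

One caveat: you can't grow an existing raidz2 vdev by adding a single 5th disk the way you're planning with RAID 6; you expand a pool by adding a whole new vdev.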

I have not used either of the Highpoint or Silverstone cards you have listed. Highpoint are low-end consumer RAID cards, and the Silverstone is probably a rebadge.

I would stick with either LSI or Adaptec.

Two other things with RAID 6. First, building, and rebuilding in the event of a failure, is likely to take a very long time. I used to have a Dell H810, which was top of the line in its day, and it took over a day to rebuild a multi-drive RAID 6 array. Second, get a controller with a battery backup or flash cache module: if the card caches data and you lose power / crash, you may lose the cached data. If your data is only archived stuff then it's not so bad running without one, but for important data it is advised to have one.
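As a back-of-envelope check on why rebuilds take so long (my own rough arithmetic, assuming the rebuild is limited by sustained drive throughput):

```python
# Lower bound on rebuild time: the controller must write every
# sector of the replacement drive at least once.
drive_bytes = 3e12      # one 3 TB drive
rebuild_mb_s = 100      # optimistic sustained rate in MB/s, idle array

hours = drive_bytes / (rebuild_mb_s * 1e6) / 3600
print(round(hours, 1))  # roughly 8 hours, best case
```

That's the floor with the array otherwise idle; add normal I/O load and parity recalculation on top and a day-plus rebuild is entirely believable.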

If you cannot go ZFS then I would likely go RAID 10, as it is simple and fast. Different companies use different techniques for calculating parity / storing the metadata, which can make recovery difficult if the card should fail.
 