Motherboard/CPU/PSU suggestions for 12 drive RAID6 setup

Hi there,

Just bagged 12 x 4TB Hitachi Touro DX3 5400RPM drives for a server setup. It doesn't need to do anything crazy, just run nas4free/freenas as a storage server. I've decided on a case and storage caddies.

So that leaves

Motherboard
CPU
PSU
SATA PCI-E card with 12+ ports (or 2 x 6, or whatever), usable in FreeBSD 8+ as per the nas4free/freenas requirements.


I'm planning to run it headless, but built-in graphics on the CPU would be ideal if the CPU choice isn't compromised. I'm a little stuck on how much CPU/PSU grunt I'll need here, along with how to go about the RAID6. I will be doing the RAID6 array in software. The array will need to at least saturate a gigabit network connection in terms of CPU power. Fast rebuilds where the drives are the bottleneck rather than the CPU would also be a plus.

Power consumption is very important in this too.

TIA :)
 
I don't know how good the non-Solaris implementations of ZFS are these days, but that would be better than RAID6 (probably).

RAID6 gives you less space and more processing (and so slower writes) compared to RAID5. If you are after enterprise-class availability then you shouldn't be building a server like this. If you're just doing it for home use then why do you need RAID6 availability? Run RAID5 and have more disk space or one less disk using power.

You'll hit gigabit speeds for sequential reads no problem. Writes might be a bit slow. You'll struggle as fragmentation increases though because 4TB 5400RPM drives are painfully slow.
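Quick back-of-the-envelope sanity check on that claim (the per-drive throughput figure here is just an assumed ballpark for a 4TB 5400RPM drive, not a measured number):

```python
# Rough check that a 12-drive RAID6 array can saturate gigabit Ethernet
# on sequential reads. Per-drive throughput is an assumption, not a spec.

GBE_PAYLOAD_MBPS = 118        # ~practical payload ceiling of gigabit Ethernet, MB/s
DRIVE_SEQ_MBPS = 100          # assumed sequential read of one 4TB 5400RPM drive, MB/s
DATA_DRIVES = 12 - 2          # RAID6: two drives' worth of capacity go to parity

array_seq_read = DATA_DRIVES * DRIVE_SEQ_MBPS   # idealised striped read, MB/s
print(array_seq_read)                            # 1000
print(array_seq_read > GBE_PAYLOAD_MBPS)         # True
```

Even with a big haircut for real-world overhead, sequential reads have plenty of headroom over the ~118MB/s the wire can actually carry.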
 
The RAID rebuild time on that size will be nuts and a second HD failure when the array is getting thrashed rebuilding will spoil your day.
 
Yep, RAID5 with that number of big, slow, consumer drives is an accident waiting to happen. Even RAID6 would make me nervous.

At least with ZFS (RAIDZ2?) you'd get drive scrubbing, which I don't think you'd get with a software implementation of RAID5 or RAID6.
 

It's more the rebuilds I'm concerned with. Maybe I've misunderstood, and the failure of one further drive during a RAID6/RAIDZ2 rebuild won't cause everything to be lost? That might just have been a dangerous assumption though, oops.

Massive throughput isn't strictly essential in all honesty, writes aren't important at all to me.

EDIT - In response to the above.

My main reason for getting the drives was their power requirements, and I really need the storage space of the 4TB drives. Would 2 x ZFS arrays with, say, 5 drives in each be a better choice? I'm currently making do with 5 x 2TB drives in one HP Microserver and 5 x 4TB in another, both set up as RAID5 arrays, plus another 4TB drive in each... so things are already sprawling. I'm looking to consolidate into one box and get rid of the 2TB drives and the 4TB 7200RPM Hitachi drives I'm already using in those two Microservers.
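For comparing layouts, the usable capacity works out like this (rough numbers only; this ignores ZFS metadata overhead and the TB-vs-TiB difference, and assumes the usual one/two parity drives per vdev):

```python
# Rough usable-capacity comparison for 12 x 4TB drives across a few
# candidate ZFS layouts. Ignores filesystem overhead and TB/TiB rounding.

DRIVE_TB = 4

def usable_tb(drives_per_vdev, parity, vdevs=1):
    """Usable space for `vdevs` parity vdevs of `drives_per_vdev` drives each."""
    return vdevs * (drives_per_vdev - parity) * DRIVE_TB

print(usable_tb(12, 2))            # single 12-drive RAIDZ2: 40
print(usable_tb(6, 1, vdevs=2))    # two 6-drive RAIDZ1 vdevs: 40
print(usable_tb(5, 1, vdevs=2))    # two 5-drive RAIDZ1 vdevs (2 drives spare): 32
```

Note the two 40TB layouts aren't equivalent for safety: the single RAIDZ2 survives any two drive failures, while two RAIDZ1 vdevs lose everything if both failures land in the same vdev.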
 
How often do you expect rebuilds to happen? Is it really worth compromising over that relatively rare possibility if storage capacity is so important?
 

Hopefully never, of course. The last nasty incident was about 20 months ago, when I stupidly rebuilt a RAID5 array onto a drive that was faulty, thinking it had just been a loose connection.

I'm kinda looking for suggestions, I suppose. What would be the best configuration for the 12 drives I have, maximizing capacity but minimizing any potential "I want to kill myself" moments :p

Last time I got a degraded array message, which really was just a loose cable (after my previous experience), I immediately copied all the data off the Microserver, turned it off, checked all the cables, and turned it back on. The array was fine, but I forced a rebuild anyway, with a consistency check afterwards to make sure...

"burn me once" and all that....

Uptime isn't really a concern, data preservation most definitely is though.
 
Unless my maths is wrong, just copying 4TB of data disk to disk would take over 7 hours, and that's assuming 150MB/s, which is probably a bit optimistic. You've then got to add on any overhead from the parity calculations.

That's a long time to have an array in a degraded state.

There's also the problem that drives have a nasty tendency to fail in batches.
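The maths checks out, for what it's worth (the 150MB/s sustained rate is the assumption from the post above, and it's the optimistic end for a 5400RPM drive):

```python
# Reproduces the "over 7 hours" estimate: copying 4TB at an assumed
# sustained 150 MB/s, before any parity-calculation overhead is added.

MB_PER_TB = 1_000_000       # decimal TB, as drive vendors count it
data_mb = 4 * MB_PER_TB
rate_mb_s = 150             # optimistic sustained rate; 5400RPM drives average less

hours = data_mb / rate_mb_s / 3600
print(round(hours, 1))      # 7.4
```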
 
7 hours for an enterprise might be a while (depending on the situation) but for a home user? I'd be more worried about read errors preventing you from rebuilding no matter how much parity you have (although yeah, 6 is much better than 5 at this point).

Obviously I have no idea what this data actually is, but I'd hazard a guess that it is media-related and the fact writes don't matter would indicate it is for watching that media rather than creating it... Usenet has better retention and better backups than you're going to achieve at home and, as you can't watch all your "legally obtained" films at once and certainly not more than 1 or 2 per night, you've got quite a lot of time to rebuild your catalogue...

Bear in mind that RAID is an availability solution, nothing more. If this data is this essential to you, how are you backing it up?
 

I really appreciate the posts, but ideally I'm just looking for a suggestion on the matter at hand. I have a lot of media that isn't obtainable from dodgy sources and really would be a massive headache to obtain again. :)

I realize RAID is an availability solution, and this isn't mission-critical stuff, but it would genuinely upset me if it was lost, so I'd like to do my best within a realistic budget to protect it with a safeguard.

I'm quite content with the media being unavailable for even a week if it meant that it was safe at the end.

Of course it's a game of chance and I'd just like to minimize that chance of loss, understanding that it can never be eliminated fully.

The real issue is that I've run out of space with my current setup. Bolting more stuff onto the Microservers clearly isn't the way forward, and is sillier than running a RAID6 array.

So my question would be: what would a better solution be?

Thanks for the input so far everyone.
 
I'd run ZFS RAIDZ2 or RAIDZ3 in preference to RAID5 or RAID6, mainly because the drive scrubbing should reduce the chances of a rebuild failing.

Thanks. Do you know if RAIDZ2 offers any extra protection during a rebuild over RAIDZ? I'm a little confused as to whether, during a rebuild, all drives are rebuilt or just the replacement, if you get me.

Cheers.
 
RAIDZ1 has one parity drive the same as RAID5.
RAIDZ2 has two parity drives the same as RAID6.
There's also the RAIDZ3 option, which has three.

RAIDZ should be safer than the traditional parity RAID options when using consumer drives. Have a Google for a description of how it works and the claimed advantages, there's no point repeating it all here. The FreeNAS documentation is a decent starting point.
 
bremen1874 said:
Unless my maths is wrong just copying 4TB of data disk to disk would take over 7 hours, and that's assuming 150MB/Sec which is probably a bit optimistic. You've then got to add on any overhead from the parity calculations.
Real-world numbers time: a nearly full (>95%) 2TB 7200RPM drive takes about 48 hours to rebuild in RAID6.
4TB is roughly 96 hours per disk.

Two drives fail? Enjoy an even longer rebuild time because of the additional rebuild IOPs.

2TB seems to be the 'sweet spot' for mechanical 7200RPM disks in terms of rebuild times (considering the likelihood of additional failures). The advantage of going for more disks is that you have more spindles pushing IOPs.
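Scaling that observed 2TB figure linearly to other capacities (linear scaling is an assumption, and 5400RPM drives like the OP's would likely be slower still):

```python
# Scales the quoted real-world figure (a nearly full 2TB 7200RPM drive
# taking ~48h to rebuild in RAID6) linearly with capacity. Linear
# scaling is an assumption; slower 5400RPM drives would take longer.

OBSERVED_TB, OBSERVED_HOURS = 2, 48

def rebuild_hours(capacity_tb):
    return capacity_tb / OBSERVED_TB * OBSERVED_HOURS

print(rebuild_hours(4))   # 96.0 -- i.e. ~4 days per failed 4TB disk
```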
 