Stable RAID5 on ICH10R with 3x Samsung F4 2TB HD204UIs, anyone?

Hi all,

I bought 3x WD20EARS for my ICH10R RAID5 setup. That turned out to be a big mistake, and I guess the problem is TLER not being enabled / available on the WD20EARS. They failed twice during initialization, so this is my 4th day trying to initialize the array -.-
I'm using the Intel Rapid Storage Tool 9.6.

I'm looking to trade them in for 3x HD204UIs.
I can't seem to find any reports of problems with this drive regarding CCTL (supposedly the equivalent of WD's TLER). Does anyone know if they have CCTL enabled, whereas the WD20EARS no longer have TLER?
I think this is one of the key factors in making a stable RAID5 (so the drives won't drop out of the array in case of a bad sector).

I hope these drives (with the necessary firmware upgrade of course: http://www.samsung.com/global/busine...bbs_msg_id=386) will result in a stable RAID5 via the onboard ICH10R. I use an Asus P6T SE.

I'm really curious whether anyone has been running a stable RAID5 with the HD204UIs on ICH10R.


PS: What's with this alignment? I understand that with Win7 x64 there is no need to manually change anything to get the partitions aligned properly, yet some people claim that in a RAID array they need to be aligned?
Could anyone explain how and where you make this change, and especially, why?
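For what it's worth, Windows 7 creates partitions at a 1 MiB starting offset by default, which happens to be a multiple of both the 4K physical sector size and common RAID stripe sizes. The alignment check itself is just modular arithmetic; a minimal sketch (the actual offset on a given system you'd read from diskpart or wmic, the 1,048,576-byte figure below is just the Win7 default):

```python
def is_aligned(offset_bytes: int, granularity: int = 4096) -> bool:
    """True if the partition start offset is a multiple of the granularity
    (4096 bytes for 4K-sector drives; use the stripe size for RAID)."""
    return offset_bytes % granularity == 0

# Windows 7 default partition offset: 1 MiB
offset = 1_048_576
print(is_aligned(offset, 4096))        # True  -> aligned to 4K sectors
print(is_aligned(offset, 128 * 1024))  # True  -> aligned to a 128k stripe
# The old XP-style offset of 63 sectors (31.5 KiB) is NOT 4K-aligned:
print(is_aligned(63 * 512, 4096))      # False
```

That would explain why a fresh Win7 install needs no manual tweaking, while people migrating XP-created partitions do.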


Thanks in advance.

Regards,
Kami.

My setup:

Asus P6T SE, Bios v.0808
Intel i7 920 @ 2.66 GHz *stock*
ProlimaTech MegaHalems + 2 Cooler Master120mm fans
6 GB OCZ Gold PC3-8500U and 6 GB OCZ Gold PC3-10700U
nVidia GTX295
Sound Blaster Fatal1ty X-Fi
Cooler Master 1000W Real PowerPro
1x Intel X-25 M SSD 80G (SATA 0) (non-raid)
2x 300GB Maxtor 6L300S0 (SATA 1, 2) (non-raid)
3x 2TB WD20EARS (SATA 3, 4, 5) (RAID5)
Sweex PU102 SATA150 Controller
1x 1TB WD Caviar (PU102-Port 1)
1x GGW-H20L Blu-ray RW (PU102-Port 2)
Cooler Master Stacker 831 case with 6x Cooler Master 120mm fans
Windows 7 x64 Professional
 
Anyone? Sorry for my persistence, but I am really curious what the experience of others is with this kind of setup, and whether I have to make some additional adjustments.

Thus far I've done the following:

I flashed the HD204UIs with the latest firmware and popped them into a RAID5 array via ICH10R.
It's currently running the initialization via the Intel Rapid Storage Tool; I've got them on a 64k data stripe and have enabled Write-Back Cache just for the initialization. I'll turn it off when complete.

Anything I forgot or that is recommended? There is no need for me to do anything with the CCTL settings or with the alignment / 4k sectors?
 
There's no need to align.
RAID5 is more of an ask than RAID0 or 1.
Three drives add more complication than two.
Motherboard RAID is not as good as proper RAID. And by proper RAID I don't mean a 50 quid add-in card, I mean a 200 quid card with a real processor on it.

I always recommend against RAID with non-RAID drives, and this is why. To the best of my knowledge that RAID array you have will degrade from time to time, and there's not a great deal you can do to prevent it.

Additionally, RAID5 with three 2TB drives is not a good idea, especially those drives, as they're not enterprise-class reliable.

If you have a drive failure, then to rebuild, the controller has to read all 4TB on the two surviving drives, and there's a fairly significant chance of hitting an unrecoverable read error somewhere in that, meaning a complete rebuild may fail. I've never experienced this myself, so I don't know a huge amount about it.
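A rough back-of-the-envelope for that rebuild risk, assuming the 1-in-10^14-bits unrecoverable read error (URE) rate typically quoted for consumer drives and independent bit errors (the actual spec for these specific drives may differ):

```python
# Probability of at least one URE while reading the surviving 4 TB
# during a 3-disk RAID5 rebuild.
URE_RATE = 1e-14             # errors per bit read (common consumer-drive spec)
bytes_to_read = 4e12         # two surviving 2 TB drives
bits_to_read = bytes_to_read * 8

p_clean = (1 - URE_RATE) ** bits_to_read
p_failure = 1 - p_clean
print(f"P(rebuild hits a URE) = {p_failure:.0%}")  # roughly 27%
```

Note that TLER/CCTL doesn't change this number; it only makes the drive report the error quickly instead of stalling until the controller drops it from the array.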
 
Thank you for your reply. The main reason I'm using this setup is $$$.
I don't feel like paying 150+ euros for RAID drives and a 200 euro HW card on top.

However, if this fails a moderate number of times (if, by all means), I'm looking to get a Synology DS411+. It seems that since they are dedicated and run on Linux, there is no need for TLER/CCTL-enabled drives (though the drives have to support it, like the HD204UI, which has it, and the WD20EARS, which no longer do); all commands are issued by the firmware of the NAS. Of course, people say the enterprise / RAID drives are better built and thus more reliable, but there seem to be a lot of positive experiences with desktop drives in a dedicated NAS.

Any thoughts on this?
I'm not really sure if my description of how the NAS handles TLER / CCTL is 100% correct. If anyone can correct or confirm me, please do so :)
 
Well,

Here are the test results on the write / read speeds.

After 24 hours of initializing, I was dying to toy around with Stripe Size vs. NTFS Cluster Size. And the results are... surprising.

[Attachment: RAID speeds.jpg — stripe/cluster benchmark table]


I apologize for the large image.

Long story short, my best results on my current setup:
RAID5 / 3 disks (HD204UIs): 128k stripe size with 32k NTFS cluster size, with Write-Back Cache on.
In that configuration:

Seq. Read: 240 MB/s
Seq. Write: 261 MB/s.

I am stunned. I've read threads all over the place of people trying to get the best out of their setup while being stuck at ridiculously low write speeds, and on various forums I read that the optimal setting for a 3-disk RAID5 is stripe size x 2 (3 disks - 1 parity disk) equal to the cluster size; thus RAID5 stripe 32k = 64k cluster.
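For reference, that rule of thumb is just trying to make one cluster equal one full stripe, so a cluster write can update parity without a read-modify-write. The arithmetic is trivial (this is the usual explanation of the rule; how the ICH10R actually coalesces writes may differ):

```python
def full_stripe_bytes(stripe_kib: int, n_disks: int) -> int:
    """Data held by one full RAID5 stripe row: stripe size times (disks - 1),
    since one stripe unit per row holds parity."""
    return stripe_kib * 1024 * (n_disks - 1)

# The forum rule of thumb: NTFS cluster size == full stripe size.
print(full_stripe_bytes(32, 3) // 1024)   # 64  (KiB) -> matches a 64k cluster
print(full_stripe_bytes(128, 3) // 1024)  # 256 (KiB) -> no NTFS cluster matches
```

By that logic a 128k stripe shouldn't win at all, which may suggest the write-back cache is coalescing sequential writes into full stripes regardless of cluster size.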

As you can see in my results, in that config I only get:

Seq. Read: 219 MB/s
Seq. Write: 222 MB/s.

Furthermore, there aren't many big differences between the various combos.
There are 2 combos that suffer heavily from having WBC turned off, but for the most part it didn't matter that much for Seq. Write.

The thing that stood out most and was kind of interesting is that the 64k stripe / 32k cluster with WBC off only gives 25 MB/s Seq. Write, and the 128k stripe / 32k cluster gives around 38 MB/s.
With the further exception of 128k stripe / 64k cluster (168 MB/s), everything is above 200 MB/s Seq. Write, whether WBC is ON or OFF.

So, basically... I'm stunned.

Any comments on this? And maybe some feedback on whether this 240 read / 260 write is good?

*And note that I didn't do *ANYTHING* to manually align partitions or any of that.
This is almost an out-of-the-box install, and frankly one has to try VERY hard to get BAD results, no matter what stripe / cluster combo you choose (see the results).


p.s.,
I didn't bother letting the benchmark go through the entire 4k / 64k tests, as at first glance they never got above 0.8 MB/s.

However, when copying a 4 GB file from my SSD to the array, I got 260-320 MB/s according to Windows, and the file was there in no time.

This practical test was also done to make sure I didn't suffer from cache / benchmark pollution. I did the tests a number of times, and also re-created the entire volume / partition on every try.

I will run a final check with ATTO on the 128k stripe / 32k NTFS cluster combo after initializing and post the results.

Kind regards,
Kami.
 
After 24 hours of initializing, here are my final results:

Onboard ICH10R (Asus P6T SE)
RAID5:
3x HD204UI
128k Stripe
32k Cluster
WBC ON
No Manual Alignment or other tweaks needed.

READ: 251 MB/s
WRITE: 265 MB/s

[Attachment: RAID5.jpg — final ATTO benchmark]


Wonder how long it stays stable ;)

For those interested, here is the data on CPU time vs RAID5 usage:

BLUE LINE: CPU USAGE %
RED LINE: ARRAY WRITE TIME %
GREEN LINE: ARRAY READ TIME %

First, the IDLE load:
[Attachment: CPU vs RAID5 - IDLE.jpg]



Next, the WRITE load, while writing a 3 GB file from my SSD to the RAID5 Array:
[Attachment: CPU vs RAID5 - WRITE1.jpg]

[Attachment: CPU vs RAID5 - WRITE2.jpg]



And finally, the READ load, while playing the DVD "300" from my RAID5 Array:
[Attachment: CPU vs RAID5 - READ.jpg]



As you can see, while:

IDLE: The CPU and ARRAY LOAD is close to 0%.
WRITE: The CPU LOAD is around 10% at its peak (ARRAY WRITE being used 80-100%)
READ: The CPU LOAD is between 5% and 10%. The funny thing is the ARRAY READ is virtually 0; probably because all the data is currently loaded into RAM.


Do note that this was tested on an i7 920 CPU with 4 cores / 8 threads (Hyper-Threading) and 12 GB RAM.

Kind regards,
Kami.
 
This isn't going to be helpful, but please seriously consider a backup instead of RAID. For nearly all home computers this makes far more sense. RAID5 ensures that if one disk fails on you, the data is still available. If two fail or the rebuild goes wrong, then you've had it. More critically, though, if the SATA controller has a fit, then all three drives are down. Or a power supply blows, or good old-fashioned accidental delete. It's great on a server where downtime is not an option, but on a home computer does it really matter if you lose half an hour or so to swapping in the backup disk?

I've gone from a four-disk RAID5 to two disks with data in a computer, and two disks with the same data from last week in a cupboard. It isn't as exciting, and performance is somewhat worse, but short of a fire, I'm not likely to lose my data.
 
Hey Jon,

I am aware that RAID is not a backup.
My important documents are backed up to different drives (non-RAID) and kept in sync with Robocopy. The RAID array is just going to hold movies and music. If it dies, it would be a PITA, but nothing to go really insane over.

I could buy an additional disk and go for RAID1, but I don't like the fact that I'd lose 50% capacity. Same as RAID10 basically, and if both disks of the same mirrored pair fail, you are still SOOL.
RAID0 is IMO the most unreliable, since with two drives you roughly double the chance of losing all your data.

Considering the above, and being averse to storing data on a single drive (if it dies, you are SOOL for sure), I think RAID5 is the best solution for me.
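That "doubled risk" for RAID0 is easy to quantify if you assume drives fail independently (the per-drive failure probability below is made up purely for illustration):

```python
# Failure odds for simple 2-drive arrays under independent drive failures.
p_drive = 0.05  # hypothetical failure probability per drive over some period

p_raid0 = 1 - (1 - p_drive) ** 2  # RAID0 dies if EITHER drive dies
p_raid1 = p_drive ** 2            # RAID1 dies only if BOTH drives die

print(f"single disk: {p_drive:.4f}")
print(f"RAID0:       {p_raid0:.4f}")  # 0.0975, just under 2x a single disk
print(f"RAID1:       {p_raid1:.4f}")  # 0.0025
```

The RAID1 figure is optimistic since it ignores the rebuild window, but it shows why mirroring and striping sit at opposite ends of the reliability trade-off.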
 