Adaptec 3805 vs. Highpoint 2740

Sorry, but I think that's poor advice.

We are coming at this from different directions, and to be honest this is not the best forum to post these types of questions, as the vast majority of people responding are from enterprise environments and apply best practices from those environments to their answers, some of which are total overkill for most home environments.

Consumer drives are fine in RAID 0 or in RAID 1. Western Digital claim, "Desktop Class Hard Drives are tested and recommended for use in consumer-type RAID applications (RAID-0 / RAID-1)".

I wouldn't expect them to say anything else when they have other products aimed at that market sector. It would be like Intel recommending an i3 for a small business server rather than pushing E3 Xeons.

You may get away with using them in a RAID 5 or RAID 6 array, or you may not. The more hard drives there are in the array, the greater the chance of running into problems. I think you'd be unwise to build a RAID 5/6 array of 8 consumer drives.

I believe the OP is talking about 4 drives. He was talking about 8 drives when using 2.5" drives, not 3.5" drives. Four drives in RAID 5 should be fine.
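For what it's worth, you can put rough numbers on the "more drives, more risk" point. Here is a back-of-the-envelope sketch, assuming the 1-in-10^14 bits unrecoverable read error (URE) rate that consumer drive spec sheets typically quote, and 2TB drives:

[code]
# Back-of-the-envelope odds of hitting an unrecoverable read error (URE)
# while rebuilding a degraded RAID 5 array. Assumes the 1-in-1e14 bits
# URE rate typically quoted for consumer drives; RE-class drives are
# usually specced an order of magnitude better (1 in 1e15).

URE_RATE = 1e-14   # probability of a URE per bit read (consumer spec)
DRIVE_TB = 2       # capacity per drive

def rebuild_failure_odds(n_drives, ure_rate=URE_RATE, drive_tb=DRIVE_TB):
    # A RAID 5 rebuild must read every bit on all n-1 surviving drives.
    bits_read = (n_drives - 1) * drive_tb * 1e12 * 8
    return 1 - (1 - ure_rate) ** bits_read

for n in (4, 8):
    print(f"{n} drives: ~{rebuild_failure_odds(n):.0%} chance of a URE during rebuild")

# 4 drives: ~38% chance of a URE during rebuild
# 8 drives: ~67% chance of a URE during rebuild
[/code]

Real-world URE rates are arguably better than the spec sheet figure, but the way the risk scales with drive count is the point.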

The main difference between a WD Caviar Black and a WD RE4 is not the MTBF, but TLER.

I disagree. Double the load/unload cycle count is clearly the main difference. The fact that TLER is tuned for arrays in enterprise environments using enterprise-level hardware is another aspect of these drives. Running the RE4s without a cached controller (as the OP was looking at doing software RAID) is just asking for corruption, as the short TLER window means the drive will sometimes give up on a recovery and expect the controller to resupply the data it could not write correctly in the small window available.
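To make that timing window concrete, here is a rough sketch of the interaction; the figures are assumptions for illustration, as the actual timeouts vary by drive firmware and controller:

[code]
# Illustrative sketch of the TLER timing interaction. The values are
# assumptions: desktop drives can spend a minute or more retrying a bad
# sector, TLER-enabled RE drives give up after ~7s, and RAID controllers
# typically drop an unresponsive drive somewhere in the region of 8-30s.

DESKTOP_RECOVERY_S = 120   # worst-case deep recovery on a desktop drive (assumed)
TLER_RECOVERY_S = 7        # typical TLER cap on WD RE-class drives
CONTROLLER_TIMEOUT_S = 10  # assumed controller command timeout

def outcome(drive_recovery_s, controller_timeout_s=CONTROLLER_TIMEOUT_S):
    if drive_recovery_s > controller_timeout_s:
        # Drive is still busy retrying when the controller gives up on it.
        return "controller times out -> drive dropped from the array"
    # Drive reports the failure in time; a RAID layer above it can
    # rebuild the sector from parity/mirror and carry on.
    return "drive gives up in time -> RAID layer recovers the sector"

print("Desktop drive:", outcome(DESKTOP_RECOVERY_S))
print("TLER drive:   ", outcome(TLER_RECOVERY_S))
[/code]

Either way, TLER behaviour is tuned on the assumption that a proper RAID layer is sitting above the drive to finish the recovery.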

I fail to see what using an enterprise drive (or not) has to do with "zero downtime if a drive fails".

Although this could have been clearer, the point was that RE4 drives are less likely to fail in a RAID 5/6 environment, so if you cannot afford to lose data or the downtime of recovering from backup, and you have supportive enterprise-level hardware, then get RE4 drives. If you can stand some downtime to recover from a backup, which most can for a home server, then high-end desktop drives would generally do, judging by real-world reports from users. Of course, I am happy to stand corrected if you personally have experience of using Blacks in RAID 5 and them having issues falling out of the array. I have had this with Greens due to their aggressive power management, but I have not tried it with Blacks.

I don't know how the Adaptec card handles JBOD.

The same as any other non-Adaptec card does, really.

If Enigmo were to use his Adaptec card with its BBU and onboard cache, and was willing to spend the extra 25 quid per drive for the RE4s, then that is probably the best way to go. If using software RAID for the reasons stated previously, then Blacks should be fine, but make sure you have a backup just in case (you should have this regardless, TBH, as RAID is not a substitute for backup).

RB
 
Just to clarify; I was indeed originally referring to 8x 2.5" drives, but if moving to the 3.5" format then I'd stick with 4x 2TB drives, and if I wanted more storage I'd create a separate RAID 5 array. Needless to say, all would of course be backed up to a separate NAS.
Regarding the consumer/enterprise drive compatibility, I am a bit worried about moving from one consumer drive that seems to be having major issues with an enterprise-class controller just to jump to another one. Given the issues presented by the card/drives/caddy, I am going to simplify the whole setup, abandoning it as a reasonable idea poorly executed :) 4x RE4 2TB drives in RAID 5 is a good replacement, and I am willing to spend the extra for the RE editions; I just now have to find them for not-silly money.

Just a couple of points, though:
Running the RE4s without a cached controller (as the OP was looking at doing software RAID) is just asking for corruption, as the short TLER window means the drive will sometimes give up on a recovery and expect the controller to resupply the data it could not write correctly in the small window available.
I have had 4x 500GB RE2s running in software RAID5 for a couple of years now with no issues.

We are coming at this from different directions, and to be honest this is not the best forum to post these types of questions, as the vast majority of people responding are from enterprise environments and apply best practices from those environments to their answers, some of which are total overkill for most home environments.
I know :) But my setup is frustrating in that it usually falls beyond most people's home server setups (retiring my FC backplane simplified things, but I'm a sucker for exotic solutions) but isn't quite enterprise. My problems with this setup were proving difficult for tech support at Adaptec and Supermicro to answer, so I thought I'd try the forums where people might similarly have weird hardware combos.

I'm not convinced that using consumer drives is the cause of the OP's problems.
Part of my reason for abandoning this whole shebang. Three hardware components and I can't pin down the problem with any confidence, so I'm going with what should reasonably work: an enterprise-class controller with four enterprise-class drives.
BTW, I did look into TLER-modifying software for these Toshiba drives, and apparently it is possible, but since it would totally invalidate any warranty and also make the drives work in a fashion they weren't designed to, I feel I'd just be setting myself up for more grief later, so I knocked that idea on the head too.
 
Regarding the consumer/enterprise drive compatibility, I am a bit worried about moving from one consumer drive that seems to be having major issues with an enterprise-class controller just to jump to another one.

Yep, I can see your worries. TBH, when I put the Greens in a RAID 5 and got masses of problems, I scrapped the RAID 5, got some Blacks and put them in a RAID 0. I would have grabbed another Black and built a RAID 5 array to test for you (and for me), but I am looking to upgrade my 4x 1TB Blacks to 2x 2TB Blacks, so getting another 1TB drive would be counterproductive.

Given the issues presented by the card/drives/caddy, I am going to simplify the whole setup, abandoning it as a reasonable idea poorly executed :) 4x RE4 2TB drives in RAID 5 is a good replacement, and I am willing to spend the extra for the RE editions; I just now have to find them for not-silly money.

Sure. As mentioned, a quick search brought them up at 25 quid more than the Blacks, so not too bad per drive. Of course, that is if you are happy to spend over 240 quid on a single drive.

Actually, just doing a quick comparison, I am pretty surprised, as the 2TB Blacks in the UK seem to be around 200 quid, whereas they are around the equivalent of 110 quid over here. Sure, the 20% VAT vs 7% GST makes a difference, but even so....

Just a couple of points, though:

I have had 4x 500GB RE2s running in software RAID5 for a couple of years now with no issues.

That you have noticed. Data corruption on RAID 5 is insidious in that a block of data may be correct but the parity may become corrupt in certain circumstances. If you want to check, then take a backup of your array, pull a disk, replace it with another disk and rebuild. Compare the before and after sets of data. The parity corruption issue is known as the RAID 5 write hole and centers on a failure occurring between the data being written and the parity being written. Only on a rebuild will the issue show itself. Take a look at BAARF (Battle Against Any Raid F - the F being for Five, Four and Fwee :)). There are a number of writeups on the issues there.
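Here is a toy sketch of the write hole using XOR parity and single-byte "blocks": if the power dies after a data block is updated but before the matching parity block is, a later rebuild quietly reconstructs the wrong data and nothing flags it.

[code]
# Toy demonstration of the RAID 5 "write hole" using single-byte blocks.
# Parity is the XOR of the data blocks, so any one lost block can be
# rebuilt by XOR-ing the survivors -- provided the parity is in sync.

def xor_parity(blocks):
    p = 0
    for b in blocks:
        p ^= b
    return p

d0, d1, d2 = 0xAA, 0xBB, 0xCC       # data blocks on disks 0-2
parity = xor_parity([d0, d1, d2])   # parity block on disk 3

# Interrupted write: the new d1 reaches disk 1, but the crash happens
# before the matching parity update reaches disk 3.
d1 = 0x11        # new data landed; parity is now stale

# Later, disk 0 dies. The rebuild trusts the (stale) parity:
rebuilt_d0 = d1 ^ d2 ^ parity
print(f"original d0: {d0:#x}, rebuilt d0: {rebuilt_d0:#x}")
# The rebuilt block is wrong, and nothing in plain RAID 5 detects it.
[/code]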

I know :) But my setup is frustrating in that it usually falls beyond most people's home server setups (retiring my FC backplane simplified things, but I'm a sucker for exotic solutions) but isn't quite enterprise. My problems with this setup were proving difficult for tech support at Adaptec and Supermicro to answer, so I thought I'd try the forums where people might similarly have weird hardware combos.

Join the club. I am in the same position, what with trying to get 22 hard drives running on a desktop motherboard and getting VT-d support working for an ESXi build when there is little info on which boards support it, and even then the information is sometimes just blatantly wrong. Having run a 7x 9GB 15K Cheetah uSCSI array in an external CD-ROM tower case in the bedroom, with home-built fans and jerry-rigged LED activity lights, which I used mainly for unraring newsgroup files (a total waste of the power, but those files didn't half unrar fast :p) 10+ years ago, I know where you stand.

Part of my reason for abandoning this whole shebang. Three hardware components and I can't pin down the problem with any confidence, so I'm going with what should reasonably work: an enterprise-class controller with four enterprise-class drives.

Gut feeling is that the drives are the problem, not because they are consumer-level laptop drives, but for the same reason I had an issue with the Green drives when I started looking at RAID 5. Due to the aggressive power management in the Green and laptop drives, they spin down after a very short inactivity period. This causes issues with them not powering back up before the controller marks them as failed/missing. Desktop non-power-saving drives tend not to have this problem if not set to spin down in the machine's power management options, so they are more reliable and less likely to fall out of an array.
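If you do end up keeping power-managed drives in an array, one mitigation is to stop them spinning down at all. A sketch for a Linux software RAID box, assuming hdparm is installed, the device names below, and drives that honour the ATA standby timer (the Greens' separate head-parking timer needs WD's own wdidle3 utility instead):

[code]
# Disable the standby (spindown) timer on each array member so the drives
# cannot spin down and then miss the controller's timeout window while
# spinning back up. Device names are assumptions; adjust to suit.

import subprocess

ARRAY_MEMBERS = ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd"]

for dev in ARRAY_MEMBERS:
    # hdparm -S 0 sets the ATA standby timeout to "never spin down".
    subprocess.run(["hdparm", "-S", "0", dev], check=True)
[/code]

Note the setting may not survive a power cycle on every drive, so it is worth re-applying at boot.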

Also note that if you hunt around then you may be able to pick up some new 2TB SAS drives for only a little bit more than the RE4s.

RB
 
Also note that if you hunt around then you may be able to pick up some new 2TB SAS drives for only a little bit more than the RE4s.

RB

As a home server user, would I see any differences between a 4xRE4 setup and, for example, a 4x Seagate Constellation ES setup?
 
As a home server user, would I see any differences between a 4xRE4 setup and, for example, a 4x Seagate Constellation ES setup?

Seagate have a rough overview here comparing SAS vs SATA which you may want to take a look at.

One upside is better data integrity checking, but a big downside for the home user may be the fact that a SAS drive may need a SAS controller, so if your SAS controller failed (a possibility, but they are usually pretty robust if cooled reasonably and not DOA) you would have no way of connecting the drives. I said "may need" in the above statement as the Constellation is available with either a SAS or a SATA interface. I will have to defer to someone with first-hand knowledge of the drives to comment on whether you can use them on a SAS controller and then take them out and plug them into a SATA controller and still get all your data. As long as the SAS controller is not providing RAID for the drives, I would have thought it should be fine, but I cannot confirm.

Another big upside with SAS drives is usually their IOPS capability, or Input/Output operations Per Second. SAS drives are usually geared to handle many more transactions per second, as they are aimed at multi-user enterprise environments. For a single user in a home environment the effect of this should be minimal, unless you are firing large volumes of transactions (reads/writes rather than bulk data) at the drives. A database system with lots of small data changes per second would be a fair example.
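For a rough feel for the numbers (the seek times below are assumed typical figures, not measurements), random IOPS on a spinning disk is dominated by the average seek plus half a platter rotation, which is why a 15K SAS drive manages well over double the IOPS of a 7200rpm SATA drive:

[code]
# Rough random-IOPS estimate for a spinning disk: one random I/O costs an
# average seek plus half a rotation. Seek figures are assumed typical
# values for drives of this era, not measurements.

def est_iops(rpm, avg_seek_ms):
    half_rotation_ms = (60000 / rpm) / 2
    return 1000 / (avg_seek_ms + half_rotation_ms)

print(f"7200rpm SATA (~8.9ms seek): ~{est_iops(7200, 8.9):.0f} IOPS")
print(f"15000rpm SAS (~3.5ms seek): ~{est_iops(15000, 3.5):.0f} IOPS")
# ~77 vs ~182 IOPS per drive, before any caching or command queueing
[/code]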

The RE4's interface is 3Gb/s while the Seagate's is 6Gb/s. This really only affects the drives if you are connecting multiple drives to a single eSATA link or are using SAS expanders to connect many more drives to your controller. Note that some SAS controllers are 6Gb/s on SAS only and drop to 3Gb/s for SATA; it may be in a footnote or hidden in the tech spec.
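To put numbers on why only shared links matter (the per-drive throughput is an assumed typical figure for a 7200rpm 2TB drive of this era): SATA/SAS links use 8b/10b encoding, so a 3Gb/s link carries roughly 300MB/s of actual data and a 6Gb/s link roughly 600MB/s.

[code]
# Why link speed only matters on a shared connection. 3Gb/s and 6Gb/s
# SATA/SAS links use 8b/10b encoding (10 line bits per data byte). The
# ~130MB/s per-drive sequential figure is an assumed typical value for a
# 7200rpm 2TB drive of this era.

def usable_mb_per_s(link_gbps):
    return link_gbps * 1e9 / 10 / 1e6   # 8b/10b: 10 line bits per byte

DRIVE_SEQ_MB_S = 130
N_DRIVES = 4

aggregate = N_DRIVES * DRIVE_SEQ_MB_S   # 520 MB/s from four drives together
for gbps in (3, 6):
    link = usable_mb_per_s(gbps)
    verdict = "fits" if aggregate <= link else "saturates"
    print(f"{gbps}Gb/s link: ~{link:.0f}MB/s usable -> {N_DRIVES} drives ({aggregate}MB/s) {verdict} it")

# A single drive never comes close to even 300MB/s, so on its own port
# the 3Gb/s vs 6Gb/s difference is academic; it only shows up behind an
# expander or port multiplier.
[/code]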

Personally, I would either get SAS and be happy to pay for the extra features or wait for WD prices to fall. As the article linked above states, it is only after you start raiding 4+ drives that you will probably start to see a performance increase due to the extra error checking the SAS drives do.

RB
 