Ubuntu and RAID

Ahhhh, too many people to quote :p

First off, let me explain the reasoning for the statement below.
But RAID 0/5 should be a no-no with any data you value.
Here I'm referring to software RAID at the partition level, i.e. you've created x amount of partitions over 2 disks and created a RAID 5 disk with that. This is an I/O nightmare for the disks!

Admittedly I'm not up to date on the current level of development/capabilities of software RAID solutions, but everything I've seen points to adding more points of failure to the loop and degrading disk performance.
 
Here I'm referring to software RAID at the partition level, i.e. you've created x amount of partitions over 2 disks and created a RAID 5 disk with that. This is an I/O nightmare for the disks!
Agreed, but if you're running a RAID array with two or more components on the same physical disk you're Doing It Wrong (tm). That's not a fault of software RAID; if anything it's an example of how flexible it is - especially if, as BigglesPiP suggested, you want to do a bit of short-term experimentation and don't care about performance.
Admittedly I'm not up to date on the current level of development/capabilities of software RAID solutions, but everything I've seen points to adding more points of failure to the loop and degrading disk performance.
The 551.2MB/s average (668.7 max, 452.4 min) read rates from my mdadm RAID5 array, all with unnoticeable CPU usage, suggest performance is very much not an issue. Hardware RAID controllers are an extra point of failure, and their absence in the case of software RAID is very much one *less* point of failure.
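If you want to reproduce that kind of measurement, dd gives a quick sequential-read figure. The sketch below reads a scratch file so it runs anywhere; on a real array you'd point `src` at the md device (e.g. /dev/md0 - the name is an assumption) and read a few GiB so the page cache doesn't flatter the numbers.

```shell
#!/bin/sh
# Quick sequential-read timing with dd. Reads a 64 MiB scratch file
# here for illustration; on a real array set src=/dev/md0 (or your
# device node) and raise count to cover a few GiB.
src=$(mktemp)
dd if=/dev/zero of="$src" bs=1M count=64 2>/dev/null
sync
dd if="$src" of=/dev/null bs=1M 2>&1 | tail -n 1   # last line shows throughput
rm -f "$src"
```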
 
The 551.2MB/s average (668.7 max, 452.4 min) read rates from my mdadm RAID5 array, all with unnoticeable CPU usage, suggest performance is very much not an issue. Hardware RAID controllers are an extra point of failure, and their absence in the case of software RAID is very much one *less* point of failure.


Interesting, what sort of setup have you got for that sort of throughput? I'm gonna have a look into mdadm :)
You could argue the RAID card is the same point of failure as the on-board disk controllers, only you wouldn't have to replace the motherboard in the event of a failure - but that's just being pedantic :p
 
Interesting, what sort of setup have you got for that sort of throughput? I'm gonna have a look into mdadm :)
Eight Samsung 1TB F3s, four connected to the six-channel on-chip Intel SATA controller (Asus P6X58D-E motherboard - http://www.overclockers.co.uk/showproduct.php?prodid=MB-399-AS&groupid=701&catid=5&subcat=1692) and four connected to a pair of Startech 2-port SATA controllers (1x PCI-Express - http://www.overclockers.co.uk/showproduct.php?prodid=CC-004-SR&groupid=701&catid=49&subcat=424).

The eight drives are configured as a single mdadm RAID5 array, so capacity is 7TB. It's formatted as a single ext4 partition which gives 6.3TiB of usable space.

The operating system is on a separate Samsung 1TB F3, connected to the motherboard's Marvell 2-port SATA controller (same chip as the Startech controllers).
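For anyone wanting to try a similar setup, it boils down to only a couple of mdadm commands (a sketch, not a recipe: /dev/md0 and the sd[b-i] device names are assumptions, and --create wipes the member disks):

```shell
# Build one RAID 5 array from eight whole disks (whole disks, not
# multiple partitions on one disk - see the discussion above).
mdadm --create /dev/md0 --level=5 --raid-devices=8 /dev/sd[b-i]

# Watch the initial sync progress, then check the finished array
cat /proc/mdstat
mdadm --detail /dev/md0

# Single ext4 filesystem on top; 8 x 1 TB minus one disk of parity
# leaves 7 TB, i.e. roughly 6.4 TiB before filesystem overhead
mkfs.ext4 /dev/md0
```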

...to replace the motherboard in the event of a failure but that's just being pedantic :p
True, but there's also the option of buying add-on controller cards as I have (though that was due to a lack of onboard ports), and here any controller is pretty much as good as any other, so there's no vendor lock-in :p.
 
Eight Samsung 1TB F3s, four connected to the six-channel on-chip Intel SATA controller (Asus P6X58D-E motherboard - http://www.overclockers.co.uk/showproduct.php?prodid=MB-399-AS&groupid=701&catid=5&subcat=1692) and four connected to a pair of Startech 2-port SATA controllers (1x PCI-Express - http://www.overclockers.co.uk/showproduct.php?prodid=CC-004-SR&groupid=701&catid=49&subcat=424).

Any thought of moving your drives to a single HBA like the LSI 1068 (8-port PCI-e x8)? You can get a Dell Perc 6/ir (based on the LSI 1068 controller) from eBay for around 80 quid plus cables. Do not buy Adaptec cables though, as they seem not to work - as I found to my insanity while trying to get the controller to work.... There are places selling the LSI 1068 new without cables for 95 USD. I have just bought two and they are great for software RAID where you want to add lots of drives. The cables are not stunningly expensive either.

Ahhhh, too many people to quote :p

First off, let me explain the reasoning for the statement below.

Here I'm referring to software RAID at the partition level, i.e. you've created x amount of partitions over 2 disks and created a RAID 5 disk with that. This is an I/O nightmare for the disks!

Yeah, agreed. A single drive with multiple partitions being made into a RAID array is not the best idea ;).

RB
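To make the anti-pattern concrete, this is the sort of layout being warned against (a sketch; device names are assumptions) - two of the "members" live on the same spindle, so every stripe read or write makes that one disk seek repeatedly:

```shell
# DON'T do this: two partitions on the same physical disk used as
# separate RAID 5 members. Redundancy is also an illusion here -
# losing sda takes out two members at once and the array with it.
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
    /dev/sda1 /dev/sda2 /dev/sdb1
```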
 
Any thought of moving your drives to a single HBA like the LSI 1068 (8-port PCI-e x8)? You can get a Dell Perc 6/ir (based on the LSI 1068 controller) from eBay for around 80 quid plus cables. Do not buy Adaptec cables though, as they seem not to work - as I found to my insanity while trying to get the controller to work.... There are places selling the LSI 1068 new without cables for 95 USD. I have just bought two and they are great for software RAID where you want to add lots of drives...
As far as I can tell that would not improve my setup in any way, while costing money and also meaning that the cables would have to exit and then re-enter the case (drives are in the case, see http://forums.overclockers.co.uk/showthread.php?p=18007946#post18007946). Correct me if I'm wrong though. :)
 
If you know of a relatively cheap card (without any RAID) that uses JMicron controllers to give 8 internal SATA ports I would really like to hear about that though; it would a) let me consolidate things all onto one card and moreover would b) allow me to use Fedora rather than Kubuntu (the Fedora kernel doesn't seem to detect the Marvell chips used on the current controllers at all, whereas Kubuntu does, and I know that both work with JMicron chips from my previous motherboard).
 
As far as I can tell that would not improve my setup in any way, while costing money and also meaning that the cables would have to exit and then re-enter the case (drives are in the case, see http://forums.overclockers.co.uk/showthread.php?p=18007946#post18007946). Correct me if I'm wrong though. :)

Err, exit and re-enter?? If you get a Dell 6/ir (two internal 4-port connectors) or an LSI 1068-based card like the LSI SAS3081E-R PCI-E Express SAS/SATA Host Bus Adapter then it is all internal. You put all your storage on the one 8-lane PCI-E card and save yourself a slot. You can also have more room on the motherboard SATA controllers, depending on what you want to put where. Another option is to go the BackBlaze way and use SATA expanders. Parts list is included in the link.

If you know of a relatively cheap card (without any RAID) that uses JMicron controllers to give 8 internal SATA ports I would really like to hear about that though; it would a) let me consolidate things all onto one card and moreover would b) allow me to use Fedora rather than Kubuntu (the Fedora kernel doesn't seem to detect the Marvell chips used on the current controllers at all, whereas Kubuntu does, and I know that both work with JMicron chips from my previous motherboard).

Not that I have tested but the LSI 1068 works fine with Fedora 14 as that is what I am using. I just need a PCI card that works with my motherboard and I will have 20 drive bays hooked up (only around 10 populated so far though :D). This is more or less the setup I am working on (not my actual server).

RB
 
Err, exit and re-enter??
My bad, the first image I found was one with external connectivity and for whatever reason I just stopped there.

The Supermicro AOC-SASLP-MV8, 8-Port SAS/SATA Card is the most appropriate card I've found yet in that it's just about within budget, but I a) haven't found a UK supplier with stock so would have to pay for delivery from the US and b) it uses a Marvell chip, which is exactly what I'm trying to get away from (though one description of it does state that Fedora 14 supports it).

The LSI SAS 3081E-R is just too much.

So to further refine my requirements:
PCI-E card with 8 ports, SAS connectors accepted, compatibility with all modern Linux distros required, SiS chipsets preferred but anything that is known to work accepted. Budget £100 max. Prefer not to have RAID support at all.
 
Lizard: I agree with you re the fakeraid/SATA-RAID "controllers" you see on motherboards. But Linux RAID isn't going to disappear on you unless you do something stupid, which you can do with any hardware array. You should have a UPS with Linux RAID, although you can nearly always recover the filesystem from a power cut, even mid-write.

Any data you value should be kept on more than a RAID array, the RAID just protects data in the very short term from hard disk failure and helps you achieve uptime through hard disk failures.

Data you value should be on an array and archived off site.

As for levels, RAID 0 should not be used (said the man with the RAID 0'd Barracudas). If you value performance, go for RAID 1 or RAID 10. If you value capacity, go for RAID 5 or RAID 6.
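For capacity planning, the trade-off between those levels works out like this (a sketch; 8 drives of 1 TB each, matching the array earlier in the thread):

```shell
#!/bin/sh
# Usable capacity for an array of `drives` disks of `size` TB each,
# one line per RAID level discussed above.
drives=8
size=1
echo "RAID 0:  $(( drives * size )) TB (no redundancy - don't)"
echo "RAID 1:  $(( size )) TB (everything mirrored)"
echo "RAID 10: $(( drives * size / 2 )) TB (striped mirrors)"
echo "RAID 5:  $(( (drives - 1) * size )) TB (one disk of parity)"
echo "RAID 6:  $(( (drives - 2) * size )) TB (two disks of parity)"
```

The RAID 5 line gives the 7 TB figure quoted for the eight-drive array above.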

You say it adds more points of failure, but I really don't see where; hardware RAID controllers have a software stack too...
 
My bad, the first image I found was one with external connectivity and for whatever reason I just stopped there.

Yeah, there are a few versions :).

The LSI SAS 3081E-R is just too much.

There are a couple of Dell Perc 6/ir cards on 'that auction site' for 80 quid plus 15 quid shipping. They come from China but I bought one from these guys and had no issues. Fast delivery and when I got the right cables, worked like a charm. The Perc 6/ir uses the same LSI controller as the LSI 3081E-R but different cables. Don't get Adaptec cables for this card as they do not work (or didn't for me and almost drove me mad until I got some other ones).

You would need to get the cables for it. I got the DeLock SFF8484 to 4xSATA which came in at 10 quid each (free shipping from Germany to the UK). Would take you slightly over budget.

So to further refine my requirements:
PCI-E card with 8 ports, SAS connectors accepted, compatibility with all modern Linux distros required, SiS chipsets preferred but anything that is known to work accepted. Budget £100 max. Prefer not to have RAID support at all.

I also have an Adaptec 1405 4-drive HBA card, which uses the same cables as the LSI controller (mini-SAS) and has no RAID functionality, but that was around 115 quid from an internet book store.

Cables (SFF8087 x 2) @ US$16 each (plus shipping @ ??)
LSI SAS 3081E-R @ US$96 (free world wide shipping from Hong Kong)

Total: US$128 so around 80 quid

I bought two sets of the LSI and one set of the Dell and had no issues with delivery etc.

Drop me a mail and I will let you know where the LSI cards are for that price. Obviously I cannot link here. Not really an OCUK competitor, as they usually just sell lots of junky things like USB cup warmers, but they seem to have got these cards at a very good price.

RB
 
Thanks RimBlock, but I'm ending it here because my current setup does the job, albeit constraining me to Ubuntu. My posts were more out of idle curiosity/research for the future than any desire to immediately replace the Marvell controllers - I can't justify spending time fixing something that isn't broken.
 