RAID controllers - What's the difference?

So like many I'm reading these forums and have come across PCI RAID controller cards and onboard RAID being discussed.

I've heard comments like "RAID uses X amount of bandwidth, I've got 5 HDDs so PCI is better". HUH? *scratching head*

I've also heard people mention that using a PCI card helps when it comes to mobo upgrade time, but how? And what would happen if you didn't have a card?

So which is better, onboard or a PCI card?
 
Which is better will depend on your requirements.

The controller has to be connected to the CPU/memory somehow, and each method has a bandwidth restriction which has to be taken into consideration when deciding on the best solution to use. A decent single HDD will shift data at about 80MB/s, a 2 disk RAID0 array can top out at 150MB/s, and my 8 disk RAID5 array (with pretty old disks) peaks at over 250MB/s, so you need to think about how many disks will be attached to the controller and hence what's the best method of connecting it to the system.

The onboard controllers from Intel and NVidia are integrated into the mobo chipsets and so sit directly on the system bus, so for all intents and purposes they have no real bandwidth limitation. It's worth noting that add-on mobo controllers from JMicron, SI, etc. will be connected as either a 1x PCIe device or, on older boards, a plain PCI device. PCIe gives you 250MB/s per lane, so 250MB/s for a 1x slot, 1GB/s for 4x, etc. Plain old PCI is limited to 133MB/s which is shared across the whole bus, so the chances of getting the full 133MB/s are slim.
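If it helps, here's a rough back-of-envelope sketch of that maths in Python. The 80MB/s per-disk figure and the PCIe 1.x/PCI bus limits are just the assumptions from this post, not measurements from any particular setup:

```python
# Back-of-envelope check: does a given array saturate its bus?
PER_DISK_MBPS = 80  # assumed sustained throughput of one decent HDD, MB/s

BUS_LIMITS_MBPS = {
    "legacy PCI (shared)": 133,
    "PCIe x1": 250,
    "PCIe x4": 1000,
}

def array_throughput(disks, per_disk=PER_DISK_MBPS):
    """Optimistic striped-array throughput: all disks streaming in parallel."""
    return disks * per_disk

for disks in (1, 2, 3, 8):
    need = array_throughput(disks)
    fits = [bus for bus, limit in BUS_LIMITS_MBPS.items() if limit >= need]
    print(f"{disks} disk(s) need ~{need} MB/s -> OK on: {', '.join(fits) or 'none listed'}")
```

Running that shows the point below: even a 2 disk RAID0 (~160MB/s) is already over what a shared PCI bus can realistically deliver.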

Looking at those numbers, you don't really want to put a 2 disk RAID0 array on a PCI card/device as it will be slightly throttled; not greatly so, but it's not the ideal solution. You could get away with more disks as individuals if they're not being accessed simultaneously, but there are better options. PCIe gives plenty of bandwidth, especially with 4x & 8x cards, but you need a mobo with enough lanes/slots to support these.

There is a certain logic in using an add-in card: I've just moved my RAID5 array from an NVidia based XP box into an Intel based Vista box with no hassles whatsoever. There is a financial penalty for this of course, but sometimes it's worth it.

Generally I'd recommend onboard RAID for your normal 2 disk RAID0/RAID1 solutions; yes, you have to reinstall everything if you move to another machine, but it's easier to deal with than an add-in card. For more complex RAID5/6 configs an add-in card is essential: the performance (especially writes) will be better and there will be support for hot spares, capacity expansion, etc.
 
rpstewart said:
PCIe gives you 250MB/s per lane, so 250MB/s for a 1x slot, 1GB/s for 4x, etc. PCIe gives plenty of bandwidth, especially with 4x & 8x cards, but you need a mobo with enough lanes/slots to support these.

Cool.

Next questions then. When you say PCIe gives 250MB/s per lane, does that mean you need one card for every HDD? Or, if you don't need one card for every HDD, does adding more HDDs to the one card mean the total bandwidth is shared between the HDDs?

The reason I'm asking is because I'm thinking of setting up RAID0 with 3 HDDs and want to get the best performance.
 
The 250MB/s is per PCIe lane between the slot and the CPU. You can have multiple lanes to each slot - 1x slots have 1 lane, 4x slots have 4 lanes and hence 1,000MB/s of available bandwidth, etc.

You can, of course, have multiple HDDs attached to each controller card and these then share the bandwidth available to the controller. A 1x slot is plenty for 4 disks, 4x will easily cope with 16 (although you'll only find 8 port controllers).
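As a rough illustration of that sharing (again assuming ~80MB/s per disk and 250MB/s PCIe 1.x lanes, and the worst case where every disk streams flat out at the same time - in practice, as above, a 1x slot copes with more):

```python
# Worst-case sketch: how many HDDs can stream flat out through one slot
# before the slot's lanes become the bottleneck? Assumes PCIe 1.x lanes
# (250 MB/s each) and ~80 MB/s per disk; real workloads rarely hit this.
LANE_MBPS = 250
PER_DISK_MBPS = 80

def disks_before_throttling(lanes):
    return (lanes * LANE_MBPS) // PER_DISK_MBPS

for lanes in (1, 4, 8):
    print(f"x{lanes} slot ({lanes * LANE_MBPS} MB/s): "
          f"~{disks_before_throttling(lanes)} disks streaming simultaneously")
```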

To be honest, for a 3 disk RAID0 array I'd be sticking with an onboard controller if you have one.
 
rpstewart said:
To be honest, for a 3 disk RAID0 array I'd be sticking with an onboard controller if you have one.

Your response has helped clear things up, so final question: would I see a performance increase from using a PCIe card with 3 drives compared to 3 drives on the onboard controller?
 
ChemicalKicks said:
Your response has helped clear things up, so final question: would I see a performance increase from using a PCIe card with 3 drives compared to 3 drives on the onboard controller?

Only if the onboard controller is on the legacy PCI bus (i.e. not the PCI-E bus). The PCI bus has a theoretical limit of 133MB/sec, so in that case a PCI-E controller would have more bandwidth to play with.
 
silversurfer said:
How does software RAID or dynamic drive arrays fit into that comparison? CPU use is high, but can the speed be OK?

There won't be much difference between OS controlled arrays and onboard ones. RAID0/1 performance will still be reasonable and RAID5 write performance will still be poor.
 