Server 2003 RAID5 Performance

I'm just about to quit using my nVidia board's built-in RAID in favour of Server 2003's software-only RAID. Anybody know how much of a performance hit I can expect?
 
TBH, I'm torn as to what to do, in the short-term anyway. I'd like to have proper hardware RAID but there's the issue of cost, and also I'd like my 4x500GB drives in a RAID 5 array, but the rest of my drives as simple discs, which brings the issue of manufacturers and their somewhat 'flexible' definitions of JBOD.
At the mo, I just want rid of the nVidia RAID and want to know if software RAID will be so slow as to be unusable until I get a proper hardware RAID5 card in a few months. I do use the server to stream media to my PVR box, that's prolly the most disk-intensive thing it does.
 
RAID cards vary a lot in price. Just be aware of the difference between the "software" RAID cards and the "hardware" ones; i.e. where the XOR calculations are done.

You could always leave your single disks connected to the motherboard - in fact might be better that way so long as you don't need to transfer data between the RAID5 and the single disks.
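To make the "where the XOR calculations are done" point concrete, here's a rough sketch of the parity maths a RAID5 controller (or the host CPU, with software RAID) has to grind through on every write. Hypothetical 4-drive example, not tied to any particular controller:

```python
# RAID5 parity is just a byte-wise XOR of the data blocks in a stripe.
# With hardware RAID a dedicated XOR engine does this; with software
# RAID the host CPU does, which is where the write penalty comes from.

def xor_blocks(*blocks: bytes) -> bytes:
    """XOR any number of equal-length blocks together."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

# Data blocks on three drives; the parity block lands on the fourth.
d0, d1, d2 = b"\x01\x02", b"\x10\x20", b"\x0f\x0f"
parity = xor_blocks(d0, d1, d2)

# If one drive dies, XORing the survivors with the parity rebuilds it.
rebuilt_d1 = xor_blocks(d0, d2, parity)
assert rebuilt_d1 == d1
```

The same property is why a degraded RAID5 array is so slow: every read of the missing drive means reading all the surviving drives plus the parity and XORing them back together.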
 


Prolly best if I clarify what's going on: what I've got at the moment is four useless discs. I have 6x SATA ports on my 680i board; two are taken up with a mirrored 160GB boot, and the other four with 500GB drives in RAID5, which is all nice. But in my tower I also have 2x250 and 2x300GB drives plugged into a PCI Sil3114 card. Slow and dirty, but it does the job, or at least should do if nVidia hadn't tw@tted about with the board BIOS.

A BIOS upgrade a few months ago meant that no 3rd-party I/O cards with a bootROM would work any more, as there was no longer enough memory left after the nVidia RAID boots. The only way of getting 3rd-party cards to work is to disable the nVidia RAID, and as it's only 'hardware-assisted software RAID' anyway, that now seems like a reasonable option. I /would/ go back to an earlier BIOS revision where the 3rd-party cards worked, but trying that in January killed my RAID array anyway and cost me 3 weeks' recovery time, and those earlier BIOSes were also less stable.
I will be getting a proper RAID5 card eventually, an Areca or an Adaptec 3805/31205, but I can't predict when that will be and in the meantime I have 1.1TB of storage lying idle because of nVidia and their stupid BIOS revisions. Couple that with the fact that I will very soon be dumping these nVidia coasters in favour of more reliable Intel chipsetted boards, and a software-based Srv2003 RAID that I can simply transplant is tempting. If, however, the performance will be so bad as to make it unusable, then I'll have to come up with another option.
 
If you're watching videos, then a software-based RAID5 setup is probably not going to be that bad. However if you're running databases, encoding HD video, etc. then you want to stay very, very far away from it.

The problem you have is, if you RAID the drives in Windows then you'll have to break the array when you get a proper RAID card. Will you have a way to back everything up?
 
If I may ask, why?

See above post :) I'm so done with nVidia and their chipsets. I could reel off a list of whinges and problems that add up to a lot of inconvenience, weeks of downtime, and a lot of swearing, but that's a bedtime thread for another night, children :) And it's not a case of being unlucky with one board: I bought two at the time, one for my games desktop and one for the server (with the intention of having a backup board to recover data through if one broke), and each of those boards has been RMA'd by BFG more than once. So I've had several boards and severe problems with each of them when trying to do anything more advanced than a simple games-playing system. Even that is taking a bump now too; on old BIOSes my E6600 would get to 3.6 without breaking a sweat, and 3.8 on a cold day ;), and now it'll just about make 3.4. Not very good at all.
 

Just. I've upgraded my NAS to have enough space to back everything up to. It took days to copy, but it finished backing up this morning. I do do some video recoding, but it's never HD.
 
First off then, ditch the nVidia board. If you can't get the PCI card's BIOS to boot then you'll likely have the same problem with a RAID card.

If you have any intention of getting a decent RAID card in the future (which from what you said, you do) then make sure your new motherboard has a spare UNIVERSAL PCI-E 8/16x slot. These are different to PCI-E slots intended for graphics cards (even where they have 2-4 of them).

With regards to whether to get the Adaptec 3805 or a high-end Areca, I'd say go for the Adaptec. It's faster and likely cheaper and hopefully doesn't have shoddy customer support like Areca.

I've seen people get around max 15MB/s with onboard RAID5 with drives that can do 90MB/s+ on their own so I've stayed away from it personally.
 
Cool, thanks for the advice. I didn't realise there were PCIe slots specifically for graphics. After harassing nVidia into letting me talk to someone technical rather than a telephone jockey, I got the impression that PCIe 8 and 16x slots would do gfx and i/o, considering only the hardware, but things like bootROMs and drivers may not like it. 15MB/sec is still just about usable but pretty poor seeing as HD Tach gives me an average of 95.2MB/sec read with only 2% CPU usage on this current array. I'll wait a few weeks and splurge on a decent Adaptec then :)
 
Yeah I only found out about universal PCI-E slots recently. Apparently it's down to how the motherboard manufacturer implements the slot. The ones which share bandwidth with the primary graphics slot or support crossfire/SLI tend NOT to be universal PCI-E slots and hence may have compatibility problems with high-end RAID controllers which are sensitive to that.
 
OK, so looks like I'll be looking for a proper server or workstation board or summint rather than a desktop or gaming board I can force to do what I want.
On a related note then, I don't suppose you know how Adaptec define 'JBOD', do you? I ask because I've used RAID cards that support JBOD and their 'JBOD' has been spanning across discs, so e.g. three 300GB HDs in JBOD would be 1x900GB disc presented to the OS. Other cards with 3x300GB HDs in JBOD will present 3x300GB discs to the OS.
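The two 'JBOD' interpretations can be sketched out with three hypothetical 300GB drives (numbers illustrative only):

```python
# 'JBOD' interpretation varies by vendor. Two common meanings:

drives_gb = [300, 300, 300]

# Interpretation 1: spanning/concatenation - one big logical disc.
spanned_gb = sum(drives_gb)   # 900GB volume presented to the OS

# Interpretation 2: pass-through - each drive presented individually.
passthrough = drives_gb       # three separate 300GB discs

# For a spanned volume, the controller maps a logical offset onto
# (drive index, offset within that drive), filling drives in order:
def span_locate(offset_gb, drives):
    for idx, size in enumerate(drives):
        if offset_gb < size:
            return idx, offset_gb
        offset_gb -= size
    raise ValueError("offset beyond end of spanned volume")

# e.g. logical offset 450GB lands 150GB into the second drive:
assert span_locate(450, drives_gb) == (1, 150)
```

The practical difference: with spanning, losing one drive usually takes the whole logical volume with it, whereas pass-through keeps each disc's data independent.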
 
I'd only be guessing, sorry.

Might be a good time to send Adaptec an email with that question. Would be nice to know how quick they respond to queries as if you have post-purchase problems with a RAID controller, it will not half annoy you if they take ages to reply.

If you want 3 x 300GB drives to appear as 1 x 900GB in the OS, you'll deffo get that with RAID 0 anyway, but if that's not what you want then you could always direct-attach those drives to the motherboard. Using drives with no RAID or spanning on an expensive RAID controller is a bit of a waste.

I'm still running tests with my P5Q Deluxe (which has 2 graphics PCI-E slots and a 3rd universal one). Will let you know how it goes because it is an excellent desktop board.
 
To answer your original question concerning the speed of software RAID (Windows RAID 5): I found it sufficient. I was getting over 300 Mb/sec on reads and about 60 Mb/sec on writes. Writes are where you suffer without the XOR processor. Of course the Windows RAID array is fully transferable between boards, whereas hardware RAID calls for retention of the specific card it was created on. The other option is to look at the universal controllers, which are hardware-supported software. Have a look at VST Pro, which is a software front end to a multitude of cards and will enable you to create a RAID 5 array across differing hardware controllers. Might suit your situation best.
 
300 Mb(it) is around 37.5MB/s, so you lose a fair bit on read speeds too, but it should cope (just about) with high-bitrate HD content streaming. With 60 Mb(it) being around 7.5MB/s, writes are painfully slow compared to a decent RAID controller, where write speeds can be amazing.

Did you do any read/write comparisons when reading/writing a few large files or lots of small files? A single hard drive will slow down a LOT when dealing with lots of small files, I wonder how it affects an onboard RAID5 array (my guess even worse).
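For anyone following along, the unit conversion above is just divide-by-eight (8 bits per byte), which is worth spelling out given how often Mb and MB get mixed up in these threads:

```python
# Megabits per second -> megabytes per second: 8 bits to the byte.
def mbit_to_mbyte(mbit_per_s: float) -> float:
    return mbit_per_s / 8

print(mbit_to_mbyte(300))  # 37.5 MB/s - the quoted read figure
print(mbit_to_mbyte(60))   # 7.5 MB/s - the quoted write figure
```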
 
I'll give HD Tach a run over the array and post up some results. Read and write for the nVidia RAID, software RAID5, and whatever card I end up getting.
Pandobear: VST Pro sounds interesting. I'll have a look, thanks :)
 
Ah...
Good thing: Ciprico's VST Pro software does indeed look brilliant and very cheap considering what it does.
Bad thing: Ciprico went bankrupt a month ago.

I'll bite the bullet, wait with this nVidia RAID another few weeks and get a proper Adaptec card. I've sat with unusable drives for nearly six months now, what's another one? Thanks for everyone's replies, though.
For future reference, when I do get this card, I'll benchmark the array for nVidia, S2003 RAID5, and Adaptec and post results.
 
Oh, and Adaptec just replied, so that's a turnaround time of 26hrs 11mins :) And apparently the Adaptec cards do do 'true' JBOD, where 1 physical drive = 1 logical drive presented to the OS, not any kind of spanning.
 