PCI Bus Saturation - Would it happen in this scenario?

Permabanned
Joined
18 Jan 2005
Posts
1,108
Scenario: Require a basic File Server for home use with redundancy and preferably scalability, but want the stuff to be accessible as quickly as possible.

Objective: For as little as possible, build a file server using my existing 2 x 160GB SATA1 hard disk drives and an existing old Compaq Deskpro Pentium 3 machine, which is more than adequate for my needs.

The solution I plan is to use an old Compaq machine with about 512MB of RAM, a Pentium 3 at around 1000MHz and a couple of PCI slots. What I want to do is buy a PCI RAID controller card that slots into this machine and provides 4 SATA ports with the ability to run RAID5. This type of card costs around £90. I want RAID5 because then I can just add a couple more cheap 160GB drives, giving me 4 drives and 480GB of usable storage. It's also scalable, as RAID5 on these controller cards can be expanded, and the array rebuilds after a drive failure as soon as the new drive is plugged in.

PROBLEM:

1: Because the machine I'd use does not have gigabit ethernet (and I want it to!) I would have to use a PCI gigabit ethernet card. The PCI bus is shared and peaks at 133MB/s. The PCI gigabit card would eat up over 90% of this bandwidth during file transfers, as gigabit ethernet can move up to 125MB/s. So I assume that if I also plug in a PCI RAID controller card with 4 SATA drives on it, then when accessing/transferring files the two cards fighting over the shared bus would bottleneck the file server and cause a huge slowdown?
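Rough back-of-envelope figures I'm assuming there (a standard 32-bit/33MHz shared PCI bus, gigabit at its 1Gbit/s line rate - theoretical peaks only):

[code]
# Theoretical peaks only - real-world numbers will be lower
PCI_BUS_MBPS = 32 * 33.33e6 / 8 / 1e6   # 32-bit @ 33 MHz shared PCI bus ~= 133 MB/s
GIGE_MBPS    = 1e9 / 8 / 1e6            # gigabit ethernet line rate    ~= 125 MB/s

# A file transfer from the array to the network crosses the shared bus roughly twice:
# disks -> host memory, then host memory -> NIC.
print(f"PCI bus peak: {PCI_BUS_MBPS:.0f} MB/s")
print(f"GigE peak:    {GIGE_MBPS:.0f} MB/s ({GIGE_MBPS / PCI_BUS_MBPS:.0%} of the bus on its own)")
[/code]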

2: If I plug in JUST the PCI RAID controller card with 4 SATA drives on it in RAID5, will this by itself saturate the PCI bus? SATA1 = 150MB/s peak transfer rate per port. Does each individual drive count towards the MB/s used in a RAID5 config, i.e. 4 drives = 4 x 150MB/s peak = 600MB/s?
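To put numbers on what I mean (150MB/s is only the interface limit; the 55MB/s sustained figure is just my guess for drives like mine):

[code]
# Interface ceiling vs. a guessed sustained platter speed
SATA1_PORT_MBPS   = 150   # SATA1 per-port interface peak
ASSUMED_DISK_MBPS = 55    # rough sustained read guess for a ~160GB drive
DRIVES            = 4

print(DRIVES * SATA1_PORT_MBPS)    # 600 MB/s "on paper"
print(DRIVES * ASSUMED_DISK_MBPS)  # ~220 MB/s from the platters - still above the 133 MB/s bus, if they add up like that?
[/code]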


Worried about this whole thing being pointless, but then why would such PCI SATA controller cards exist?


Any help much appreciated.
 
Associate
Joined
5 Jun 2005
Posts
987
Location
Leicestershire
sniper007 said:
Worried about this whole thing being pointless, but then why would such PCI SATA controller cards exist?

They exist for the R in RAID: redundancy. It depends on the card you use (particularly for RAID5), but I'd have thought you'd still be likely to largely saturate the PCI bus with 4 drives in RAID5 - though, as I say, it really does depend on the controller, and you might find that it doesn't in your case. I'd not be too worried about the difference between gigabit peak throughput and PCI bus throughput; they're sufficiently similar that it won't make much difference in practice. (You don't get the theoretical peak on the PCI bus anyway - in practice you're more likely to see about 110-125MB/sec or so.)

So, what you're doing is not pointless *if* your goal is high availability with some level of performance increase. :) But RAID5 really only works well IMHO if you get a *good* card for it... (which the cheap ones usually aren't...)
 
Permabanned
OP
Joined
18 Jan 2005
Posts
1,108
sniper007 said:
Worried about this whole thing being pointless, but then why would such PCI SATA controller cards exist?


Any help much appreciated.

Sorry, I should have made myself clearer - I meant that I'm worried it will be unusably slow. It needs to be quickly accessible and have redundancy. I'm wondering why such cards exist when they seem to so easily promote saturation of the PCI bus. I would only do this if I could get by with a card costing no more than £100 - I'd never pay more than that for a RAID controller card. I assume the ones I'm looking at have a built-in CPU, given the price and the claim of doing hardware RAID5 on their own.

I understand I could find a server board with a 66MHz PCI bus, but that's going to start adding to the cost again, finding a server mobo and CPU etc.
 
Associate
Joined
5 Nov 2003
Posts
1,035
Location
Leeds
A £90 RAID5 card will use the host CPU for all the write parity calculations and will be slow. I tried it with both an on-board SiL controller and a HighPoint card and got 20MB/s writes (but 80MB/s reads). If you want sustained reads or writes higher than those figures it will cost a lot more.
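If it helps to picture where the write overhead comes from: every stripe's parity block is just the XOR of the data blocks, and on a cheap card that XOR (plus the read-modify-write for small writes) lands on the host CPU. A quick sketch of the idea:

[code]
# Minimal illustration of RAID 5 parity: the parity block is the XOR of the data blocks.
def xor_blocks(*blocks: bytes) -> bytes:
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"   # data blocks on three drives
parity = xor_blocks(d0, d1, d2)          # parity block on the fourth drive

# If the drive holding d1 dies, its data can be rebuilt from the survivors:
assert xor_blocks(d0, d2, parity) == d1
[/code]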

An alternative is a Thecus N5200 NAS, which runs a RAID5 array and can give 35-40MB/s over a gigabit network.
 
Permabanned
OP
Joined
18 Jan 2005
Posts
1,108
Bomag said:
A £90 RAID5 card will use the host CPU for all the write parity calculations and will be slow. I tried it with both an on-board SiL controller and a HighPoint card and got 20MB/s writes (but 80MB/s reads). If you want sustained reads or writes higher than those figures it will cost a lot more.

An alternative is a Thecus N5200 NAS, which runs a RAID5 array and can give 35-40MB/s over a gigabit network.

I take it you mean the processor of the computer/server, as opposed to a built-in CPU on the card there? I'm starting to go off this whole idea now and thinking I'll stick to RAID1. I just hate the capacity return compared to RAID5, i.e.:

RAID 5:
160GB HDD x 4 = £160
Capacity = 480GB

RAID 1:
500GB HDD x 2 = £300
Capacity = 500GB

OK, so a controller card adds cost... obviously too much for my needs. I just would have liked the benefits of RAID5.
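Cost per usable GB works out roughly like this (drive prices as above, plus the ~£90 card from my first post for the RAID5 option):

[code]
# Rough £ per usable GB for the options above
options = {
    "RAID5, 4 x 160GB drives only": (160.0, 480),
    "RAID5, drives + ~£90 card":    (250.0, 480),
    "RAID1, 2 x 500GB":             (300.0, 500),
}
for name, (cost, usable_gb) in options.items():
    print(f"{name}: £{cost / usable_gb:.2f} per GB ({usable_gb}GB usable)")
[/code]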
 
Associate
Joined
5 Jun 2005
Posts
987
Location
Leicestershire
sniper007 said:
RAID 5:
160GB HDD x 4 = £160
Capacity = 480GB

RAID 1:
500GB HDD x 2 = £300
Capacity = 500GB

OK, so a controller card adds cost... obviously too much for my needs. I just would have liked the benefits of RAID5.

How about a compromise: RAID 0+1 (or 1+0 aka 10):
160GB x 6 = £240
Capacity = 480GB

Only 80 quid more (the price of your controller), then you only need a card that can do RAID0+1, which should be cheaper. Just a thought.
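Capacity-wise it's just a question of how many drives survive the mirroring or parity - a rough sketch of the three layouts discussed:

[code]
# Usable capacity sketch for the layouts in this thread
def usable_gb(level: str, drives: int, size_gb: int) -> int:
    if level == "RAID5":
        return (drives - 1) * size_gb      # one drive's worth lost to parity
    if level in ("RAID1", "RAID10"):
        return (drives // 2) * size_gb     # half the drives are mirror copies
    raise ValueError(level)

print(usable_gb("RAID10", 6, 160))   # 480 GB from 6 x 160GB
print(usable_gb("RAID5",  4, 160))   # 480 GB from 4 x 160GB
print(usable_gb("RAID1",  2, 500))   # 500 GB from 2 x 500GB
[/code]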
 