michael baxter said:If you want improved bandwidth, you may be better going the RAID 0 route, and get yourself 2 or more identical drives.
Agreed, RAID 0 will give you better bandwidth from the disks, but that is wasted if the controller is strangled by the bus it is connected to. If, like the OP, you don't have onboard RAID support then you're limited to using add-in cards, which means bus limitations. PCIe not only provides more bandwidth but is a more future-proof solution in a world where PCI slots may soon be a thing of the past.
rpstewart said:The PCIe cards are certainly worth looking at as a way round the bottleneck of the PCI bus. A 1x card (the 2 port one) can run up to 250MB/s, whereas the 4 port card is 4x and hence can go to 1GB/s. The problem with the 4x card is that naro would have to use the 16x PCIe slot, thereby removing the possibility of having PCIe graphics, although for a server the onboard stuff should be fine. There is another slot on the board which looks like a 1x PCIe slot but it's not mentioned in any of the documentation. If it is PCIe then it should happily take one of the 2 port cards.
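As a rough sanity check on those slot figures, a quick sketch (the 250MB/s per lane is the nominal first-generation PCIe rate after encoding overhead, not a measured number):

```python
# Nominal first-generation PCIe bandwidth per lane, in MB/s
# (after 8b/10b encoding overhead). These are spec figures, not benchmarks.
PCIE_GEN1_MB_PER_LANE = 250

def pcie_bandwidth(lanes):
    """Nominal one-direction bandwidth in MB/s for a Gen1 slot."""
    return lanes * PCIE_GEN1_MB_PER_LANE

print(pcie_bandwidth(1))   # 1x slot  -> 250 MB/s (the 2 port card)
print(pcie_bandwidth(4))   # 4x slot  -> 1000 MB/s, i.e. 1 GB/s (the 4 port card)
```

So even the 1x slot alone is nearly double what the whole shared PCI bus can manage.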
michael baxter said:The bottleneck in any hard disk subsystem is the drives themselves; I would be surprised if there is any performance benefit between the PCI and PCIe interfaces in this case.
If you want improved bandwidth, you may be better going the RAID 0 route, and get yourself 2 or more identical drives.
I'm using a 4 drive RAID 0 setup with a gigabit NIC in my domestic media server, and it provides lightning-fast performance. In fact, it's faster than a single locally attached drive!
Best wishes,
Michael
michael baxter said:Hi RPStewart,
I wouldn't disagree with your points, but my 4x HD RAID 0 setup is on a PCI card (RocketRAID 404) and I get a smidge short of a 4x bandwidth improvement over an identical single drive. However, the drives are about 3 years old, so for the latest drives your point may be true.
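A back-of-envelope check shows why both of us can be right here. The ~30MB/s per-drive figure below is an assumption for a ~3-year-old PATA drive; the point is just that four older drives stay under the PCI ceiling while four newer ones would hit it:

```python
# When does a RAID 0 array saturate the shared PCI bus?
# 133 MB/s is the 32-bit/33 MHz PCI bus limit; per-drive speeds are assumptions.
PCI_BUS_MB_S = 133

def raid0_throughput(drives, mb_per_drive):
    """Estimated sequential throughput, capped by the shared PCI bus."""
    return min(drives * mb_per_drive, PCI_BUS_MB_S)

print(raid0_throughput(4, 30))  # older drives: 120 MB/s, still under the bus limit
print(raid0_throughput(4, 60))  # newer drives: capped at 133 MB/s, bus-limited
```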
Sure, PCI is on its way out, I wouldn't disagree. But does it matter in this case?
Happy New Year to you!
Michael
naro said:Also, I would like to ask if the bandwidth of each single PCI slot is 133MB/s or all the 3 PCI slots' combined bandwidth is 133MB/s?
The 133MB/s bandwidth is for the whole PCI bus. That is to say it's shared between the three slots AND any other onboard devices which are attached to the PCI bus.
rpstewart said:The 133MB/s bandwidth is for the whole PCI bus. That is to say it's shared between the three slots AND any other onboard devices which are attached to the PCI bus.
rpstewart said:If you're only going to be using it as a media server then it's unlikely that you'll be pulling data at full speed off multiple drives, so we might be splitting hairs here. There's nothing technically wrong with running a dozen HDDs all on the same PCI bus, it's just that there are now better ways of doing it if you need absolute performance.
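To put some numbers on the "splitting hairs" point, a quick sketch (the 20Mbit/s per-stream bitrate is an assumption for high-bitrate HD video; actual media bitrates vary):

```python
# Why a media server rarely stresses the bus: even several simultaneous
# HD streams demand far less than the shared PCI bus can deliver.
PCI_BUS_MB_S = 133
STREAM_MBIT_S = 20   # assumed high-bitrate HD video stream, in megabits/s
streams = 5

demand_mb_s = streams * STREAM_MBIT_S / 8  # megabits -> megabytes
print(demand_mb_s)                  # 12.5 MB/s for five streams
print(demand_mb_s < PCI_BUS_MB_S)   # True -- plenty of headroom on plain PCI
```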
michael baxter said:Naro,
If you're after low power and want to go for a plug-in card, you may want to check that it supports HDD power-down; many of them don't.
By the way, the card I use is a PCI based card which supports up to 8 PATA drives. I use 4 drives, with one on each IDE channel.
Out of interest, which OS are you going to be running on your media server?
Michael