Perc 5i - x16 slot or x8?

Hi Guys,

Been faffing about building the system in my signature.

Perc 5i question.

I have a UD5 motherboard. It has 2 x16 PCIe slots and 1 x8 PCIe slot.

I only have one graphics card, so I can use either the other x16 slot or the x8 one for the Perc 5i card.

Is there any benefit to be had from putting it in the x16 slot? Or would I in fact be better off just putting it in the x8 slot?

Thanks.

Stuart
 
The card is PCIe x8, so it won't benefit from being in the x16 slot. However, I would confirm how many PCIe lanes actually go to each of the slots on the board; there are a number of boards out there with PCIe slots in which not all of the lanes are active (i.e. x16 physical slots which operate as x4 slots). If your x8 slot isn't a genuine x8 slot then it's worth using the x16 slot.

If all the slots will run at their full speed then just pick the slot which gives the best fit.
 
I thought the slot-limiting thing only happened if you were using all the slots? So if you had graphics cards in both x16 slots then you might be limited to x4 in your third PCIe slot, but if you had a graphics card in one slot and two PERC 5/i cards in the other two, the graphics would run at x16 and the PERCs would both run at x8.
Could be wrong though.

Anyway, the PERC will run fine at x4. I currently have mine running in an x1 slot (had to cut the end off the slot to fit it in..) because I only have two x16 slots on my P5Q Pro, and they both have graphics cards in, so I was left with x1 slots. 250 MB/sec is perfectly adequate for my RAID5 needs.
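As a rough illustration of where that 250 MB/sec figure comes from: a sketch assuming PCIe 1.x signalling (2.5 GT/s per lane with 8b/10b encoding, giving about 250 MB/s of usable bandwidth per lane in each direction):

```python
# Back-of-envelope PCIe 1.x bandwidth per link width.
# Assumes 2.5 Gbit/s per lane and 8b/10b encoding:
# 2.5 Gbit/s * 8/10 = 2 Gbit/s = 250 MB/s usable per lane.
def pcie1_bandwidth_mb_s(lanes: int) -> int:
    """Approximate one-direction bandwidth of a PCIe 1.x link in MB/s."""
    PER_LANE_MB_S = 250
    return lanes * PER_LANE_MB_S

for width in (1, 4, 8, 16):
    print(f"x{width}: {pcie1_bandwidth_mb_s(width)} MB/s")
```

So even a x1 slot tops out around the sequential throughput of a few spindles, and a genuine x4 or x8 link has plenty of headroom for a home RAID array.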
 
Ok Thanks Guys.

Just had another read through the Gigabyte manual and it seems that one x16 slot always runs at x16 (the graphics card is in that). The other one runs at x16, unless the x8 slot is also being used, in which case it drops to x8.

Seems a bit daft. You would think they would run the x16 at x16 and the x8 at x8 all of the time, instead of all the tricky dickery.

In any case, as long as I can get it running, it's all good.

Second quick question:

I have done the pin 5 and 6 mod (i.e. painted nail varnish over the front) on the Perc 5i card. Do I also need to do that on the pins on the opposite side of the card, or does it just need to be done to pins 5 and 6 on the front?
 
If you are prepared to spend as much money as you have on storage, why on earth are you running it in RAID5? Depending on exactly what you do, RAID10 will almost certainly give you about a 50% performance gain, maybe more. OK, it's 3TB rather than 5TB, but unless you are doing something seriously data-hungry it won't make any difference. And if you are, you're probably fairly write-intensive too, so again RAID5 is the wrong choice.

Makes no sense to go with RAID5, pretty much ever!
 
I have done the pin 5 and 6 mod (i.e. painted nail varnish over the front) on the Perc 5i card. Do I also need to do that on the pins on the opposite side of the card, or does it just need to be done to pins 5 and 6 on the front?

Try it and see. Worst that'll happen is it won't boot, you'll have to take the card out and paint over the other side.

If you are prepared to spend as much money as you have on storage, why on earth are you running it in RAID5? Depending on exactly what you do, RAID10 will almost certainly give you about a 50% performance gain, maybe more. OK, it's 3TB rather than 5TB, but unless you are doing something seriously data-hungry it won't make any difference. And if you are, you're probably fairly write-intensive too, so again RAID5 is the wrong choice.

Makes no sense to go with RAID5, pretty much ever!

Because it's 3TB rather than 5TB..

I do not need performance, I need storage space, and redundancy is nice. So I have a 4.5TB RAID5 array with 4 x 1.5TB disks; currently using about 3.5TB, so I'd need six 1.5TB disks for RAID1+0, which would be considerably more expensive than having a RAID controller! 99% of all file operations are read-only from it, and speed is pretty much irrelevant anyway since it mostly extends to playing a film from it, but it has the benefit over non-RAID that I can RMA a disk if it breaks and not suffer any downtime.

Since RAID is not a backup, I also have a copy of the data from my array stored on single HDDs, hence the large amount of storage. Also it's not been a single outlay of cash investing in storage, I've built it up disk by disk over the past couple of years, so it doesn't seem like such an expense; anyway, my current setup is a good balance of storage, reliability, and ease of use, which I wouldn't get by using RAID10 or any other RAID level.
:p

And, for the record, RAID10 is useful when you need something to have fast writes as well as reads, with redundancy. It's totally impractical for storage because of the high outlay in disks. Whatever size my RAID array grows to, I only lose one disk's worth of space to redundancy, whereas with RAID10 I lose 50% of my disk space to redundancy. No way is that economically viable when there are alternatives out there.

/rant, lol
 
If you are prepared to spend as much money as you have on storage, why on earth are you running it in RAID5? Depending on exactly what you do, RAID10 will almost certainly give you about a 50% performance gain, maybe more. OK, it's 3TB rather than 5TB, but unless you are doing something seriously data-hungry it won't make any difference. And if you are, you're probably fairly write-intensive too, so again RAID5 is the wrong choice.

Makes no sense to go with RAID5, pretty much ever!

I now have 7 x 1TB Samsung Spinpoint Disks and a Perc 5i Card.

I was planning on having six of them as RAID5, plus the 7th as a hot spare, which would be 5TB of storage.

If I had them as RAID10 then it would be 3TB, and I would have a spare disk, I guess?

Given that number of disks, would you actually notice any speed improvement?

Or would any speed improvement be in benchmarks, but not really in reality? (I could I guess buy another 1TB disk and then have 4TB?)
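For the usable-capacity side of that comparison, a quick sketch (assuming RAID5 loses one disk's worth of space to parity and RAID10 loses half to mirroring):

```python
# Back-of-envelope usable capacity for the 7 x 1TB scenario above.
def raid5_usable_tb(disks: int, size_tb: float = 1.0) -> float:
    """RAID5: one disk's worth of space goes to parity."""
    return (disks - 1) * size_tb

def raid10_usable_tb(disks: int, size_tb: float = 1.0) -> float:
    """RAID10: mirrored pairs, so half the disks; an odd disk is left over."""
    return (disks // 2) * size_tb

print(raid5_usable_tb(6))   # 6 disks in RAID5 (7th as hot spare) -> 5.0 TB
print(raid10_usable_tb(6))  # 6 disks in RAID10 -> 3.0 TB
```

The gap widens as the array grows: RAID5's redundancy overhead stays at one disk, while RAID10's stays at 50% of the total.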

I was under the impression that the original RAID5 setup I had in mind would be pretty fast?
 
RAID5 carries a write penalty compared with RAID10.

As I understand it, when writing it needs to calculate the parity block, so in your case for every five blocks of data it needs to do a parity calculation.
This is generally problematic for built-in motherboard controllers, as they need CPU power to perform the parity calculations; however, hardware RAID controllers (expensive ones) typically have a processor dedicated to the array which performs the parity calculation, leading to a lessened performance hit.

If you will notice the difference between 250 MB/s and 300 MB/s (rough guesses) then go with RAID10, otherwise RAID5.

If it's primarily a storage array and not a work array (e.g. ridiculously sized CAD stuff), I'd suggest RAID5. Personal opinion.
 

:o Assumed you were talking at me; presume you were talking at the OP?? :o

RAID5 carries a write penalty compared with RAID10.

As I understand it, when writing it needs to calculate the parity block, so in your case for every five blocks of data it needs to do a parity calculation.
This is generally problematic for built-in motherboard controllers, as they need CPU power to perform the parity calculations; however, hardware RAID controllers (expensive ones) typically have a processor dedicated to the array which performs the parity calculation, leading to a lessened performance hit.

If you will notice the difference between 250 MB/s and 300 MB/s (rough guesses) then go with RAID10, otherwise RAID5.

If it's primarily a storage array and not a work array (e.g. ridiculously sized CAD stuff), I'd suggest RAID5. Personal opinion.

This. Obviously the PERC has an onboard processor for parity; I get write speeds of 50-100 MB/sec, mostly.

Reading from a RAID5 array is going to be practically as fast as from a RAID10 array - it will be reading from 5 disks at the same time from RAID5, and 6 disks at the same time from RAID10.
Writing is where you will notice a speed difference between RAID5 and RAID10. So as user1453 said, if you need write performance get RAID10, otherwise get RAID5.
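The parity scheme described above can be illustrated in a few lines. This is a minimal sketch of RAID5-style XOR parity (not the PERC's actual firmware, obviously): the parity block is the XOR of the data blocks in a stripe, and any single lost block can be rebuilt by XOR-ing the survivors, parity included.

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]  # three data blocks in one stripe
parity = xor_blocks(data)           # parity block, written alongside the data

# Simulate losing the second data block, then rebuild it from the rest:
survivors = [data[0], data[2], parity]
rebuilt = xor_blocks(survivors)
assert rebuilt == data[1]  # b'BBBB' recovered
```

This is also why writes carry a penalty: every write has to read or recompute the stripe's parity, whereas a mirrored (RAID10) write just goes to two disks.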
 