
PCI Slot and GPU question

Associate · Joined 2 Sep 2013 · Posts: 2,300
So I'm looking to grab an Nvidia GPU (an RTX 3060 with 12GB VRAM) so I have access to CUDA and can test out some other deep learning software (my main rig is entirely AMD). I was originally intending to grab this GPU along with the replacement secondary system (HTPC/gaming/parts backup for the main rig/now storage). But that may need to take a little longer given the AMD CPU/motherboard situation at the moment (and the cost).

The secondary rig is a:
CPU: i5 3570 (non-K), 4 GHz boost
RAM: 32GB DDR3 1600 MHz
Motherboard: Gigabyte Z77-D3H rev. 1.1
Storage: SATA2 500GB SSD (boot), SATA3 RAID0 SSDs (2x 250GB for fast storage)
GPU: RX580 8GB (in the Gen 3 x16 slot, direct to the CPU)
Network: gigabit onboard + 10Gb PCIe card (in the Gen 2 x4 slot)
TV: LG CX

My questions are:

Should I swap the GPU and 10Gb card around (the 10Gb card maxes out at 6 Gbps in the x4 slot, but runs at full speed in the x16 slot)?
If I grab the RTX 3060 12GB, would it be too limited by the Gen 2 x4 slot for gaming or CUDA deep learning duties?
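For reference, the rough bandwidth math behind the question. This is a sketch using approximate usable per-lane rates (after encoding overhead) that I'm assuming here, not figures from this thread: ~500 MB/s per lane for Gen 2 and ~985 MB/s per lane for Gen 3.

```python
# Approximate usable one-way PCIe bandwidth per slot.
# Assumed per-lane rates after encoding overhead:
#   Gen 2 ~500 MB/s/lane (8b/10b), Gen 3 ~985 MB/s/lane (128b/130b).
PER_LANE_MBS = {2: 500, 3: 985}  # MB/s per lane

def slot_bandwidth_gbps(gen, lanes):
    """Usable one-way bandwidth of a PCIe slot, in gigabits per second."""
    return PER_LANE_MBS[gen] * lanes * 8 / 1000

print(slot_bandwidth_gbps(2, 4))   # Gen 2 x4:  ~16 Gb/s (~2 GB/s)
print(slot_bandwidth_gbps(3, 16))  # Gen 3 x16: ~126 Gb/s (~15.8 GB/s)
```

On paper a Gen 2 x4 link (~2 GB/s) is more than 10GbE needs (~1.25 GB/s), so a card capping at 6 Gbps there suggests it isn't getting the full x4 bandwidth in practice.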

Thanks all.
 
I don't have any benchmarks for it, but I'd imagine you'll get a noticeable bottleneck in gaming. I wouldn't be surprised if there's a hefty loss of fps. Not sure about CUDA.
 
I don't have any benchmarks for it, but I'd imagine you'll get a noticeable bottleneck in gaming. I wouldn't be surprised if there's a hefty loss of fps. Not sure about CUDA.
Thanks Tetras.

Yeah, I was just kinda hoping someone might know roughly how much of a drop I'd be looking at, and whether I can let it ride for now by putting the GPU in the x4 slot and the 10Gb card in the x16 to max out the network for fast storage. But since I'm also going the deep learning route with the CUDA cores, I may need the full bandwidth... My other option would be to keep the GPU in the 3.0 x16 slot (including when switching to the 3060 12GB) and the 10Gb card in the 2.0 x4 slot. That route doesn't give full 10Gb speeds, but it's enough that I certainly won't mind much, and I can just leave the drives un-RAIDed too, given it's roughly their max speed anyway (530-550 MB/s; only another ~200 MB/s more, which in the grand scheme of things isn't a big deal).
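Putting numbers on that trade-off — a sketch assuming the speeds mentioned above (a single SATA3 SSD at ~550 MB/s, RAID0 roughly doubling it) and ignoring protocol overhead:

```python
# Compare storage throughput to the network link speed.
def gbps_to_mbs(gbps):
    """Convert a line rate in Gb/s to MB/s (overhead ignored)."""
    return gbps * 1000 / 8

single_ssd = 550        # MB/s, one SATA3 SSD (assumed from the post above)
raid0 = 2 * single_ssd  # ~1100 MB/s for the striped pair

print(gbps_to_mbs(6))   # card capped at 6 Gbps in the x4 slot -> 750 MB/s
print(gbps_to_mbs(10))  # full 10GbE in the x16 slot -> 1250 MB/s
```

At 6 Gbps (~750 MB/s) a single SSD (~550 MB/s) is already close to saturating the link, so dropping RAID0 costs little; only at the full 10 Gbps (~1250 MB/s) would the striped pair (~1100 MB/s) actually be used.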

Will need to run a few tests once I grab the card, trying it in each slot to see whether the deep learning work is impacted (for a smaller card like the 3060 12GB). Will report back; it should provide some useful context for anyone in a similar situation looking for info on this.
 
3080 PCI-E scaling - roughly a 25% difference compared to PCIe 3.0 x16

PCIe x8 1.1 in the chart is equivalent to PCIe x4 2.0
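That equivalence checks out on paper: each PCIe generation roughly doubles the per-lane rate (Gen 1 ~250 MB/s/lane, Gen 2 ~500 MB/s/lane, figures assumed here), so x8 at 1.1 and x4 at 2.0 land on the same total:

```python
# Sanity check: PCIe 1.1 x8 vs PCIe 2.0 x4 total bandwidth.
gen1_x8 = 8 * 250  # MB/s, eight Gen 1 lanes
gen2_x4 = 4 * 500  # MB/s, four Gen 2 lanes
print(gen1_x8, gen2_x4)  # both 2000 MB/s
```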

 
3080 PCI-E scaling - roughly a 25% difference compared to PCIe 3.0 x16

PCIe x8 1.1 in the chart is equivalent to PCIe x4 2.0

Thanks, that's probably acceptable given the CPU probably can't drive it that hard anyway. I'll still do a quick test in the x16 slot and then in the x4 afterwards to see if there's any major hit, and by how much, for gaming, and then (most importantly) when using its cores for deep learning. If there's hardly any hit for deep learning, all the better. If there is, I can always put the GPU back into the x16 and the 10Gb card back into the x4, and wait for a new setup to fully transfer everything over to and get the most out of it then. But right now, it looks like I should just grab the RTX 3060 12GB and test to see what my best options are for this setup. Thanks all. :)
 