Advice on PCIe Lanes & NVME RAID arrays et al. for possible upgrade/new computer

Current Motherboard / CPU: AsRock x79 Extreme 6 / Intel i7 3930k

Option 1: Biostar B550GTA / AMD Ryzen 9 3900x or 5900x

Option 2: ASUS ProArt X670E-Creator / AMD Ryzen 9 7900x

(Open to other options - can explain choices further if asked)

Hi, whilst I'm not new to building PCs, I am a bit out of the loop on current CPU/motherboard standards, particularly PCIe lanes and NVME RAID. What I'm trying to understand is the following:

CPU manufacturers typically list the total number of PCIe lanes for their processors. This can be, for instance, 24 (Ryzen 9 3900x & 5900x) or 28 (Ryzen 9 7900x). I'm also aware that usually x4 of these are assigned to the chipset, making the usable totals 20 and 24 lanes respectively. First question:

Are those 4 lanes usable at all by slots/devices lower down the specification list, or are they dedicated to the chipset link with no further use?
- I ask because I notice that multiple specification lists divide the PCIe and M.2 slots this way, using what appear to be subheadings: first the processor with its associated slots underneath, then the chipset, with the lower-spec slots following:
https://www.biostar-europe.com/app/de/mb/introduction.php?S_ID=984#specification
https://www.asus.com/uk/motherboards-components/motherboards/proart/proart-x670e-creator-wifi/

If the chipset lanes are not usable, does that mean the lanes coming directly from the processor have to be shared and subdivided among all PCIe/M.2 slots/devices on the motherboard?

If my graphics card (AMD Radeon RX 6600) uses only x8 PCIe lanes in an x16 PCIe slot, does that free up the remaining x8 lanes for other devices in different slots? And if so, can those lanes be utilised by devices listed under the chipset, or is that a clear-cut division (assuming the chipset lanes are even usable)?

I feel that my need to ask these questions is down to the sheer lack of available PCIe lanes on modern motherboards and processors, unless you go for something like a Threadripper at enormous cost. My current motherboard's (AsRock x79 Extreme 6) associated processor, the i7 3930k, has 40 PCIe lanes - a number, I gather, intended for graphics card SLI, but which has nevertheless been useful for upgrading without worrying about hitting the lane limit. I have read explanations as to why modern platforms offer fewer lanes, unsatisfactory to end-users like myself as they may be.


Next topic: regarding NVME RAID, can such arrays be set up across the motherboard's separate M.2 slots, or does it have to be done on a PCIe NVME adapter card utilising that slot's bifurcation (x8/x8, x8/x4/x4, x4/x4/x4/x4) or a bifurcation riser controller?

Again, would these RAIDs have to be segregated along CPU/Chipset lines - if that is even the case? - and in such a case, would x4 from the chipset even be enough to set up a RAID with?

* Please note, I have never set up a RAID before, though I have installed a single NVME drive attached to a PCIe adapter card in a PCIe 3.0 x8 slot in my present computer. I'm hoping to set up an NVME RAID in one of the possible future builds, though I suspect it may not be possible on the B550GTA.

I'm considering that motherboard because it is one of the last to feature a legacy PCI slot. This is required for my Sound Blaster X-Fi Elite Pro, which comes with a rare IO device, useful for recording music. The B550GTA would be easier for this. In the case of the more modern ProArt X670E I would opt for a PCIe to PCI adapter, though this would use up PCIe slots I may not have PCIe lanes for. It would also be tricky to find a place for it in my case and could mean buying something like this full tower case to fit it onto a backplate beneath the ATX motherboard:
https://www.overclockers.co.uk/jonsbo-t59x-big-tower-secc-steel-hptx-pc-case-black-ca-024-jb.html with this *** competitor link removed ***
Alternatively it could be housed externally with these: *** competitor links removed ***

There are other considerations, but I don't wanna go on forever. The main questions are regarding PCIe lanes and NVME RAIDs. Thanks for any help you can provide.
 
NVME RAID can be set up across multiple slots, but in general NVME RAID is not useful. This is because the drives are so fast that, aside from getting great numbers in sequential benchmarks, the CPU tends to be the limiting factor.
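
To put a rough number on that: a crude single-threaded read test like the sketch below (Python; the path is a placeholder) often tops out well below a Gen 4 drive's rated speed, and at that point striping two drives in RAID 0 can't raise it. Note the OS page cache will flatter repeat runs on the same file.

```python
# Crude single-threaded sequential read test (a sketch, not a proper benchmark).
# If this tops out below the drive's rated speed, the software stack - not the
# drive - is the bottleneck, and RAID 0 won't raise it.
import time

CHUNK = 1024 * 1024               # read in 1 MiB chunks
PATH = r"D:\some_large_file.bin"  # placeholder: any multi-GB file

total = 0
start = time.perf_counter()
with open(PATH, "rb", buffering=0) as f:
    while chunk := f.read(CHUNK):
        total += len(chunk)
elapsed = time.perf_counter() - start
print(f"read {total / 1e9:.1f} GB at {total / elapsed / 1e9:.2f} GB/s")
```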

X570 boards can run 2x NVME drives at PCIe Gen 4; B550 boards can run one at Gen 4.

On X570 this does drop the graphics slot to Gen 4 x8.

The chipset lanes compete for resources, so if you plugged in a fast USB SSD and then copied files to an internal SSD on the chipset lanes, your speed may be limited, as both drives will be using the same uplink.
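
Rough numbers to illustrate the squeeze (assumed figures; real-world throughput is a little lower after protocol overhead):

```python
# Back-of-the-envelope bandwidth for the CPU <-> chipset uplink (assumed values).
GBPS_PER_GEN4_LANE = 1.97          # ~usable GB/s per PCIe 4.0 lane

uplink = 4 * GBPS_PER_GEN4_LANE    # chipset uplink is x4 Gen 4
one_ssd = 4 * GBPS_PER_GEN4_LANE   # a single Gen 4 x4 NVME drive

print(f"chipset uplink:     {uplink:.1f} GB/s")
print(f"one Gen 4 x4 drive: {one_ssd:.1f} GB/s")
# One drive can already saturate the whole uplink, so any two chipset devices
# transferring at once must share that ~7.9 GB/s.
```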

It might be worth considering what you want to get out of your new build. With current NVME drives it is rare to be bottlenecked by storage speed. If you use X570 or X670 you can use at least 2 SSDs at Gen 4 speed (and faster still on X670), which honestly is really really fast!

Thanks for the reply. Would the CPU still be a limiting factor for an NVME RAID if it is set up in a PCIe x16 slot utilising 8 lanes (x4/x4) with a bifurcation-supporting adapter card?

On the Asus ProArt X670E, the 16 CPU lanes for the two PCIe 5.0 slots are shared (sort of) and become x8 each when a second card is inserted. In my use case my graphics card is x8 anyway, so installing an x4/x4 NVME adapter card for a 2x 4TB RAID would have no impact on the graphics card's performance. I would hope it would also avoid the pitfalls of an NVME RAID using the M.2 slots.

Perhaps you could explain, though, how a RAID is more affected by the CPU as a limiting factor than the same drives would be separately. Does the RAID complicate matters for the CPU and impair the speed of its functions?


I haven't really looked into X570. I did notice the Biostar B550GTA only runs one slot at PCIe Gen 4, its other being Gen 3. That, and the fact that the other x16 slot is on the chipset lanes and has only x4 functioning lanes - all the available chipset lanes - is making me lean away from the B550.


Yeah, in a little further research into chipset lanes, I noticed that they share bandwidth. In the case of the Asus ProArt X670E, the two M.2 PCIe 4.0 x4 slots are apparently daisy-chained by the chipset, so I think the lanes are sort of cloned, but they also then share resources. I think they remain at x4 each, but can't be certain. What I do know is that installing any device into the PCIe 4.0 x16 (x2) slot will cause the first M.2 PCIe 4.0 x4 slot to drop to x2, halving the lanes - which makes sense, as they only have x4 lanes to work with. I presume this means the second M.2 PCIe 4.0 x4 slot will also halve to x2, but I'm not sure.

With the chipset lanes' limitations in mind, I may choose to utilise those lanes solely for adapter cards:
1. a USB 3.2 Gen 1 header adapter card in the PCIe 4.0 x16 (x2) slot
2. adapt the first M.2 PCIe 4.0 x4 slot with an M.2 to PCIe 4.0 x4 adapter card and install the PCIe to PCI adapter card here - the PCI chip would be installed on a vertical PCIe case slot on a supporting PC case (CM 690 II; Silverstone Seta D1)
3. adapt the second M.2 PCIe 4.0 x4 slot with an M.2 to SATA adapter (the Asus ProArt X670E has four SATA connectors and I'm presently using five - this will probably drop once I've transferred data to the new NVMEs and may only be necessary temporarily)

I of course recognise this setup would be very adapter-heavy, and I wonder how much that would affect performance; accordingly, I'm avoiding putting NVME drives on the chipset. It seems audio latency could also be a factor, especially if I'm putting an adapter in another adapter - I may have to rethink exact placement.


I guess what I want to get out of the new build is, first of all, fast storage on maybe 2 or 3 drives visible in Windows, with two of the physical drives RAIDed (an 8tb U.2 drive on a PCIe to U.2 adapter, itself installed on an M.2 to PCIe adapter, is what I eventually have in mind for the other M.2 PCIe 5.0 slot on the CPU lanes - I know, not as fast and very adapter-heavy, but it's the only affordable single-drive 8tb option at present if you buy second hand). Also aiming for relatively future-proofed/up-to-date functionality in regards to PCIe, USB, memory and possibly processor, if an AMD 9000 series comes along with compatibility for this Asus ProArt X670E board. Lastly, decent gaming performance - I'm limited to x8 gaming anyway and generally prefer transportable cards, so I'm not going for the out-and-out best cards out there (affordability also being a factor).
 
But why?


A PCIe to PCI adapter will be terrible for audio latency

I presently have two 8tb HDDs (1 internal, 1 external) for my storage (I also have an 18tb external backup drive). I'm looking to replace them and have already bought the drives with which to replace one of them. As 8tb NVMEs still retail at £800+, single drives aren't really an affordable option for me; RAIDed 4tb NVME drives, however, are. The main reasons I want to RAID are:
1. it saves space in terms of slot usage if placed in a PCIe slot via a bifurcation-supporting adapter card;
2. I'd like the drives' contents pooled, as there are some large folders I'd rather not have to split up;
3. I would like all drives in the RAID to be visible in Windows as one single drive for ease of use.
I'm moving away from the idea of RAIDing the M.2 slots themselves and sticking with the PCIe adapter card idea - I'm not sure RAIDing the boot drive with another drive is such a good idea. Instead, the second CPU-lane PCIe 5.0 M.2 slot could be twice adapted for use by an 8tb U.2 drive - admittedly adapter-heavy, but affordable at second-hand prices. This would probably be a temporary measure, replaced once 8tb NVMEs become affordable. Once those were installed I could look towards selling the HDDs - bulk I don't really want to have around.

Your second point is quite concerning to me. Other than the Biostar B550GTA, there are few options for legacy PCI on the motherboard itself, and choosing one means limiting yourself in other areas of performance - certainly no RAID possibilities. Of those who have successfully adapted sound cards with a PCIe to PCI adapter, none mentioned an audio latency issue - which is not to say they didn't have one. I will have to look into how much of a concern that will be. Any horror stories you might be familiar with?
 
But again, why do you need so much storage? And why does it all need to be direct-attached NVME?

Fully get your point about it being presented as a single drive letter or mount point, but why not invest in a NAS?

Because I already have 7.13tb of used space on one of my 8tb HDDs. Some of this is made up of backup files and other files from previous installations I've yet to sort through fully - not all will be needed, so it won't quite be that large. The files I transferred to my 18tb external backup drive came to 5.53tb, so the true figure lies somewhere between 5.53tb and 7.13tb. As you can see, we're in 8tb territory; 4tb won't really suffice. The drive in question has downloaded music, self-recorded music, office documents, a hell of a lot of photos (particularly photo-stitched panos) plus both downloaded and self-recorded/edited videos. This sort of stuff takes up a lot of space. My other 8tb external drive is in another country (a consequence of limited luggage room and then the pandemic; it is presently looked after by a friend) and is solely for videos; it also has a lot of space used up.

As you can probably tell, my use case involves transporting drives of low weight and small size on a plane in my carry-on luggage. If I limit this to graphics card, processor, RAM and drives, I can effectively take my computer with me (motherboard, case, PSU etc. I would have to buy there). As HDDs are neither small nor lightweight (as well as typically being slow, limiting file transfer speeds), I'm trying to avoid transporting multiples of them, so I won't be investing in a NAS.
 
Seems unusual to me to have a need to RAID such a fast drive. I used to RAID my spinning HDDs for speed purposes, but that's not relevant with NVME as they're so fast anyway. Or you'd RAID for the mirror aspects, but why not do that on a traditional HDD? I've not personally come across a situation where files I need to mirror also need super-fast access, especially not for very large files - happy to be educated on the use case. For example, even in audio recording, the speed of a RAID HDD is plenty fast enough for audio file storage. I'd be inclined to have an NVME scratch drive for active projects and manually back up to a [RAID] HDD if there was a need for both speed and mirroring - that would seem plenty for non-active project storage.
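
For the scratch-plus-backup workflow, something as simple as this would do for a manual mirror (a Python sketch with made-up paths; it re-copies the whole tree each run, so robocopy or rsync would suit incremental backups better):

```python
# One-way manual mirror: NVME scratch folder -> HDD archive (hypothetical paths).
# shutil.copytree re-copies everything each run and never deletes; fine for an
# occasional manual backup, not an incremental sync.
import shutil

shutil.copytree(r"D:\ActiveProjects", r"H:\Archive\ActiveProjects",
                dirs_exist_ok=True)
```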

What's the special connector on the sound card out of interest?

As mentioned in my other response, I need small and lightweight drives that won't take up room in my carry-on luggage, as I plan to take that stuff with me - see that response for more.

Is there a particular downside, other than price, to RAID NVMEs?

Given we're in a situation where certain 4tb NVMEs (not typically the best) have reached parity with SSDs and in some cases even HDDs, it seems like the more logical option to go for. The recent drives I bought were the Samsung 990 Pro 4tb (£222.49) and two Lexar NV790 4tb (£165 each, opened never used). The Samsung felt worth the price to run as the boot drive, for my Steam library and as a scratch disk for video and photo-editing content - I may reconsider this if not recommended. The two Lexars have similarly good and consistent speeds in endurance transfers, and I will need to be transferring video and photo-editing content to these. If you buy at sale time or find a bargain there isn't that much in it price-wise, and with the weight and size difference it feels like the only sensible option. The lowest-price 4tb HDD is currently £78, so the NVME is a little over twice the price; that seems a reasonable trade-off to me.
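
For what it's worth, a quick per-TB check on those figures:

```python
# Price-per-TB on the prices quoted above (all 4 TB drives).
drives = {
    "Samsung 990 Pro 4tb": 222.49,
    "Lexar NV790 4tb":     165.00,
    "cheapest 4tb HDD":     78.00,
}
for name, price in drives.items():
    print(f"{name}: £{price / 4:.2f}/TB")
# ~£55.62, ~£41.25 and ~£19.50 per TB respectively - the NV790 really is
# a little over twice the HDD's price per TB (165 / 78 ≈ 2.1).
```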

Regarding the PCIe to PCI adapter device, the one I bought was on AliExpress, not a site I've bought from before, but this particular model had plenty of reviews and photos of it in action from reviewers. Seemingly posting competitor links in here is forbidden, even though I'm pretty sure Overclockers don't sell these; anyway, here's the link:
https://www.aliexpress.com/item/1005005064528742.html
 
For Ryzen 7000 the CPU lanes are:
- 16 (up to PCI-E 5.0) for the graphics card / primary PCI-E slot.
- 4 (up to PCI-E 5.0) for M.2 NVME slot.
- 4 (up to PCI-E 5.0) for general purpose, or second M.2 NVME slot.
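
(As an aside: once a board is in hand you can verify what each slot actually negotiated. On Linux it's readable straight from sysfs, as in the sketch below; on Windows, tools like GPU-Z or HWiNFO report the same link width/speed.)

```python
# List the negotiated PCIe link speed and width for every device (Linux only;
# reads the standard sysfs attributes exposed for PCIe devices).
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    speed = dev / "current_link_speed"
    width = dev / "current_link_width"
    if speed.exists() and width.exists():
        print(f"{dev.name}: {speed.read_text().strip()} "
              f"x{width.read_text().strip()}")
```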

Thanks, this is a very useful and in-depth reply and concurs with another discussion on this particular board I read recently: https://www.reddit.com/r/ASUS/comments/174aoyi/proart_x670e_creator_pcie_bifurcation_support/

BuildOrBuy's video on the Asus ProArt X670E was also well delivered: https://www.youtube.com/watch?v=PMCam8rVxoY&t=702s

This does not include the 4 lanes used for communication between the chipset and the CPU.


The 4 lanes that are reserved for communication are not usable for any other purpose.


Yes, except that the chipset also has lanes (PCI-E 4.0 and PCI-E 3.0) which can be used for PCI-E/M.2 slots and other devices.

I think the distinction you made here - the 4 lanes being reserved for the chipset link, but the chipset also having lanes of its own - helped me understand that aspect of motherboards. Unless I'm mistaken, the chipset itself also has 4 lanes?

This is a kind of complicated question, so I need to break it down.

1. The number of lanes that the graphics card is wired for is generally irrelevant to what happens when you insert a graphics card into the primary slot. The reason is that how the motherboard routes the PCI-Express lanes from the CPU (or from the chipset) is "baked in" at the factory. In other words: if you insert any card into the second PCI-E slot, it WILL take 8 lanes from the primary slot.

2. Does an 8 lane graphics card free up lanes to use for other slots/devices, or even chipset devices? NO, because the lanes are hardwired to/from the CPU, or to/from the chipset. In other words: since they are not dynamic, the motherboard is not able to go "oh hey, the CPU has 8 lanes free, let us redirect them".

Thanks for the graphics card explanation, I did eventually figure it out from the two links above, but you've explained it well here and confirmed what I suspected about baked in lanes.

My read of the manual is: the 2 chipset M.2 slots do not share lanes (i.e. you can use them simultaneously and there is no impact between the two of them, apart from the communication bottleneck that was mentioned earlier)

The first of the chipset M.2 slots shares lanes with the third full-length PCI-E slot: when that slot is used, it steals 2 lanes from the M.2 slot.

OK, this is generally what I've read as well, though the two M.2 slots not impacting each other beyond a bottleneck is new to me. What never seems to be addressed, though, is whether use of the full-length PCI-E slot has an impact on the second of the chipset M.2 slots. If there is no impact between the two M.2 slots, perhaps that would suggest not, though I do wonder how you can effectively get 8 lanes when there are, I think, only 4. Maybe that would be correct, as there are effectively 8 lanes only when just the M.2 drives are in use, going from x4+x4 to x2+x2+x4. I suppose the bottom M.2 slot in this case would also be a good candidate for a hard drive, unless the bottleneck is profoundly impactful.

I did want to ask: do you think my suggested use of M.2 to PCIe adapters, themselves having U.2 and PCI adapters connected, might be a little overkill? I'm worried that might not be too good for the motherboard or the attached parts. There's also the M.2 to SATA adapter, but at least that's only one adapter. The products in question, just un-hyperlinked links you'll have to reassemble and copypasta I'm afraid:
*** competitor links removed ***
 
Failure of one drive will result in all data being lost; you can't easily upgrade to a bigger capacity (as opposed to backing up a single drive, you need to back up the whole array); and you may be unable to read the drives on a different motherboard.
I'm sure there are more downsides.


Rather than obsessing over RAID, why not just deal with the problem of organising the files properly, or look for a software solution to combine the storage as you require (whether that be as basic as symlinking folders, or something more complex like Storage Spaces or StableBit DrivePool)?
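
On the symlink option, the idea is just to present folders living on different physical drives under one tree - a minimal sketch with hypothetical paths (on Windows, creating symlinks needs Developer Mode or an elevated prompt):

```python
# Pool folders from two physical drives under a single browsable tree using
# symbolic links (hypothetical paths; Windows needs Developer Mode or admin
# rights to create symlinks).
import os

POOL_ROOT = r"E:\Media"        # the one tree you actually browse
SOURCES = {
    "Photos": r"F:\Photos",    # physically on drive F:
    "Videos": r"G:\Videos",    # physically on drive G:
}

os.makedirs(POOL_ROOT, exist_ok=True)
for name, target in SOURCES.items():
    link = os.path.join(POOL_ROOT, name)
    if not os.path.exists(link):
        os.symlink(target, link, target_is_directory=True)
        print(f"{link} -> {target}")
```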

I still don't understand the need to carry 12tb of data (especially "disposable" stuff like games that can be redownloaded) with you when travelling abroad, or even why you wouldn't just use a laptop and remotely access your data? :confused:

It certainly is a big downside and I will have to consider it, though with the cost of NVMEs considerably lower than it has previously been - at least when I bought mine - there wasn't a tremendous cost difference. Wouldn't backing up the whole array be necessary if using HDDs as well? Personally, I've taken to manual backups after I lost some files backing up with a certain program. If the new motherboard were the exact same model, would it still potentially be unable to read the drives? I will certainly consider dividing my files onto 4tb non-RAIDed drives if RAID is more trouble than it's worth. 8tb NVMEs can't come down in price fast enough.

If there are programs that make separate drives look like one single drive and have Windows read them that way, I would be interested in that option and would like to know more. As for 'dealing with the problem of organising the files properly', that's my boulder to push to the top of the hill, dude. In regards to anything 'the cloud' related, I thoroughly dislike my access to my files being dependent on my internet connection and slowed down on that basis. Tried it once, never again. I just want to have my own stuff and don't necessarily trust it to a data centre/big tech company.

If I have my games installed or uninstalled before travelling, what difference does it make if they're installed on a device the size and almost the weight of a stick of gum? Why would I uninstall that? I've already bought the drives; I'm going to make use of them. I don't just travel abroad either; I've lived abroad long-term. I initially did only bring a laptop and an HDD in an external enclosure, but eventually got sick of being limited to it and bought a tower over there. The one I bought was rather limited, though, and not up to scratch compared with the computer I left in the UK. I did eventually seize upon the idea of just taking the essential components across with me in a reinforced carry case, but I would need an upgrade to be able to commit mostly everything to NVMEs. I would like to take the motherboard too, but seeing as it would probably only fit in hold luggage, I'm not sure I can safely take it across. Again, I hate 'the cloud', at least for my own purposes.
 