Which motherboard for 9800X3D?

It's 10x faster and I've become accustomed to the speed.



I got them cheap.
You can pick up used ConnectX-4 cards nowadays for £80-100 that support 10/25Gb over DAC cables or SFP transceivers (if you wanted to use Ethernet). The HPE 640FLR is one example. Why stop at 10? :D They even support RDMA for crazy fast transfer speeds with CPU offload.

You just need a board with PCI-E 3.0 x4 in the fifth slot. That's quite common.
 
Moar speed! :D

It's not just a question of NICs. There's the switches too and those aren't that cheap.
Nothing makes you buy faster switches like having faster NICs! That's the best benefit of the 10/25 NICs - they put you on the upgrade path towards 25 but still work at 10. This was their purpose.

I've now gifted you this information and you'll break one day. You need to set up RoCE using DCB for RDMA. Drop me a PM if you need some help when you give in. The transfer speeds will blow you away if you have NVMe storage on both the Tx and Rx ends!
 
You just need a board with PCI-E 3.0 x4 in the fifth slot. That's quite common.
For 25Gb you need a PCIe 3.0 x8 slot, if I'm not mistaken.
 

Unless you have a large number of users that utilise the bandwidth, this is just wasting money. The average user would be hard pressed to saturate a 2.5Gb network, let alone 25Gb; I just don't see the point for the average user.
 
Of course it is. 10Gb is already daft. I was just fueling the OP's overkill desires.

Yes, for 25Gb. The 10/25 NICs can run 10Gb at PCI-E 3.0 x4 and 25Gb at x8.
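A quick sanity check on that slot arithmetic (a sketch in Python; the 128b/130b encoding maths is standard PCIe 3.0, but the dual-port-card assumption is mine, not from the thread):

```python
# Why a 10/25Gb NIC wants PCIe 3.0 x8: per-lane usable rate is
# 8 GT/s scaled by 128b/130b encoding, ~7.88 Gb/s per lane.
PCIE3_GBPS_PER_LANE = 8 * 128 / 130

def pcie3_bandwidth_gbps(lanes: int) -> float:
    """Usable PCIe 3.0 bandwidth in Gb/s for a given lane count."""
    return PCIE3_GBPS_PER_LANE * lanes

def enough_for(link_gbps: float, lanes: int) -> bool:
    """True if a PCIe 3.0 xN link can feed traffic at link_gbps."""
    return pcie3_bandwidth_gbps(lanes) >= link_gbps

print(f"x4 ≈ {pcie3_bandwidth_gbps(4):.1f} Gb/s, feeds 10Gb: {enough_for(10, 4)}")
# A dual-port card running both ports at 25Gb needs ~50 Gb/s:
print(f"x4 ≈ {pcie3_bandwidth_gbps(4):.1f} Gb/s, feeds 2x25Gb: {enough_for(50, 4)}")
print(f"x8 ≈ {pcie3_bandwidth_gbps(8):.1f} Gb/s, feeds 2x25Gb: {enough_for(50, 8)}")
```

Note x4 (~31.5 Gb/s) would on paper cover a single 25Gb port; it's running a dual-port card flat out that pushes you to x8.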
 
One thing of note is that the X870E boards tend to have been designed and manufactured differently from their older X670E brothers. What fails to work on X670E boards can be stable on an X870E board (e.g. DDR5 at 128GB+ running at 4800MHz+ instead of only 3600MHz).

As such, if you're choosing between something like the Asus ProArt X670E and the X870E, there's a little more than just lane sharing and USB4 making up the difference. Also, the X670E was discovered to have a near-identical lane-sharing issue, just in an area that's less noticed in general use, so lane sharing by default isn't exclusively an X870E thing.

As for 10g onboard, I personally went for it (have both the Proart X670E and the X870E) because then it means:

1. You can WOL the system with just the one connection (direct to 10G). Some add-in cards have trouble doing this with normal setups and thus need another onboard NIC to wake from first.

2. If your case has fan mounts on top of the PSU shroud, you can fit fans there for more cooling without blocking the bottom PCIe slot, which might be in use by something else (like a 10G/25G card).

3. More stuff onboard means less need for extra add-in cards unless you absolutely need them.
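On point 1: WOL is just a UDP broadcast of a "magic packet", so you can test whether the onboard NIC wakes the box without any vendor tools. A minimal Python sketch (the MAC address shown is a placeholder, substitute your NIC's):

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6x 0xFF then the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send the magic packet as a UDP broadcast on the local network."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

# wake("aa:bb:cc:dd:ee:ff")  # placeholder MAC; works only if the NIC has WOL enabled
```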
 
For folks scratching their heads at why anyone needs 10G: it's for NAS/servers and local network transfers. The now-standard 2.5G tops out below 300MB/s, while 10G allows transfers faster than 1GB/s. For folks using network storage and NAS, the network is often the bottleneck for transfer speeds. I'm looking into building a NAS/server myself and realise how important network speed is for moving the many terabytes of data I have.
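Those figures fall out of simple arithmetic: divide line rate by 8 and knock a bit off for framing. A rough Python sketch (the ~6% overhead figure is my assumption for Ethernet + IP + TCP framing; real numbers vary with protocol and tuning):

```python
# Approximate usable file-transfer ceiling for common Ethernet speeds.
OVERHEAD = 0.06  # assumed protocol overhead fraction, not a measured value

def approx_throughput_mbytes(gbps: float) -> float:
    """Approximate usable transfer speed in MB/s for a given link rate in Gb/s."""
    return gbps * 1000 / 8 * (1 - OVERHEAD)

for speed in (1, 2.5, 5, 10, 25):
    print(f"{speed:>4}GbE ≈ {approx_throughput_mbytes(speed):.0f} MB/s")
```

With these assumptions 2.5GbE lands just under 300MB/s and 10GbE just under 1.2GB/s, matching the rule-of-thumb numbers above.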

For the question at hand, there aren't many boards with built-in 10G: only the top-end Asus/Gigabyte/MSI boards, which cost too much, or the Asus ProArt X870E/X670E (still usually pricey) have it. Otherwise your best bet is to get a board with an additional x4 slot at PCIe 3.0 or better, which strangely is less common on the newer boards, and stick a 10G NIC in it. If you want the USB-C ports and PCIe 5.0 slot of the newer X870, there are a lot of options. It really depends on what else you want on the board.

If I had to choose an X870, it would be either the MSI MAG X870 Tomahawk or the Gigabyte X870 Aorus Elite. Both have an additional PCIe 4.0 x4 slot (plus an extra PCIe 3.0 x2/x1 slot), plenty of M.2 slots with at least a couple of Gen 5, and just good all-round connectivity at a decent price. Heck, the MSI's built-in LAN is 5G; not quite 10G, but still better than the 2.5G found on most other boards that don't cost an arm and a leg.

Otherwise there are more affordable bang-for-buck options among the X670 and B650 boards.
I did my own 7800X3D build this year on an Asus Strix B650E-E, which seems to tick your boxes: all the connectivity I could want at a decent price.

For 10G NICs, second-hand is usually best value. I saw on MM that someone is selling a server motherboard with a dual 10G NIC (though it's SFP) for super cheap. Even looking at new cards, it may be possible to find some for cheap, though the cheaper ones are usually SFP rather than RJ45 for normal Cat/Ethernet cables.
 
I have an Asus TUF X670E-PLUS WiFi with a 4090 plus an X550 in the bottom slot, and it works perfectly, giving two clear slots between the GPU and the NIC, which is plenty.

It also has three M.2 slots, all of which I use.
 
@Quartz

Also worth mentioning that a lot of boards share some PCIe lanes between one of the M.2 slots and a PCIe slot. One of the reasons I went with this specific board is that the bottom PCIe, into which the NIC is inserted, doesn't share lanes with any of the M.2 slots.

Worth checking this on any boards you're considering.
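On Linux, one quick way to confirm a slot hasn't silently dropped lanes to sharing is `sudo lspci -vv` and the LnkSta line for the NIC. A small Python helper to pull the negotiated speed and width out of that line (the sample string is illustrative, not live output from my board):

```python
import re

def parse_lnksta(line: str):
    """Extract (speed in GT/s, lane width) from an `lspci -vv` LnkSta line."""
    m = re.search(r"Speed\s+([\d.]+)GT/s.*Width\s+x(\d+)", line)
    if not m:
        raise ValueError("not a LnkSta line")
    return float(m.group(1)), int(m.group(2))

# Sample LnkSta line in the format lspci prints (illustrative):
sample = "LnkSta: Speed 8GT/s (ok), Width x4 (downgraded)"
speed, width = parse_lnksta(sample)
print(speed, width)  # 8.0 4
```

If the width comes back lower than the slot's physical size (e.g. x1 in an x4 slot), lane sharing with an M.2 slot is a prime suspect.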
 