Faster connection to Unraid server?

I'd like to improve the transfer speed between my primary desktop and my Unraid server. No problem if it's done with a direct secondary connection.

Does anyone have 2.5/5/10GbE working with Unraid?

Because of distance (c.10m) and cable routing (external), I'd prefer an RJ45 copper solution.

I'd prefer to spend < £100 and definitely no more than £200.

I was going to go cheap and cheerful with a couple of 2.5GbE NICs but the common RTL8125 based adapters don't appear to be supported on Unraid (yet).

5GbE is pretty rare and it's unclear about Unraid compatibility.

10GbE is either old, expensive, or both (and a bewildering choice).

The desktop has an x4 Gen4 PCIe slot (AMD X570) available. The Unraid server (Dell T20) has an x16 Gen3 slot free.
 
Unraid supports the usual 10Gb NICs (X520/X540 etc.) and the usual Mellanox ConnectX-2 or 3 kit. Two cards and either copper converters or RJ45 NICs should be doable around your budget. I wouldn't even bother with NBASE-T.
 
Doesn't Unraid effectively only have single disk speed? In which case I'd have thought 10 Gb/s would be a waste of money unless it's full of SSDs.
 
Doesn't Unraid effectively only have single disk speed? In which case I'd have thought 10 Gb/s would be a waste of money unless it's full of SSDs.

Not at all, you can saturate a gigabit link with a single mechanical drive sequentially, assuming you aren't calculating parity on writes (e.g. writes to a cache pool or an unassigned drive), or for reads. If, for example, you're using an AHCI SSD for cache/unassigned devices, you have 500MB/s or so; NVMe is 6-7x that, so way faster than a single 10Gb link will manage sequentially. Of course it's not that simple, as the protocol used and client capabilities come into this: you can't just throw a 10Gb link in and expect it to work at near 10Gb speeds OOTB.
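
As a rough sanity check on those numbers (the drive figures below are assumptions about typical sequential speeds, not measurements):

```python
# Back-of-envelope: which links can a single device saturate sequentially?
GIGABIT = 1_000 / 8    # ~125 MB/s theoretical for 1GbE
TEN_GIG = 10_000 / 8   # ~1250 MB/s theoretical for 10GbE

devices = {
    "7200rpm HDD (sequential)": 180,    # MB/s, assumed typical figure
    "AHCI/SATA SSD":            500,    # MB/s, as quoted above
    "NVMe SSD":                 3000,   # MB/s, roughly 6x SATA
}

for name, mb_per_s in devices.items():
    print(f"{name}: {mb_per_s} MB/s | "
          f"fills 1GbE: {mb_per_s >= GIGABIT} | "
          f"fills 10GbE: {mb_per_s >= TEN_GIG}")
```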
 
I use the Asus XG-C100C (10Gb) in my setup at home - looks like they can be picked up for less than £70 each now and should be supported in Unraid - they are PCIe 3.0 x4 / RJ45.
 
Not at all, you can saturate a gigabit link with a single mechanical drive sequentially, assuming you aren't calculating parity on writes (e.g. writes to a cache pool or an unassigned drive), or for reads. If, for example, you're using an AHCI SSD for cache/unassigned devices, you have 500MB/s or so; NVMe is 6-7x that, so way faster than a single 10Gb link will manage sequentially. Of course it's not that simple, as the protocol used and client capabilities come into this: you can't just throw a 10Gb link in and expect it to work at near 10Gb speeds OOTB.

Fair enough.

I use the Asus XG-C100C (10Gb) in my setup at home - looks like they can be picked up for less than £80 each now and should be supported in Unraid - they are PCIe 3.0 x4 / RJ45.

You can get an Intel X520 for around £30-40 or so.
 
My Unraid server has a pair of mirrored 1TB MX500 SATA SSDs, so there's bandwidth to be exploited.

I was looking at X540 adapters but wasn't sure how they'd manage in an x4 slot (although I don't need anywhere near the full 10GbE bandwidth).

The only cheap X520 adapters I've seen are SFP+. By the time you've added the transceivers, there's nothing saved.

I've got a few things to investigate now, including the QNAP adapters that I'd overlooked.
 
The X540-T2 is generally the sweet spot for straight RJ45; the X550 supports NBASE-T, as does the X710, but you're going to spend a lot for no real gain.
 
Yeah, this is trotted out in every thread about 10GbE - but it's not for everyone: it was first released in 2009, it's PCIe Gen 2, and it only comes in x8 slot flavours.

They are cheap though

It's trotted out because they are inexpensive, the drivers are stable and they do what they are supposed to. Yes, it's a PCIe 2 card and physically x8, but the OP is doing P2P, so a pair of T1s will be perfectly fine in an x4 slot with (roughly) 2GB/s of bandwidth available. Call me strange, but excluding ITX builds, I don't think I have owned a board with fewer than 2 - and if we exclude mining, fewer than 3 - x16 slots, though obviously some ran at x8 electrically.

Buying a NIC that can only do NBASE-T is just a con. Firstly, Realtek are still crap - they're especially crap under anything that isn't Windows - and Intel have screwed the first two steppings of the i225, so it's actually quite difficult to buy a NIC that does 2.5Gbit and is worth using, except the X550 or X710, which both do 10Gb anyway.
 
And? It does line speed, there are stable drivers for nearly all operating systems and it's cheap.
Well - if you've only got an x4 slot free (as in the OP) it simply won't fit :confused:

I'm not saying it's a bad NIC or knocking it as a choice - it will be fine for many users, but it's not going to work for everyone?

Edit: Will be OK in this case if the mobo has an open-ended x4 slot (not sure if this would be the case if trying to use both ports, but shouldn't matter here I suppose).
 
It's trotted out because they are inexpensive, the drivers are stable and they do what they are supposed to. Yes, it's a PCIe 2 card and physically x8, but the OP is doing P2P, so a pair of T1s will be perfectly fine in an x4 slot with (roughly) 2GB/s of bandwidth available. Call me strange, but excluding ITX builds, I don't think I have owned a board with fewer than 2 - and if we exclude mining, fewer than 3 - x16 slots, though obviously some ran at x8 electrically.

Buying a NIC that can only do NBASE-T is just a con. Firstly, Realtek are still crap - they're especially crap under anything that isn't Windows - and Intel have screwed the first two steppings of the i225, so it's actually quite difficult to buy a NIC that does 2.5Gbit and is worth using, except the X550 or X710, which both do 10Gb anyway.

The Asus NIC I mentioned is based on the Aquantia AQC107 chip and can do 1/2.5/5/10Gb - support on the BSDs is crap - but Linux / Windows is OK.

Intel is still the best for NICs IMO (latest 2.5GbE parts excluded) but they are expensive new.
 
From a compatibility and cost point-of-view, Intel X540 is looking like the most sensible option at the moment. Unraid compatibility for both the Intel and Realtek 2.5GbE options appears to be lacking and they both have very mixed reviews.

The PCIe slots I'll be using are all x16 physical length. The x4 electrical limitation in the desktop shouldn't be a problem for my use-case as long as it'll work (if it follows the PCIe rules it should).

I was just trying to work out how much of a limitation an x4 slot would be. Four lanes of PCIe 2.1 is 16Gbit/s? Enough for my needs but still short of what an X540-T1 would need running at full rate in both directions? Having the full eight lanes available (32Gbit/s?) seems to fall short of what an X540-T2 could theoretically consume?
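
A quick sketch of that arithmetic, assuming roughly 4Gbit/s usable per PCIe 2.x lane (5GT/s less 8b/10b overhead) and a 10GbE port running flat out in both directions:

```python
# Rough PCIe slot bandwidth vs what the X540 could theoretically consume.
LANE_GBPS = 4            # usable Gbit/s per PCIe 2.0/2.1 lane (assumption, other overheads ignored)
PORT_FULL_DUPLEX = 20    # Gbit/s for one 10GbE port sending and receiving at full rate

for lanes in (4, 8):
    slot = lanes * LANE_GBPS
    print(f"x{lanes} slot: {slot} Gbit/s vs "
          f"X540-T1 worst case {PORT_FULL_DUPLEX}, X540-T2 worst case {2 * PORT_FULL_DUPLEX}")

# x4: 16 Gbit/s - short of one port at full duplex, but plenty for one direction at a time.
# x8: 32 Gbit/s - short of both ports at full duplex, rarely a problem in practice.
```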

Are there any firmware gotchas I should be aware of with Dell/HP/etc. NICs?
 
I was going to go cheap and cheerful with a couple of 2.5GbE NICs but the common RTL8125 based adapters don't appear to be supported on Unraid (yet).

I am using RTL8125 based 2.5GbE NICs for connecting my PC with Unraid, they're working absolutely fine.

Didn't need to do anything special either - just plugged them in and they were recognized instantly on both systems.

Thought I had to use a crossover cable because I was directly connecting two PCs without a switch in between, but it turns out a normal patch cable is fine.

Unraid version 6.8.3 and Windows 10 20H2.
 
The Asus NIC I mentioned is based on the Aquantia AQC107 chip and can do 1/2.5/5/10Gb - support on the BSDs is crap - but Linux / Windows is OK.

Intel is still the best for NICs IMO (latest 2.5GbE parts excluded) but they are expensive new.

You nailed it in the first sentence: if it hasn't got universal support and doesn't work across the board, it's not a great NIC. Intel screwed the drivers on the i2x series under Linux a while back - I have to disable TSO/GSO to get more than 500Mbit out of one of my remote boxes on upload (down was fine), while the older gen stuff runs perfectly. At the moment the closest thing to a good 10Gb NIC that supports NBASE-T is the X710, followed by the X550; neither of those are cheap - you have to pay more for support of arguably pointless interim standards.

I am using RTL8125 based 2.5GbE NICs for connecting my PC with Unraid, they're working absolutely fine.

Didn't need to do anything special either - just plugged them in and they were recognized instantly on both systems.

Thought I had to use a crossover cable because I was directly connecting two PCs without a switch in between, but it turns out a normal patch cable is fine.

Unraid version 6.8.3 and Windows 10 20H2.

Unless it's still the '90s, MDI-X is a thing and you can use any cable.

The problem with Realtek is that they're the soft modem of the NIC world: the drivers are closed source and they don't use things like hardware offloading to make hitting the stated numbers easy. For it to be "perfectly fine" I would expect iperf to show maximum throughput consistently without significantly increasing CPU usage, and that's not happening.
 
It depends on the context and what you really want to spend. Let's assume you get 2Gb/s out of a pair of RTL8125s for £30ish each, or somewhere around 8Gb/s from a pair of Intel X540-T2s for £70 each. Is getting 4x the performance, plus decades of proven reliability (the i225/recent driver fiasco aside), worth 2.3x the money? If not, and you're OK with Realtek and happy with the performance (it might be the first RTL NIC that's OK?), then it's a reasonably cheap budget upgrade; otherwise, buy Intel.

You could wait for the B3 stepping of the i225 to hit retail and hope they didn't **** it up again, but I would expect those to be circa 2x the price of the Realteks at least, which puts you firmly back in X540-T2 territory. Either way, there's no wrong answer. Oh, and Unraid 6.8.3 onwards supports the RTL8125 from what I read.
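
To put rough numbers on that comparison (card prices and achievable throughput are the assumptions above, not quotes):

```python
# Price/performance sketch using the assumed figures from the post above.
options = {
    # name: (cost per card in GBP, assumed usable throughput in Gbit/s)
    "Pair of RTL8125 (2.5GbE)":      (30, 2),
    "Pair of Intel X540-T2 (10GbE)": (70, 8),
}

for name, (card_cost, gbps) in options.items():
    pair_cost = 2 * card_cost
    print(f"{name}: £{pair_cost} total for ~{gbps} Gbit/s "
          f"(£{pair_cost / gbps:.1f} per Gbit/s)")

# Realtek: £60 for ~2 Gbit/s (£30.0/Gbit); Intel: £140 for ~8 Gbit/s (£17.5/Gbit).
# i.e. roughly 4x the throughput for about 2.3x the outlay.
```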
 