Why is 10 Gb Ethernet still so expensive?

Gigabit Ethernet has been ubiquitous for a decade, so why is 10 Gb Ethernet still so expensive? Is it a matter of patents? Or is gigabit seen as 'good enough'? Or something else?
 
It's a pain, isn't it? These days files are growing ever bigger as well. Some consumer motherboards do now come with 10Gbit, but they're few and far between. I would say we have another 5 to 10 years before it becomes the norm, unfortunately.
 
Lack of need and demand. Gigabit was overdue when it arrived and people were crying out for faster transfer speeds within their homes, where the existing 100Mb links clearly were a bottleneck. Gigabit is easily fast enough not to be noticeable in a home environment, or for single access ports within a corporate network.
 
There are other factors to consider. I looked at getting this in my Dell T20, but the cost put me off: a 10G card is around £100, and upgrading switches, of which there are few to choose from, is also costly.

The costly parts are going to be:
  • Switches - they need processing power to deal with that amount of data, and they generate extra heat, which means fans and noise that people don't want in a home environment.
  • Cables - Cat6 is becoming more mainstream, but it is still costly and can be difficult to work with, and fibre is very costly.
  • HDD limitations - can you get a HDD to write at 1.25GB/sec? You would need an SSD, and it would need to be several TB; is that capacity even possible, and if so, how costly would it be? On top of that, how would the SATA controller, motherboard and even the CPU cope with processing it all? (See the rough sums after this list.)
  • Internet speed, perhaps? Yes, you can get 1Gb/sec, but it is very limited. Perhaps once Openreach get past 330Mb/sec and their infrastructure becomes more capable.
I'm sure I've missed a few things out, but it needs all of this to become capable and mainstream, which will in turn increase demand, which will in turn drive down prices. I can just about remember when 10/100 was the networking standard: a 5400rpm IDE HDD was about all you could get, consumer switches couldn't go over 100Mb and internet was, at a push, 512Kb/sec.
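To put rough numbers on the drive bullet above, here's a quick back-of-the-envelope sketch in Python. The protocol overhead allowance and the sustained write speeds are ballpark assumptions for illustration, not benchmarks:

```python
# Rough sums behind the "can a drive keep up?" bullet above.
# Sustained write speeds are ballpark assumptions, not benchmarks.

def usable_mb_per_s(gbit_per_s, protocol_overhead=0.06):
    """Convert a nominal link speed in Gbit/s to rough usable MB/s,
    allowing a small percentage for Ethernet/TCP overhead."""
    return gbit_per_s * 1000 / 8 * (1 - protocol_overhead)

drives = {               # assumed sustained write speeds in MB/s
    "5400rpm HDD": 120,
    "SATA/AHCI SSD": 500,
    "NVMe SSD": 2500,
}

for link in (1, 10):
    need = usable_mb_per_s(link)
    print(f"\n{link}GbE needs roughly {need:.0f} MB/s to saturate the link:")
    for name, speed in drives.items():
        verdict = "keeps up" if speed >= need else "is the bottleneck"
        print(f"  {name:<14} ~{speed} MB/s {verdict}")
```

On those rough figures a single mechanical drive can just about saturate gigabit, but only NVMe gets anywhere near filling a 10Gb link.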
 
You're focusing on the wrong problem. 10Gb itself is not expensive; scaling up the rest of the system to actually be capable of using 10Gb can be.

In simple terms and using nice round numbers, let's call gigabit 100MB/s and 10Gb 1,000MB/s. I'm aware this is horrible technically, but I'm in a steam room and the point is the same. An average SSD manages 500MB/s in AHCI mode. Straight away you can see a problem: that SSD can't write data fast enough for that level of connectivity, nor can it read data fast enough to satisfy a client capable of writing it. That ignores the CPU resources and RAM required to send and receive it that quickly, which, as a quick Google and 5 minutes on YouTube will show you, is obscene relative to what you are doing.

So let's say you have 300GB to move. You need a machine with a high-end CPU, an NVMe SSD (or a horrible AHCI SSD RAID set-up) and a decent chunk of RAM; it'll take 5 minutes and the network kit costs you £30ish. Now what are you going to do? You're unlikely to have 50TB of local NVMe storage to be doing that regularly (£180/TB ish, as QLC won't do even with a fast buffer), or clients that will need to, or be capable of, running at 10Gb speeds. That suggests AHCI SSD storage in RAID (not actually that much cheaper, if at all) or mechanical drives, and again that's not going to end well. So what exactly is the usage scenario in the home? Outside of a DC or a high-end HL set-up I can see an argument for A.V. work, but realistically local NVMe storage usually makes more sense, and again you're talking about a tiny percentage of the market anyway.
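Putting that 300GB example into the same back-of-the-envelope form, using the post's own round numbers for the links and the 500MB/s AHCI figure (the 2,500MB/s NVMe figure is my own rough assumption):

```python
# How long a 300GB copy takes, and what actually limits it.
# All figures are rough assumptions for illustration only.

def copy_minutes(size_gb, link_mb_s, disk_mb_s):
    """Transfer time is governed by the slower of the link and the disk."""
    effective = min(link_mb_s, disk_mb_s)
    return size_gb * 1000 / effective / 60

SIZE_GB = 300
print(f"Gigabit, any SSD : {copy_minutes(SIZE_GB, 100, 500):.0f} min  (link-bound)")
print(f"10Gb, AHCI SSD   : {copy_minutes(SIZE_GB, 1000, 500):.0f} min  (disk-bound)")
print(f"10Gb, NVMe SSD   : {copy_minutes(SIZE_GB, 1000, 2500):.0f} min  (link-bound, the 5-minute case)")
```

The point being that 10Gb only pays off when both ends can actually feed it.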

So to do 10G between two machines is £30ish for a pair of cards and a cable; if you want a switch, budget £100ish for a new Mikrotik, or go old/noisy/inefficient with more ports and pay more. But other than very limited scenarios that would often be better dealt with using local NVMe storage in the first place, 10Gb is still a novelty for home users. The majority have no real need, let alone the desire or the resources to make serious use of it, and those that do have already got the kit and the bills to prove it, along with ear defenders and bald patches from pulling their hair out.
 
I'd love to know where a pair of 10Gb/s capable cards and cable can be had for £30.

The Asus XG-C100C 10GBASE-T PCIe card is £99.95 on here, and looking elsewhere the prices all seem around the same. The Dynamode or TP-Link 1Gb/s cards are just under £10 each on OcUK, and a cable is probably a few ££ on eBay. If 10G were that cheap I'd buy a couple of cards today.

As for switches, the Mikrotik CRS305-1G-4S+IN can be had for around £100, which gives 4x 10Gb/s on SFP+; the one Ethernet port is 1Gb/s.

I agree with what you put, however; there isn't really much point to 10G in a home environment.
 
So to do 10G between two machines is £30ish for a pair of cards and a cable; if you want a switch, budget £100ish for a new Mikrotik,

Those are auction-site prices, not OCUK, right? But I think you're underestimating the home environment. Take the example of two professional adults at home with a WSE 2016 box or NAS elsewhere in the house backing up their PCs and handling WSUS, as well as central file storage.

Even my own home setup here requires three switches.
 
10Gb on RJ45 is expensive new and I don't see it coming down anytime soon TBH; there aren't many decent high-port-count switches going cheap in RJ45 either.

The best and most cost-effective solution for 10Gb and faster right now is probably Mellanox cards; good comparison table here: https://community.mellanox.com/s/article/mellanox-adapters---comparison-table

Cheap used/refurbished cards are really easy to come by, as are used compatible switches.
 
@Quartz & @bledd Some interesting price differences there. 10G isn't something I'd done much research on. The last time I wondered about it was for my home setup, wondering what my UniFi kit could do (I never thought it was capable, I just wondered). When I looked into it, the fibre is still only 1G, so I did no more research. It was a thought to improve the link between two 24-port 250W switches, where I currently rely on link aggregation; I'm not sure that does much in a home environment anyway, I only used it because the cable was already run.

Are there different standards with SFP, and if so, will this be standardised at some point?

It's something I'll probably investigate when I'm bored at work, but after a bit of Google research the paranoia kicked in: the switches aren't cheap, and damaging them isn't something I intend to do.
 
SFP is a standard; whether switch vendors choose to lock down which optics they will enable is a different issue.

SFP+ is the 10Gbps version. Then you have QSFP+, SFP28, etc.
 
I don't use it at home yet; if 10GbE becomes cheaper, then I'll jump on it.

Hoping my next motherboard has it built in.

If you NEED it for a home lab, then the devices above are the cheapest option for up to 4 devices.
 
I'm not using 10Gb currently ... but I'm close to the point where I should be. I have a network switch which has 2x SFP+ ports, and all three NASes can take 1 or 2 SFP+ connections (currently they are all running 2x 1Gb Cat6 aggregated).

What I would need is an SFP+ switch and then some (vSphere 6.0/6.7) compatible cards for my Gen8 MicroServers and my DL380 G7.

The point wouldn't be to get full 10Gb speeds, but just to improve at least somewhat on the current situation and allow for multiple connections at 1Gb. I may do it at some point if I have some spare cash and can work out what to get.
 
The issue is that 10 Gbit switching is only in datacenters at the core and distribution layers; access switches are still 1 Gbit (and even then I still see some 100 Mbit switches!) in pretty much every environment I've seen. There are a few exceptions with workstation users, but that's not a typical use case. Also, the majority of home users won't see any benefit from going to 10 Gbit. It's frustrating, as I would, and it's cost prohibitive at the moment.
 
It's due to the infrastructure not being there here in the UK, compared to other countries that have it going to nearly every home.

Most of our network is still on old copper cable, and getting a leased line installed isn't cheap even with grants and other schemes to bring the cost down.
 
Those are auction-site prices, not OCUK, right? But I think you're underestimating the home environment. Take the example of two professional adults at home with a WSE 2016 box or NAS elsewhere in the house backing up their PCs and handling WSUS, as well as central file storage.

Even my own home setup here requires three switches.

Yes, they are server pulls from eBay or your local IT recycling outfit.

Please tell me more about the average home environment you think I'm underestimating? The overwhelming majority of home users have ISP-supplied routers and that's it: no structured data cabling, no wish or need to VLAN other than ISP-enabled hotspots, and they think shoving a router in a cupboard or some other stupid location should cover everywhere they want with Wi-Fi. 10Gb is literally of no use to people like this.

Your 'professional person' comment is just baffling. If 'professional persons' have a few PB of high-speed NVMe and appropriate workstation/server set-ups, then yes, 10Gb sounds ideal for them, but the point is the same. Those that need or would meaningfully benefit from it are a tiny, tiny minority, and if they've dropped that kind of coin on hardware, they'd already have spec'd 10Gb. Those that are borderline due to multi-client use will likely have link aggregation and large volumes of NVMe storage, or multiple storage pools, to justify it.

If you’re just in it for the internetz pointz, then that’s another story.
 
It's due to the infrastructure not being there here in the UK, compared to other countries that have it going to nearly every home.

Most of our network is still on old copper cable, and getting a leased line installed isn't cheap even with grants and other schemes to bring the cost down.
OP is talking about home/small office use, not the internet bearer.
 