10Gb Network

I did initially have issues where I was only getting a max of 11GB/s speeds between the PC and NAS. Through help on here I switched out the Cat 7 cable for a Cat 6 and now I get 120 MB/s.

Your numbers look wrong. Assuming the 11GB/s is actually supposed to be 1Gb/s, that translates to circa 120MB/s, which is what you state later, i.e. you have the same number before and after the change. 10Gb/s should be hitting 800MB/s or above depending on the protocol, or you've wasted your time/money and/or made poor server/client choices. I've always maintained that NVMe drives and high-end CPUs are a requirement for 10Gb; AHCI SSDs will limit you to sub-600MB/s and mechanical drives just can't keep up.
 
The 11GB/s was almost certainly meant to be 11MB/s. It was a Gigabit connection that was dropping down to 100Mbps because of a dodgy cable.
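The arithmetic backs that up (rough figures, ignoring exact protocol overhead): 100 Mb/s ÷ 8 = 12.5 MB/s on the wire, which works out at around 11 MB/s of actual file transfer; 1 Gb/s ÷ 8 = 125 MB/s, or roughly 110-118 MB/s in practice; and 10 Gb/s ÷ 8 = 1,250 MB/s, so you'd want to see somewhere around 1 GB/s if the storage at both ends can keep up.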
Most definitely a typo, and I do have a very high-end PC which would be able to take full advantage of a 10Gb network if I decide to go down that route. But as of now I am thankful to the guys for helping me go from 11 MB/s to 120 MB/s by just changing a cable :D
 
I'm about to order the bits needed to take my equipment in the study to 10GbE. Can someone just confirm this all looks okay?

Switch
4 x NIC

In terms of the equipment it is as follows:
  • Synology DS1618+
  • i7-9700K based server
  • AMD 2700X PC
  • AMD 2700X PC
They will all be in the same room and within 3m of the switch.
 
Yeah, it will be okay for the Windows 10 PCs: they should auto detect and away we go. The NAS does not officially support it, but plenty of people say they have it working. I'd prefer a switch that has rear ports for neatness, but I can't see any for around £100.
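One thing worth doing once it's all plugged in is checking each link has actually negotiated at 10Gb rather than dropping back to 1Gb on a marginal cable. On the Windows PCs the adapter status page shows the link speed; on anything Linux-based you can SSH into, something like this works (assuming ethtool is installed, and the interface name here is just an example):

```bash
# Check the negotiated speed of the 10Gb interface (eth4 is an assumed name)
ethtool eth4 | grep -i speed
# Expect "Speed: 10000Mb/s"; "1000Mb/s" means it has fallen back to gigabit
```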
 
I've just set this up using a couple of dirt cheap ConnectX-2 SFP+ cards at home (£30 for the pair and cable off the bay) - one in my NAS, an N54L running Ubuntu 16.04 with RAID5 using mdadm.
The other is in my desktop running Windows 10.
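For anyone curious about the NAS side, this is roughly the shape of the mdadm setup - a sketch rather than exactly what I ran, and the device names and mount point are placeholders, so check yours with lsblk first:

```bash
# Create a 3-disk RAID5 array (assumed device names)
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

# Watch the initial sync
cat /proc/mdstat

# Filesystem and mount point (ext4 here; adjust to taste)
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /mnt/raid5
sudo mount /dev/md0 /mnt/raid5

# Save the array config so it assembles on boot
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
```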

Connected the PC to the NAS directly using a DAC which came bundled with the cards, removing the need for a 10Gb switch, but I would go for the MikroTik if I needed one.
Both PC and NAS are also connected via a 1Gb switch, as there's other stuff which needs connectivity to the NAS, so I needed to think about how storage was accessed. Quite simple in the end: just a case of mapping a network drive on the PC using the IP of the 10Gb NIC in the NAS.
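Roughly what the direct-link config looks like, as a sketch; the addresses and interface name here are made up for illustration, and on Ubuntu 16.04 you'd want the static IP in /etc/network/interfaces rather than set by hand so it survives a reboot:

```bash
# On the NAS (Ubuntu): give the 10Gb interface a static IP on its own small subnet
# (enp3s0 is an assumed interface name - check with "ip link")
sudo ip addr add 10.10.10.1/24 dev enp3s0
sudo ip link set enp3s0 up

# On the Windows PC, set a static IP in the same subnet (e.g. 10.10.10.2) on the
# ConnectX-2 adapter, then map the share against the NAS's 10Gb address:
#   net use Z: \\10.10.10.1\share /persistent:yes
```

The 1Gb connection stays in place for everything else; the only trick is making sure the mapped drive points at the 10Gb interface's IP rather than the NAS's normal address.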

Performance over the network has improved: transfers previously saturated the 1Gb link but are now only bottlenecked by the storage in the NAS (3-disk RAID 5, so read speeds have increased a lot).

Actual network performance I see testing with iperf is about 5Gbps in one direction and 1.8Gbps in the other, so I think that's a limitation of the CPU in the N54L or possibly a configuration issue. I haven't investigated much further given the storage is the bottleneck, so that's the next task.
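For anyone wanting to repeat the test, this is the sort of thing I mean (iperf3 syntax here, which is what I'm assuming; the older iperf 2 flags differ slightly), using the same example address as above:

```bash
# On the NAS: run the server side
iperf3 -s

# On the PC: test PC -> NAS for 30 seconds, then the reverse direction with -R
iperf3 -c 10.10.10.1 -t 30
iperf3 -c 10.10.10.1 -t 30 -R

# Try a few parallel streams - if the total goes up noticeably, a single
# core on the N54L is probably the limit for one stream
iperf3 -c 10.10.10.1 -t 30 -P 4
```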
 
This is basically where I started several years back on the 10Gb bandwagon, with a lot of reading on SNB, and r/homelab has a lot of resources from people who have done similar. You're right about your bottlenecks: the N54L CPU, the mechanical array, AHCI SSDs etc. You've scored the easy win with what you've got so far, but from here on it gets more expensive to scale up/out without replacing the N54L and moving to a switched solution to expand with additional clients.

Moving to a switched solution offers other advantages (if done properly, i.e. not a single gigabit uplink): your 10Gb server can serve multiple gigabit clients at full speed. But not many home users regularly move a few TBs of data in a time-sensitive manner, or have the budget to warrant it.

*edit* Forgot to add: my current solution is to virtualise, which gives almost all of the speed advantages without the physical infrastructure and the heat/power/noise to deal with.
 