10Gbps home network, on the cheap. Sort of.

Inspired by the series of articles by David Hunt, I too decided I fancied having a crack at a super-fast home network without spending a fortune. 10G Ethernet cards are still £200-300 a pop on eBay, which makes even a simple PC-to-PC connection quite expensive. On the other hand, older-generation SDR and DDR (10Gbps and 20Gbps respectively) InfiniBand adapters are available for a fraction of the cost. The downside is the extra hassle to set them up and the restrictions on cabling (either very short and a bit pricey, or a useful length and very pricey).

So to whet your appetite here's a benchmark I just took:

[screenshot: disk benchmark over the InfiniBand link]


To achieve this I've used:

  • 2x Mellanox InfiniHost III MHEA28-XT dual port 10Gbps (£45 each)
  • 1x Zarlink ZLynx 30m CX4-CX4 20Gbps fibre optic cable (£170 :eek:)
  • 1x Dell PE2950 (2x E5160, 12GB) running Windows Server 2008 R2 and Microsoft iSCSI Target, and a 3.5GB RAM disk
  • 1x Phenom II 965 workstation running Windows 7 x64 and Microsoft iSCSI Initiator

The cost of the cable I had to buy was a killer, but cheaper cables are available! The trouble I have is the distance between the server and the workstation. Although it's only about 13 metres and some 15m cables are available, the cost difference between 20m and 30m is so little it makes more sense to get the longer one and have some options down the road. There is effectively zero signal degradation over fibre (so the extra length doesn't matter) whereas the short copper cables suffer quite quickly and it's rare to find them even as 'long' as 10m.

[photos: the CX4 fibre cable and the adapters installed]


As David explains in his articles, InfiniBand works using RDMA, which basically means the adapters read/write directly to the system memory of each connected machine, completely bypassing the protocol layers that slow things down. However, there is currently no way to achieve this in a Windows-only environment (such as mine); you need a Linux system to create the SRP target to get true high-speed RDMA connectivity. Us Windows folk are stuck with IP over InfiniBand (IPoIB), which emulates TCP/IP and allows all the normal networking features. The downside is that it's all handled by the host CPU, so a lot of the benefit of the 10Gb adapters is lost - for now.
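
One upside of IPoIB looking like a normal network interface is that any ordinary TCP socket code runs over it unchanged, so you can sanity-check the raw link speed without iSCSI in the picture. Here's a minimal iperf-style sketch of the kind of thing I mean - the address and port are placeholders, not my actual setup:

```python
# Minimal single-stream TCP throughput test (iperf-style sketch).
# Run "python tptest.py recv" on one machine, "python tptest.py send"
# on the other, using the IP address assigned to the IPoIB adapter.
# ADDR/PORT below are placeholders.
import socket
import sys
import time

ADDR, PORT = "10.0.0.1", 5201   # assumed IPoIB interface of the target
CHUNK = 1 << 20                 # 1 MiB per send/recv
TOTAL = 4 << 30                 # move 4 GiB in total

def receiver():
    srv = socket.socket()
    srv.bind(("", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    buf = bytearray(CHUNK)
    got = 0
    t0 = time.time()
    while got < TOTAL:
        n = conn.recv_into(buf)
        if n == 0:
            break
        got += n
    secs = time.time() - t0
    print(f"{got / 2**20:.0f} MiB in {secs:.1f}s = {got / secs / 2**20:.0f} MB/s")

def sender():
    s = socket.create_connection((ADDR, PORT))
    data = b"\0" * CHUNK
    for _ in range(TOTAL // CHUNK):
        s.sendall(data)
    s.close()

if __name__ == "__main__":
    receiver() if sys.argv[1:] == ["recv"] else sender()
</span>```

If a single stream lands well below the benchmark figures, the ceiling is TCP/CPU overhead on the hosts rather than anything iSCSI is doing.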

So at the moment I'm quite happy to have 'only' 400MB/s sequential read as well as some other pretty good numbers. In the future I will probably create a Linux system, but at least there is headroom there if I find that the current speed isn't sufficient. My next step is to buy some disks for a RAID array on the server as the RAM disk is only for testing and proof of concept.

Also worth noting: Windows Server 2012 and SMB3 include support for RDMA on Mellanox's latest HCAs, but although those run at 40Gbps+, they cost as much as, if not more than, 10G Ethernet adapters. Microsoft and Mellanox look unlikely to add support for older (cheaper) IB adapters, unfortunately, so Linux is the way to go for the best possible speed at the moment.

Hope that's interesting and/or useful :)
 
You actually only need to run Linux at the target end, i.e. the end with the storage. Unfortunately my PE2950 is a rev. 2, which doesn't support Directed I/O, meaning I can't pass the InfiniBand adapter through to a VM (which rules out running ESXi). Since I need to use the system as a Windows primary domain controller and a few other things, I can't install Linux on the bare metal either.

What I am considering, though, is recycling my workstation kit, which is AMD but does support passthrough (via IOMMU). I need to do a bit more testing with ESXi 5.1 to see how the onboard RAID is presented and whether it can be passed through, or if individual disks can be passed through and RAIDed in software with Linux. It would be a nice excuse to upgrade this system to an i7...

What I have to keep reminding myself at the moment is that what I have now is well in excess of 4x the 1GbE speeds I was getting before :)
 
Some new numbers using StarWind iSCSI target:

[screenshot: StarWind iSCSI target benchmark]


Quite a bit faster than the Microsoft iSCSI target; this is probably as fast as I'll be able to go with IPoIB. LAN Speed Test reported 850MB/s write and 930MB/s read, but real-world file transfers over SMB capped out at around 130-140MB/s. Anyone know what LAN Speed Test is really measuring?
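
My own suspicion - and it's only a guess - is that it just times a file write/read against the share, which on Windows can end up measuring the system cache rather than the wire or the disks. A rough sketch of how you could test that theory, with the UNC path as a placeholder:

```python
# Rough sketch: time a large file write to the share with and without
# forcing the data out of the OS cache. A big gap between the two numbers
# means the "fast" figure is mostly cache. TARGET is a placeholder path.
import os
import time

TARGET = r"\\server\share\speedtest.bin"   # placeholder UNC path
SIZE = 2 << 30                              # 2 GiB test file
CHUNK = b"\0" * (4 << 20)                   # 4 MiB writes

def timed_write(flush):
    t0 = time.time()
    with open(TARGET, "wb") as f:
        for _ in range(SIZE // len(CHUNK)):
            f.write(CHUNK)
        if flush:
            f.flush()
            os.fsync(f.fileno())   # FlushFileBuffers on Windows
    return SIZE / (time.time() - t0) / 2**20

print(f"cached : {timed_write(False):.0f} MB/s")
print(f"flushed: {timed_write(True):.0f} MB/s")
os.remove(TARGET)
```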
 
Nice! Personally I'm waiting for Light Peak/Thunderbolt PCI-E cards to arrive, as I don't really have a need for 10Gb at the moment; it'd be nice, but I'm not doing anything particularly time-sensitive yet. I have some vague plans to build up a nice SAN at some point though, maybe ZFS with a decent amount of SSD cache, and then I can put all my games and apps on it.
 
Good numbers with StarWind! BTW, you can expect pretty similar ones from a spindle-based target with a big write-back cache in front of it (spoofing).
 
I've got 1TB of SSDs arriving tomorrow so we'll see how they get on in RAID0 - should be enough to saturate a 10Gb line ;)
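
For anyone wondering what 'saturating' takes: 10Gbps is 1.25GB/s before protocol overhead, so a stripe of typical SATA SSDs gets there surprisingly quickly. A back-of-envelope check (the per-drive figure and overhead factor are assumptions):

```python
# Back-of-envelope: how many striped SSDs to saturate a 10Gbps link?
LINK_MBPS = 10_000 / 8        # 10Gbps = 1250 MB/s raw
OVERHEAD = 0.90               # assume ~10% lost to IPoIB/iSCSI framing
PER_SSD_MBPS = 450            # assumed sequential throughput per drive

for n in range(1, 5):
    total = n * PER_SSD_MBPS
    mark = "<- saturates" if total >= LINK_MBPS * OVERHEAD else ""
    print(f"{n} drives: {total} MB/s {mark}")
```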

By the way, when I switched to the StarWind initiator as well as the target, the numbers... well, they increased a bit

[screenshot: StarWind target + initiator benchmark]


:cool:
 
I've been waiting to do the exact same with my NAS after reading the same article :)

Already have the cards - just waiting for a cable although mine is only costing £25 as I only needed a 3m length.

I was aware that IPoIB was pretty poor, but didn't realise you couldn't run SRP between two Windows machines, so I'm very interested in how you set up StarWind. Is it the free version you're running? Will it run between Windows 7 and Windows Home Server?
 
Good man :D

Well, after my findings I'd say IPoIB is nowhere near as limiting as I thought it was going to be, and clearly it makes a big difference what software is running on top of it. That said, I'm not really clued up on how NetworkDirect and Winsock Direct come into play; if SMB shares don't take advantage of those layers and StarWind does, that would go a long way to explaining the performance even over IPoIB.

Talking of such layers, make sure you follow the instructions in the OFED driver manual to get them installed.

I'm using the free StarWind target and initiator; both are free downloads from the StarWind website. You'll need to register for a free licence for the target. They have an active forum, so it's worth posting there if something doesn't work. My workstation is Win 7 x64, so I can confirm that end works, but I believe you should be OK with Home Server as well - they seem to pride themselves on the variety of systems it runs on, and I'm 99.9% sure Server 2003 would be fine.

Let us know how you get on :)
 
Thought I'd try StarWind across my normal 1Gb link just to see how it works. Might be my lack of experience, but I can't see how to set up a disk or folder as a target - raw disks aren't supported in the free version, and I could only get a RAM disk working.

Is there a way to do this with the free version?
 
Fantastic find and results - I've been banging on about wanting to get past 1Gb for ages, but there seems to be very little appetite for it for some reason (odd given the name of this forum!). I'm lucky enough that I only need about 2m of cable, so I should be able to make a substantial saving - will definitely be having a go! Time to do some reading :)
 
To answer the question above: with the free licence you can only create devices based on image files (.img), images that support snapshots and CDP, and RAM disks. I don't know if there is a performance penalty for using images versus raw disk/partition mapping.
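
If anyone with the paid version wants to put a number on that, a crude comparison is to time large sequential reads of a test file placed on each type of volume. A sketch, assuming hypothetical drive letters I: (image-backed) and R: (raw-mapped), each holding a pre-made multi-GB test file:

```python
# Crude sequential-read comparison between an image-backed iSCSI volume
# and a raw-mapped one. Drive letters and file names are placeholders;
# use files larger than RAM (or reboot between runs) to keep the OS
# cache from inflating the numbers.
import time

def seq_read(path, chunk=8 << 20):
    total = 0
    t0 = time.time()
    with open(path, "rb", buffering=0) as f:   # unbuffered reads
        while True:
            data = f.read(chunk)
            if not data:
                break
            total += len(data)
    return total / (time.time() - t0) / 2**20

for path in (r"I:\testfile.bin",    # volume backed by a .img file
             r"R:\testfile.bin"):   # volume backed by a raw mapping
    print(f"{path}: {seq_read(path):.0f} MB/s")
```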

LACP using 2 ports and 20Gbps? [I am, of course, totally kidding.]

It's not out of the question ;)
 
Why stop at 10Gbit :)

Although LACP is only really useful if you have multiple sources accessing the host simultaneously.

Exactly - your benchmark won't change (much) if you're using LACP, as it's more of a benefit for multiple TCP/UDP sessions, depending on how the LACP aggregation group is set up. Still, awesome and well played, OP - I doff my cap to you, sir.
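
To see the per-flow behaviour for yourself: LACP hashes each connection onto one physical link, so a single TCP stream never exceeds one link's speed, and you only approach the aggregate with several concurrent streams (and a favourable hash). A toy multi-stream sender, with placeholder address and ports, that you can pair with one listener per port (e.g. the receiver sketch earlier in the thread):

```python
# Toy multi-stream sender: N parallel TCP connections, payload split
# across them. Over an LACP bundle, one stream is capped at a single
# link's speed; more streams can spread across links if the hash allows.
# ADDR/PORTS are placeholders; run a listener on each port at the far end.
import socket
import threading
import time

ADDR = "10.0.0.1"                  # placeholder target address
PORTS = [5201, 5202, 5203, 5204]   # distinct ports help L4 flow-hashing
PER_STREAM = 1 << 30               # 1 GiB per stream
CHUNK = b"\0" * (1 << 20)

def stream(port):
    s = socket.create_connection((ADDR, port))
    for _ in range(PER_STREAM // len(CHUNK)):
        s.sendall(CHUNK)
    s.close()

t0 = time.time()
threads = [threading.Thread(target=stream, args=(p,)) for p in PORTS]
for t in threads:
    t.start()
for t in threads:
    t.join()
secs = time.time() - t0
print(f"{len(PORTS)} streams: {PER_STREAM * len(PORTS) / secs / 2**20:.0f} MB/s aggregate")
```

Python itself may become the bottleneck before 10Gb, mind, so treat it as an illustration of the flow behaviour rather than a precision benchmark.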
 
1) A version with flash being used as a cache should be available soon :)

2) Unfortunately you cannot use the StarWind initiator in production. Microsoft does not bless the monolithic SCSI port model it's based on (support from MS could be withdrawn).
 
I'm theorising about how to make my setup work the way I want it.

I will have a six-drive hardware RAID10 array, and I want all the clients in the house to access the same data on it:

Main PC connected directly by 10Gbit using Infiniband
Laptop, Media Player, Squeezebox connecting via switched Gbit Ethernet

So the NAS will have an iSCSI target consisting of the entire array space.
The main PC will attach to the target and create an iSCSI drive.
Then the NAS will also attach to itself (via the loopback address), create an iSCSI drive, and share the folders out to the rest of the network (SMB or NFS).

Will that work? There shouldn't be any issues with multiple initiators writing to the same files, as it's just me using any of the clients.
 
Sorry if I'm completely off-topic, but would this be possible using two 4Gb Fibre Channel cards? I was given a pair and had no real use for them, but this could prove an interesting project - although it's way above my knowledge, so I'd need to be pointed in the right direction!
 