Inspired by the series of articles by David Hunt, I too decided I fancied having a crack at a super-fast home network without spending a fortune. 10G Ethernet cards are still £200-300 a pop on eBay, which makes even a simple PC-to-PC connection quite expensive. On the other hand, older-generation SDR and DDR (10Gbps and 20Gbps respectively) InfiniBand adapters are available for a fraction of the cost. The downside is the extra hassle of setting them up and the restrictions with cabling (copper runs are very short and a bit pricey; fibre runs of useful length are very pricey).
So to whet your appetite, here's a benchmark I just took:
To achieve this I've used:
- 2x Mellanox InfiniHost III MHEA28-XT dual port 10Gbps (£45 each)
- 1x Zarlink ZLynx 30m CX4-CX4 20Gbps fibre optic cable (£170)
- 1x Dell PE2950 (2x E5160, 12GB) running Windows Server 2008 R2 and Microsoft iSCSI Target, and a 3.5GB RAM disk
- 1x Phenom II 965 workstation running Windows 7 x64 and Microsoft iSCSI Initiator
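If you want a crude way to sanity-check sequential read from the iSCSI-mounted RAM disk without a dedicated benchmark tool, something like the little Python sketch below does the job. The drive letter and file name are just placeholders for wherever the target is mounted on the workstation (you need to create a large test file on it first), and a proper benchmark tool will give more trustworthy numbers, not least because Windows' file cache can inflate repeat runs.

```python
# Crude sequential-read check of the iSCSI-mounted RAM disk (illustrative only;
# a proper benchmark tool gives more reliable numbers). The path is a
# placeholder for wherever the iSCSI target is mounted on the workstation.
import time

TEST_FILE = r"E:\testfile.bin"   # assumed: a large file pre-created on the iSCSI volume
CHUNK = 4 * 1024 * 1024          # read in 4 MiB chunks

total = 0
start = time.perf_counter()
# buffering=0 avoids Python-side buffering; note the Windows file cache can
# still inflate repeat runs of the same file.
with open(TEST_FILE, "rb", buffering=0) as f:
    while True:
        data = f.read(CHUNK)
        if not data:
            break
        total += len(data)
secs = time.perf_counter() - start
print(f"read {total / 1e6:.0f} MB in {secs:.2f}s = {total / secs / 1e6:.0f} MB/s")
```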
The cost of the cable I had to buy was a killer, but cheaper cables are available! The trouble I have is the distance between the server and the workstation. Although it's only about 13 metres and some 15m cables are available, the cost difference between 20m and 30m is so little it makes more sense to get the longer one and have some options down the road. There is effectively zero signal degradation over fibre (so the extra length doesn't matter) whereas the short copper cables suffer quite quickly and it's rare to find them even as 'long' as 10m.
As David explains in his articles, InfiniBand's headline performance comes from RDMA, which lets the adapters read and write directly to system memory on each connected machine, bypassing the host's network stack and its overhead. However, there is currently no way to achieve this in a Windows-only environment (such as mine): you need a Linux system to act as the SRP (SCSI RDMA Protocol) target to get true high-speed RDMA connectivity. We Windows folk are stuck with IP over InfiniBand (IPoIB), which presents the adapter as a normal network interface so all the usual TCP/IP networking features work. The downside is that all of that traffic is handled by the host CPU, so a lot of the benefit of the 10Gb adapters is lost - for now.
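Because IPoIB just shows up as another network interface, you can sanity-check the raw link with any ordinary TCP tool (iperf or similar). Purely to illustrate the point - this is not the benchmark above - here's a minimal Python send/receive pair; the port and the server's IPoIB address are placeholders for whatever you've configured. Since everything goes through the normal TCP/IP stack, the numbers you get are CPU-bound, which is exactly the limitation described above.

```python
# Rough IPoIB throughput check over plain TCP (illustrative sketch).
# Run "python ibtest.py server" on one machine and
# "python ibtest.py client <ipoib-address-of-server>" on the other.
# Port and addresses are placeholders for your own IPoIB configuration.
import socket
import sys
import time

PORT = 5201            # arbitrary test port
CHUNK = 1024 * 1024    # 1 MiB per send/recv
TOTAL = 2 * 1024**3    # push 2 GiB through the link

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("", PORT))
        s.listen(1)
        conn, _addr = s.accept()
        with conn:
            received = 0
            start = time.perf_counter()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
            secs = time.perf_counter() - start
            print(f"received {received / 1e6:.0f} MB in {secs:.1f}s "
                  f"= {received / secs / 1e6:.0f} MB/s")

def client(host):
    payload = b"\x00" * CHUNK
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect((host, PORT))
        sent = 0
        start = time.perf_counter()
        while sent < TOTAL:
            s.sendall(payload)
            sent += len(payload)
        secs = time.perf_counter() - start
        print(f"sent {sent / 1e6:.0f} MB in {secs:.1f}s "
              f"= {sent / secs / 1e6:.0f} MB/s")

if __name__ == "__main__":
    if len(sys.argv) >= 2 and sys.argv[1] == "server":
        server()
    elif len(sys.argv) >= 3 and sys.argv[1] == "client":
        client(sys.argv[2])
    else:
        print("usage: ibtest.py server | ibtest.py client <host>")
```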
So at the moment I'm quite happy with 'only' 400MB/s sequential read, along with some other pretty good numbers. In the future I will probably set up a Linux system, but at least the headroom is there if I find the current speed isn't sufficient. My next step is to buy some disks for a RAID array in the server, as the RAM disk is only for testing and proof of concept.
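For a rough idea of how much headroom that leaves: an SDR 4x link signals at 10Gbps but uses 8b/10b encoding, so the usable data rate is 8Gbps, or roughly 1,000MB/s. The quick sum below (the 400MB/s figure is the sequential read from my benchmark; the rest are just the link parameters) puts the current setup at around 40% of what the link can carry.

```python
# Back-of-the-envelope headroom for a 10Gbps SDR 4x InfiniBand link.
signal_rate_gbps = 10                          # raw signalling rate of an SDR 4x link
data_rate_gbps = signal_rate_gbps * 8 / 10     # 8b/10b encoding overhead
link_mb_per_s = data_rate_gbps * 1000 / 8      # usable data rate in MB/s
measured_mb_per_s = 400                        # sequential read over IPoIB + iSCSI

print(f"link data rate ~{link_mb_per_s:.0f} MB/s")
print(f"currently using ~{measured_mb_per_s / link_mb_per_s:.0%} of the link")
```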
Also worth noting: Windows Server 2012 and SMB3 include support for RDMA on Mellanox's latest HCAs, but although these run at 40Gbps+ they cost as much as, if not more than, 10G Ethernet adapters. Microsoft and Mellanox look unlikely to add support for the older (cheaper) IB adapters unfortunately, so Linux is the way to go if you want the best possible speed at the moment.
Hope that's interesting and/or useful! :)