Intel Gigabit NICs

Got a dual-port and a quad-port Intel Pro card that were surplus from work, but both have full-height backplates and I need half-height for my Microserver :(
 
Just bought two of these, one for my desktop, another for my server.

Network file transfer speeds jumped from ~90MB/s to 112MB/s

Pretty much fully saturating my network now (1Gbps works out to roughly 118MB/s in practice after Ethernet/TCP overhead, so 112MB/s is close to the ceiling), well impressed with them :)

These are the ones I went for: EXPI9301CTBLK (the Intel Gigabit CT Desktop Adapter)



When I install my 24 port switch, I'll get another 2 of them for my server and team them together to get 3Gbps :)


Old setup was 2x onboard Realtek NICs

Just decided to upgrade my Z68 board with one of these after reading your thread! Toodle-oo, dear Broadcom!
 
Not quite true with regard to needing a managed switch (although I picked up a 12-port 3Com for £50), as long as you are running Windows Server 2012. With NIC Teaming, network traffic is balanced across all active NICs, which can double your available bandwidth or more depending on the number of NICs in your server. There are some catches you need to be aware of, though. There are two modes you can configure: Switch Independent and Switch Dependent.

With Switch Independent, the teaming configuration will work with any network switch. This means you can use non-intelligent switches in your network and still use NIC Teaming because all of the intelligence of how outbound traffic is distributed is managed by Windows Server 2012. The downside is that all inbound traffic is sent to only one NIC and is not distributed between all active NICs. This works great for web or FTP servers with heavy outbound traffic.
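
For reference, here's a minimal sketch of setting that up with the Server 2012 LBFO cmdlets. The team and adapter names ("Team1", "NIC1", "NIC2") are placeholders, check your own with Get-NetAdapter first:

# List the physical adapters so you know their names
Get-NetAdapter

# Create a switch-independent team from two adapters
# ("Team1", "NIC1" and "NIC2" are placeholder names - substitute your own)
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1", "NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts

# Confirm the team came up
Get-NetLbfoTeam -Name "Team1"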

With Switch Dependent, the teaming configuration involves getting your network switches involved: you will need to configure both the switch and the host to identify which links form the team. Windows Server 2012 supports Generic/Static Teaming, where you statically configure the links, or Dynamic Teaming using the Link Aggregation Control Protocol (LACP), which dynamically identifies which links are connected between the host and switch. Both modes allow both inbound and outbound traffic to approach the practical bandwidth of the team members because the team is viewed as a single pipe. This works great for any server that has heavy inbound and outbound traffic.
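
And the switch-dependent equivalents, again just a sketch with the same placeholder names - the switch ports must already be configured into a matching port-channel/LAG, and you would create one team or the other, not both:

# Dynamic teaming via LACP - the switch ports must be running LACP too
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1", "NIC2" -TeamingMode Lacp

# Or Generic/Static Teaming, matching a statically configured channel-group
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1", "NIC2" -TeamingMode Static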

NIC Teaming is compatible with all networking capabilities in Windows Server 2012 with three exceptions: Single Root I/O Virtualization (SR-IOV), remote direct memory access (RDMA), and TCP Chimney. For SR-IOV and RDMA, data is delivered directly to the network adapter without passing through the networking stack. Therefore, it is not possible for the network adapter team to look at or redirect the data to another member of the team. TCP Chimney is not supported because the entire network stack is offloaded to the NIC.

Saying all that, it's very unlikely that bonding your interfaces will result in a speed increase, especially in Linux. Typically, even if you bond the interfaces successfully AND configure the switch to support the EtherChannel, you will still find that only one interface in the bond is used for each source/destination TCP/UDP session.

So if you copy a 10Gb file from one server, then kick off another 10Gb copy to that server, assuming you also have session-based bonding, you'll see both cards maxed out. But crucially, the first copy will only ever use ONE network card, not both.

This is certainly the case with Cisco's EtherChannel. In fact, Cisco's EtherChannel isn't even session based, it's source/destination address based, so in the example above you wouldn't even get a speed increase: your second copy would have to go to a completely different server before you saw both cards used. Perhaps you have a better switch that allows the port channel to utilise both cards simultaneously in one TCP/UDP session, but that would require some pretty funky ARP/MAC manipulation and I have no idea if the bonding module in Linux supports that. Perhaps a knowledgeable Linux bod could answer that one?
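
To illustrate why a single session sticks to one link: these implementations hash header fields and take the result modulo the number of links, so every packet of a given flow selects the same member. A rough PowerShell sketch of that selection logic, mimicking a simple layer-2 src/dst MAC hash (the exact fields hashed vary by mode and vendor):

# Pick a link for a flow by hashing its addresses. Every packet of the same
# flow hashes identically, so that flow only ever uses ONE link.
function Select-TeamLink {
    param([byte]$SrcMacLastByte, [byte]$DstMacLastByte, [int]$LinkCount)
    ($SrcMacLastByte -bxor $DstMacLastByte) % $LinkCount
}

Select-TeamLink -SrcMacLastByte 0x1A -DstMacLastByte 0x2B -LinkCount 2  # -> 1
Select-TeamLink -SrcMacLastByte 0x1A -DstMacLastByte 0x2B -LinkCount 2  # -> 1 again: same flow, same link
Select-TeamLink -SrcMacLastByte 0x1A -DstMacLastByte 0x2C -LinkCount 2  # -> 0: a different destination can land on the other link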
 
Got my two NICs on Wednesday and fitted them this evening. Even after fitting just the one in my server, the transfer speed increased by about 10MB/sec. With them fitted at both ends, my transfer speeds ranged from 94.2MB/sec to about 102MB/sec, averaging around 97MB/sec I would say.

Very happy with these and for only £44 for a pair, what a bargain!
 