Recommend me some 10GbE copper NICs

The SMB 3.0 setup looks like a winner tbh. If it's as simple as "needs Windows 8 at each end and as many ports as you want", that'll definitely do the trick for what I'm after.

If anyone can confirm that, I'll pull the trigger on some dual/quad-port cards (or a bunch of USB 3.0-to-gigabit adapters, if those would work?) :)

Looking at the 4-port cards, I think a dual-port is a bit more affordable. I take it I could stick a dual-port card in each box, link them directly with "crossover" cables, and they'd also make use of the motherboard port through the hub connected back to my router (so ~110MB/s x 3)?
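Back-of-envelope for that aggregation idea (the ~110MB/s per-link figure is the usual real-world gigabit payload rate; the efficiency factor is a made-up knob, since in practice scaling is rarely perfect):

```python
# Rough aggregate-throughput arithmetic for combining NIC ports.
# ~110 MB/s per gigabit link is an assumption (1 Gbit/s minus
# framing/protocol overhead); efficiency < 1.0 models imperfect
# scaling from CPU limits, RSS support, etc.

def aggregate_mb_s(links, per_link=110, efficiency=1.0):
    """Combined transfer rate across `links` parallel gigabit links."""
    return links * per_link * efficiency

ideal = aggregate_mb_s(3)                      # 330 MB/s if scaling were perfect
realistic = aggregate_mb_s(3, efficiency=0.5)  # 165 MB/s at 50% efficiency
```

Whether you get anywhere near the ideal number depends on the NICs and the SMB version, not just the port count.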

It doesn't seem to scale quite that well for me.
With my current setup (two Intel Gigabit CTs to an Intel Dual Port Gigabit PT), transfers average around 160MB/s. If I add in my onboard (Realtek and Marvell) NICs the speeds don't increase, though utilisation is pretty equal across all six NICs. Jumbo frames and NIC tuning make very little difference.
I think my relatively poor scaling is down to RSS support: the CTs can do two queues, the PT doesn't support multiple queues, and the onboards don't support RSS at all. I've ordered a dual-port Gigabit ET for testing - pure laziness on my part, as I have a couple more CTs in my ESXi box that I could have swapped in for the PT.

Damn this thread, I've spent all evening playing around with NICs and spending money rather than watching Game of Thrones after a hard week's work :P
 
Looks like RSS is more or less a hard requirement (confirmed by your experience):
http://blogs.technet.com/b/josebda/...ature-of-windows-server-2012-and-smb-3-0.aspx
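For anyone unfamiliar with RSS, here's the gist in miniature - a sketch only, using a plain CRC32 rather than the Toeplitz hash real NICs use:

```python
# Receive Side Scaling in miniature: hash each flow's 4-tuple and use
# the hash to pick a receive queue (and hence a CPU core), so different
# TCP connections get processed on different cores in parallel.
# Real hardware uses a keyed Toeplitz hash; CRC32 here is just an
# illustrative stand-in.
import zlib

def rss_queue(src_ip, src_port, dst_ip, dst_port, num_queues):
    """Steer a flow to one of `num_queues` receive queues."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return zlib.crc32(key) % num_queues

# Four SMB connections from the same client, differing only by source port:
flows = [("10.0.0.1", p, "10.0.0.2", 445) for p in range(50000, 50004)]
no_rss = {rss_queue(*f, num_queues=1) for f in flows}    # everything on queue 0
with_rss = {rss_queue(*f, num_queues=4) for f in flows}  # flows can spread out
```

Which is why a NIC with no RSS (or only one queue) funnels every SMB Multichannel connection through a single core, and adding more links stops helping.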

This sort of thread is what it's all about though, isn't it? Although GoT is pretty tough to compete with!!
 

Dang, nice to have "real life" experiences though :) Much appreciated :D

Will have a poke around and see what I can get for what price. Shame the scaling kinda sucks though - the second port is barely adding 40-50MB/s :(
 
I'm looking at 10GbE cards on... the bay place we don't mention.

Is there any reason a pair of 10GbE cards plus whatever SFP modules/direct attach cables I want wouldn't do the job?

That setup seems doable for around £120 (depending on what card I can find with Windows 7/8 drivers).

E.g. would Alacritech SEN3001EF cards be viable? (That looks to be a 10GbE NIC with an SFP module.)
 
Direct attach cables are short, but if that works for you then yes. Mellanox ConnectX-2 cards have good Windows support (I use them). Can't remember what SFP modules I ended up with, but they were cheap too.

Happy hunting.
 

Top tip on the card to look for, cheers. Looks like I've scored two cards and a 5m (plenty) cable for about £100 all told. Will see how I get on.

I guess SMB3 is still going to vastly improve how well these work? (Currently it's two Windows 7 boxes, so SMB 2.1.) Or is it more down to the number of processor cores in each box (one's an i7, the other's a Pentium G830, so I guess that one's going to be holding me back somewhat)?
 
It would be interesting to see a comparison of the two Windows/SMB versions:

- Keep existing Windows 7
- Install both cards
- Test
- Upgrade the "server" to 8.1
- Test
- Upgrade the workstation to 8.1
- Test

I suspect a combination of all of the above, plus probably a fair bit of additional performance to be had from enabling things that may not be on by default - like in the article I referenced earlier, where it was essential to enable RSS and set the number of RSS queues equal to the number of CPU cores.
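If anyone runs that test sequence, something like this rough sketch (paths are placeholders) gives a comparable MB/s number at each step:

```python
# Minimal timing harness for the upgrade/test sequence above: copy a
# fixed file to the share at each step and record MB/s, so the
# Windows 7 vs 8.1 results can be compared like-for-like.
import os
import shutil
import time

def copy_speed_mb_s(src, dst):
    """Time a single file copy and return throughput in MB/s."""
    size = os.path.getsize(src)
    start = time.perf_counter()
    shutil.copyfile(src, dst)
    elapsed = time.perf_counter() - start
    return size / (1024 * 1024) / max(elapsed, 1e-9)

# Example (hypothetical paths):
# copy_speed_mb_s("bigfile.bin", r"\\server\share\bigfile.bin")
```

Use a file big enough (several GB) that caching and ramp-up don't dominate the measurement, and run each test a few times.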
 
Just as an update: I saw no significant improvement using the RSS-capable Gigabit ET I bought :/ Still around 160MB/s. I've also found that Windows Server 2012's native teaming hurts speeds a bit - I get around 150MB/s with that. Still, it offers plenty of other benefits and I don't need to move much data anyway.
Everything is working as it should: SNMP monitoring of my switch ports shows pretty equal utilisation over all four links. I was just hoping to see better scaling.
 