Recommend me some 10GbE copper NICs

Indeed, you're looking at £500 for a setup, even with second-hand parts.

If it's only got to go a short distance, the cheapest way to do 10Gb between two hosts is picking up some InfiniBand adaptors and a CX4 cable and running IP over InfiniBand. Be wary about driver support if you are on Server 2012/Windows 8 though - I think the Mellanox ConnectX-2 has decent support. The total setup can be done for somewhere between £100-£150. You might also get lucky in an auction for SFP+ (either direct attach or go fibre, transceivers can be had cheap) or CX4 cards.

Personally, since I moved to Server 2012/Win8 and couldn't use my InfiniBand cards any more, I've taken to just using SMB Multichannel instead. It's super cheap to bung some Intel dual- or quad-port 1Gb cards in each end.

If Thunderbolt PCIe cards that don't need special headers on the motherboard ever arrive, I expect I'll try them out.
 
You can get a couple of Brocade 1020 cards pretty cheap; they normally vary in price from £25-£100. However, you'll require SFP+ modules for these cards. You can get Twinax cables, but these can only be up to 5m in length, and they also vary in price - I have seen them go from £40 to £150+.

The above is all SFP+ based, but if you want Cat6a (RJ45) cards, you will need to be looking at something like the Intel X540-T1/T2, which are still very costly.

What I have noticed is that SFP+-based PCIe cards are cheap but the switches are very expensive, whereas the RJ45-based 10Gb cards are very expensive but the switches aren't too bad.
 
What's the difference between the Brocade SFP+ and the Intel Cat6a cards? What would be a good, reasonably priced switch for the Intel X540-T1 cards?
 
One is small form-factor pluggable (SFP+) transceiver based and the other is RJ45 based.

The Intel X540-T1 cards are still quite pricey for what they are; I believe you are looking at around £250+ depending on the supplier.
 
Well, I was thinking of just going NIC to NIC, so it would only be the card and the cable.

As far as InfiniBand goes, I believe it rather sucks at SMB? Looking to put together a 'nix RAID with Samba or similar for Windows shares (see the smb.conf sketch below).

Do we think 250MB/sec via SMB on InfiniBand sounds doable?
To be honest, I don't care if it's Windows Server or 'nix at the server end; it needs to be something that will work on Windows 7 (Ultimate, if it matters) at the other end though. Also, no iSCSI or similar. It's going to be a NAS at the server end, but I also need to be able to see the shares over a standard gigabit port at the same time.
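If the server end does go the 'nix + Samba route, the share side is only a few lines of smb.conf, and unlike an iSCSI or FC target a Samba share stays usable by the local OS and over the onboard gigabit port at the same time. A minimal sketch - the /mnt/pool path and the nas account are hypothetical placeholders:

```ini
[global]
    workgroup = WORKGROUP

# share the RAID array; the path and account below are placeholders
[pool]
    path = /mnt/pool
    valid users = nas
    read only = no
    browseable = yes
```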

I'll make it simple then (I... have kinda asked similar questions before in here, so sorry if you are seeing them again; I haven't had particularly solid answers to everything yet).

Assuming:

Box1:
Intel-based CPU + m/board with at least 1 free PCIe x4/x8 slot
Running Windows 7 Ultimate

Box2:
Intel-based CPU + m/board with at least 1 free PCIe x4/x8 slot
Running ANY software/OS (though something Windows-based would be good, even better if I can squeeze in a desktop OS)

What's the cheapest way to get 250MB/sec between them using SMB shares? (See the throughput sketch at the end of this post.)

Box2 needs its shares available on its standard gigabit port too (so iSCSI targets etc, or anything that makes the same drives/folders unavailable to the local OS simultaneously, isn't going to fly).


I've seriously considered most things. I was looking at InfiniBand, but it seemed to be slow (135MB/sec) without better protocols between the two (which would make it hard to do at the Windows 7 end).

Considered:
Thunderbolt 2 boards at both ends, if I could find them cheap enough to be worth a minor upgrade to both boxes
USB 3 bridge cable (unfortunately doesn't exist)
Teamed NICs (useless unless the aim is simply to have more bandwidth out of one of the boxes)
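Whichever of these (or the suggestions above) ends up linking the two boxes, it's worth measuring the raw link speed before blaming SMB for any shortfall. A minimal Python sketch - the 192.168.10.2 address and port are placeholders for whatever the point-to-point link ends up using:

```python
# Raw TCP throughput test between two boxes, independent of SMB.
# Run "python linktest.py server" on Box2, then "python linktest.py client"
# on Box1. The address and port below are hypothetical placeholders.
import socket
import sys
import time

HOST = "192.168.10.2"    # placeholder: Box2's address on the direct link
PORT = 5201
CHUNK = 1024 * 1024      # move data in 1 MiB chunks
TOTAL = 4 * 1024 ** 3    # push 4 GiB per run

def server():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    received = 0
    start = time.time()
    while True:
        data = conn.recv(CHUNK)
        if not data:              # client closed the connection: done
            break
        received += len(data)
    secs = time.time() - start
    print("received %.0f MB/s" % (received / secs / 1e6))
    conn.close()
    srv.close()

def client():
    buf = b"\x00" * CHUNK
    conn = socket.create_connection((HOST, PORT))
    sent = 0
    start = time.time()
    while sent < TOTAL:
        conn.sendall(buf)
        sent += CHUNK
    conn.close()
    secs = time.time() - start
    print("sent %.0f MB/s" % (sent / secs / 1e6))

if __name__ == "__main__":
    server() if sys.argv[1] == "server" else client()
```

If the raw number is already below 250MB/sec, no amount of SMB tuning will get you there.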
 
It would be interesting to know what the actual requirement is; 2Gbps (250MB/sec) seems very specific.

Windows 8 and Windows Server 2012 both do SMB 3.0, which introduces SMB Multichannel (which could give you aggregate bandwidth across multiple GigE links) and SMB Direct (SMB over RDMA, which I suspect would be AMAZING with InfiniBand).
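On the Multichannel point, the back-of-envelope maths for a 250MB/sec target is straightforward. A quick sketch - the ~110MB/sec per-link figure is the usual real-world GigE number, not something measured on this setup:

```python
# How many gigabit links does SMB Multichannel need to hit 250 MB/s?
PER_LINK = 110          # realistic MB/s per GigE link after overheads
TARGET = 250            # MB/s target

links = -(-TARGET // PER_LINK)   # ceiling division
print("%d x GigE -> ~%d MB/s aggregate" % (links, links * PER_LINK))
# Prints: 3 x GigE -> ~330 MB/s aggregate
```

So three gigabit links (a dual-port card plus the onboard port, say) should clear the target.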

Lots of options, but without the actual requirement it's hard to give advice.
 
I basically just want to move everything but a decent-sized SSD out of my main box and shunt RAID duties etc to my server box. Could do with the drives being faster than basic gigabit though, as it would likely end up holding the extended part of my Steam library.

So, yes, it's a "silly" application it's being used for but... why not? :D

250MB/sec seems a decent rate; my hardware RAID 5 can shift about 350-400MB/sec, so I figured a bit less would still be good, but 110ish MB/sec (gigabit) seems too slow.

I'll check into SMB 3.0

Edit: So if I read it right, I literally just need a bunch of network adapters of any kind with Windows 8 on either end, and SMB Multichannel will automatically kick in and use several links at once?

Or does one end have to be Windows Server 2012?
 
I guess my answer would be that spending significant amounts of cash on 10GbE cards is an expensive "why not" with very limited usefulness.

Would a couple of 4 or 8 Gbps Fibre Channel cards suit? You wouldn't be able to access the data over both the network and fibre simultaneously, but it would be a lot cheaper (I'm assuming you can pick these up pretty cheaply used).
 
Never played with fibre, what would you suggest I look at?

ANYTHING that gives me a link at least twice as fast as gigabit Ethernet using SMB and I'm happy.
 
Fibre Channel doesn't act as a network device, it acts as a storage device. So the server end has to be able to present the storage, which I don't think Windows can do, so you'd probably be looking at Linux on the server. Apparently there used to be something called IP over FC, but it was killed off and neither Emulex nor QLogic currently support it.

Search the bay for 4Gb fibre and you'll see cards for around £20. Make sure they have the SFPs in them (these are the modules that the fibre cable plugs into).
 
Ahhh, isn't that using some sort of initiator and target setup though? As far as I was aware, in that setup the server hosting the "target" isn't able to access the same drives. They are presented exclusively to the initiator system.

It's not QUITE my field so I may be missing a trick here.
 
Yeah, that's what I meant by "You wouldn't be able to access the data over both the network and fibre simultaneously". =(
 
The SMB 3.0 setup looks like a winner tbh. If it's as simple as "needs Windows 8 at each end and as many ports as you want", that'll definitely do the trick for what I'm after.

If anyone can confirm that, I'll pull the trigger on some dual/quad-port cards (or a bunch of USB3-to-gigabit NICs, if that would work?) :)

Looking at the 4-port cards, I think a dual-port is a bit more affordable. I take it I could stick a dual-port card in each box, link them with "crossovers", and they'd also make use of the mainboard port through the hub connected back to my router (so ~110MB/sec x 3)?
 