Cheapest setup for bonded gigabit?

Trying to engineer a way to move my current RAID array out of my main box into a small home-built NAS, but still get fairly nippy speed out of it.

In my box my array will do about... 250MB/s.

Gigabit networking, blessed by a priest, with a tailwind and everything perfect, is only ever going to reach 120MB/s or so, so I was considering bonding (link aggregation).
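Quick back-of-the-envelope check of those numbers (the overhead percentage is just an approximation, not a measured figure):

```python
# Rough throughput arithmetic for gigabit Ethernet vs. a ~250 MB/s array.
# The overhead figure is approximate (Ethernet + IP + TCP framing, no jumbo frames).

LINE_RATE_BITS = 1_000_000_000            # 1 Gb/s
RAW_MB_PER_S = LINE_RATE_BITS / 8 / 1e6   # 125 MB/s before any overhead

PROTOCOL_OVERHEAD = 0.06                  # ~6% framing overhead (approximation)
usable = RAW_MB_PER_S * (1 - PROTOCOL_OVERHEAD)

print(f"Single gigabit link: ~{usable:.0f} MB/s usable")      # ~118 MB/s
print(f"Two bonded links   : ~{usable * 2:.0f} MB/s usable")  # ~235 MB/s, roughly the array's speed
```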

From reading around, pretty much all the dual-port Intel cards can do it. What would be the absolute cheapest switch I could use to get the doubled-bandwidth setup?

Can you do crossover (so just card to card)? I could probably live with that and just build a super-quiet server box.
If I can find the cards cheaply, could I go a little crazy and do quad bonding with crossovers?

The plan would then be to add a static route (or hosts file entry) on each machine so traffic between the two went via the bonded NICs, as both would also be connected to a regular switch via their basic/onboard NICs.
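For what it's worth, here's a quick sanity check I'd use to confirm traffic to the server actually leaves over the direct link rather than the onboard NIC; the 10.0.0.x and 192.168.x.x addresses are just made-up examples for the point-to-point subnet and the normal LAN:

```python
import socket

# Hypothetical addressing: the bonded/direct link lives on its own subnet,
# e.g. 10.0.0.1 (main PC) <-> 10.0.0.2 (server), while the onboard NICs sit
# on the normal LAN (192.168.x.x). Adjust to whatever you actually use.
SERVER_DIRECT_IP = "10.0.0.2"
SERVER_PORT = 445  # SMB, assuming the share is listening there

# Opening a socket and asking the OS which local address it picked tells you
# which interface (and therefore which link) the route actually uses.
with socket.create_connection((SERVER_DIRECT_IP, SERVER_PORT), timeout=5) as s:
    local_ip, local_port = s.getsockname()
    print(f"Traffic to {SERVER_DIRECT_IP} leaves from {local_ip}")
    # If this prints the 10.0.0.x address, the direct link is being used;
    # if it prints the 192.168.x.x address, the routing needs fixing.
```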
 
I don't think link aggregation is going to help you very much unless there are multiple machines connecting. For a single machine you'll still be stuck at 1Gbps (I think).

I don't know what the cheapest switch option would be, but the HP 1810 series are fairly cheap.
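To expand on the single-machine limit: LACP/static aggregation hashes each flow onto one physical member link, so one big file copy (one TCP connection) only ever uses one link. A toy illustration of that behaviour (the hash below is a simplification, not any switch's actual algorithm):

```python
# Toy model of link-aggregation transmit hashing: each flow is hashed to ONE
# member link, so a single TCP stream never exceeds one link's bandwidth.
# The hash is a simplification, not any vendor's real algorithm.

def pick_link(src_ip: str, dst_ip: str, src_port: int, dst_port: int, n_links: int = 2) -> int:
    return hash((src_ip, dst_ip, src_port, dst_port)) % n_links

# One big SMB copy = one flow = one link, no matter how many packets it sends.
print(pick_link("192.168.1.10", "192.168.1.20", 50123, 445))

# Several different flows (e.g. several client machines) do spread across the links.
for port in range(50123, 50131):
    print(port, "->", pick_link("192.168.1.10", "192.168.1.20", port, 445))
```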
 
Ahhh, node to node is still 1Gb?

What's THE cheapest option for beating gigabit speeds with single cards? The setup I'm looking at only has 2x PCI-E slots (x4 or better), one of which my RAID controller will be sat in.

InfiniBand and point-to-point?

I can see HP 452372-001 InfiniBand HCA cards for £30 including postage on a popular auction site.
Those and... 5m of cabling should come in under £100?
 
InfiniBand crops up on here from time to time. If you have a search you should be able to find several different threads on the subject.

It certainly isn't a straightforward option, and you could spend a fair amount and get nowhere.

Is the new external enclosure going to be close to the machine you want to connect it to? Have you considered leaving the array directly connected (but in a separate enclosure)? If you're currently using the PERC5i in your signature, it could easily be replaced with a PERC5e. Add a few cables and you'd be good to go.
 
I've posted about it before, but the basic plan is to move my RAID out of the main PC (saving the PSU a bit, as there's about... 90W used by the RAID array) into a low-powered "server" box.

Put a fat connection between that box and my main PC, set routing between the two to use only the high-bandwidth connection, and then dump pretty much everything but my base install on the server box.
The drives in the server box would then also be available as NAS over the regular Gb port for all the other machines in the house.
 
You'd be limited to 1Gbps per connection; I don't know whether Windows uses multiple connections to grab files off an SMB share.
 
Aye, it seems you have to do quite a few extra tricks (round-robin, with issues if packets arrive out of sequence, etc.) to get Gb+ out of bonded links.
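Rough illustration of the round-robin reordering problem (the link delays are invented numbers, purely to show the effect):

```python
# Round-robin bonding stripes successive packets across the links. If the
# links' latencies differ even slightly, packets arrive out of sequence and
# the receiver (TCP) has to reorder them or even spuriously retransmit.
LINK_DELAYS_MS = [0.50, 0.65]            # two links with slightly different latency (made up)

arrivals = []
for seq in range(10):
    link = seq % len(LINK_DELAYS_MS)     # round-robin link choice
    send_time_ms = seq * 0.1             # packets sent 0.1 ms apart
    arrivals.append((send_time_ms + LINK_DELAYS_MS[link], seq))

arrival_order = [seq for _, seq in sorted(arrivals)]
print("Arrival order:", arrival_order)   # e.g. [0, 2, 1, 4, 3, ...] - not in sequence
print("Out of order?", arrival_order != sorted(arrival_order))
```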

Seems the InfiniBand route, while slightly overkill, is probably going to be about as cheap and a bit easier.
 
Most threads about attempts to implement InfiniBand on the cheap do seem to resemble a very slow-motion train crash.

It's a shame there was never an intermediate Ethernet option. A jump to even 2Gb would have been useful, and wouldn't have stretched the technology.
 
Hmmm, it DOES look like the InfiniBand route is likely to be painful (I've done more reading).

Are there any 10GbE Ethernet cards that come in under £100 each? Not after sites, of course, but model names to hunt for would be nice if anyone knows of them.
 
Wasn't there a thread a few months ago where someone used some sort of optical setup to get faster-than-gigabit Ethernet on the 'cheap'? Was that InfiniBand?
 