Daisy Chain Servers using Fibre

Does anyone know if I can daisy-chain servers together using fibre, so they can communicate with each other more quickly?

HELPPPPP!!!! I am at a loss. I think it should work if I just bridge the cards, but maybe some genius here knows better.
 
I am guessing the first place you want to start looking is your core network infrastructure and the hardware you are using, as they are clearly not meeting your requirements.
 
Thanks for that,

The incoming and outgoing connections are fine; it's the comms with my backend SQL server that are the problem, which is why I thought fibre between the servers would speed things up and reduce load on the switches.
 
I'm pretty sure you can't do that. Fibre Channel needs a host and a target; the cards are hosts, so you have two hosts.

Probably better off using dual-port GigE cards and crossover cables. Manually set the IP addresses on that link, then use the IP address of the SQL server on the other machine instead of its host name.

You will still need a network link for "normal" network comms (AD, DNS etc) on each machine as well as the dedicated link.
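
To illustrate what I mean about using the IP, if your client code happened to be Python (pyodbc here; the 192.168.100.x addressing and database name are just made up for the example):

# Point the app at the static IP you gave the SQL box on the crossover link,
# so the query traffic never touches the main switch.
import pyodbc

DEDICATED_LINK_IP = "192.168.100.2"   # SQL server's second NIC (assumed address)

conn = pyodbc.connect(
    "DRIVER={SQL Server};"
    "SERVER=" + DEDICATED_LINK_IP + ";"
    "DATABASE=MyAppDb;"               # made-up database name
    "Trusted_Connection=yes;"
)
print(conn.execute("SELECT @@SERVERNAME").fetchone())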
 
I also don't see how directly connecting the servers will make it any faster than going via a switch, as that's effectively what the switch is doing anyway, provided they're both connected to the same switch.

Looks to me like you need to go back and make sure it's definitely network throughput that's holding you back.
If I had a penny for every software/SQL dev that told me the network was slowing their DBs down and it wasn't.... I'd be a rich man
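
For what it's worth, before spending anything, a quick and dirty test like this (Python, standard library only; port number and transfer size are arbitrary) will tell you what the current path actually manages between the two boxes - run it with no argument on one server, then with that server's address on the other:

# Rough-and-ready throughput check: receiver on one box, sender on the other.
import socket, sys, time

PORT = 5201
CHUNK = 64 * 1024
TOTAL_BYTES = 512 * 1024 * 1024      # send 512 MB

def receiver():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("", PORT))
    srv.listen(1)
    conn, addr = srv.accept()
    received = 0
    start = time.time()
    while True:
        data = conn.recv(CHUNK)
        if not data:
            break
        received += len(data)
    secs = time.time() - start
    conn.close()
    print(f"received {received / 1e6:.0f} MB in {secs:.1f} s "
          f"= {received * 8 / secs / 1e6:.0f} Mbit/s")

def sender(host):
    sock = socket.create_connection((host, PORT))
    payload = b"\0" * CHUNK
    sent = 0
    while sent < TOTAL_BYTES:
        sock.sendall(payload)
        sent += len(payload)
    sock.close()

if __name__ == "__main__":
    if len(sys.argv) > 1:
        sender(sys.argv[1])          # python nettest.py <receiver-ip>
    else:
        receiver()                   # python nettest.py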
 
The only reasons I can think of for doing this:

1) Your switches are under constant >90% load and you want an alternative connection between the servers to lower that load and increase throughput to other systems

2) You are using SQL Server in some ultra low latency environment and/or have a lot of traffic constantly being requested/sent over these links

3) You want a circuit-switched rather than a packet-switched network as the link between the backend and the frontend

You do not need to go down the fibre route IMHO; sticking with either 1Gbit or 10Gbit Ethernet will be cheaper to set up and maintain. On price/performance, depending on the class of fibre you buy, copper Ethernet is probably going to be faster unless you are paying top dollar for a higher class.

I do advise caution without a switch, though - primarily, set a static secondary IP on both ends of the new server-to-server connection, otherwise, with no DHCP server on that link, it will fall back to the whole 169.254.* auto-assignment.

One more thing you may need to take into consideration, which most admins overlook when adding secondary connections, is routing. If the server has already learned its route, chances are it will keep following the normal path, thinking it is the fastest, until it is aware of the secondary connection. I can only speak for Win2k3 Server here, but I had to enable "Client for Microsoft Networks" on the secondary connection before the server realised the path to the other server existed.
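
A quick way to check where the traffic will actually go, rather than trusting the GUI - a small Python sketch, where the 192.168.100.2 address is just an example for the other end of the direct link:

# No traffic is sent here: connecting a UDP socket just asks the OS routing
# table which source address it would use to reach that destination.
import socket

OTHER_SERVER_DEDICATED_IP = "192.168.100.2"   # the SQL box's direct-link address

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect((OTHER_SERVER_DEDICATED_IP, 9))     # port number is irrelevant here
print("outgoing traffic would leave from:", s.getsockname()[0])
s.close()
# If this prints your normal LAN address instead of your 192.168.100.x
# address, the traffic is still going via the switch.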
 
You will see a throughput increase going point to point between the two servers: removing a switch removes the store-and-forward latency, which, although it isn't high, affects throughput more than you would think. Alternatively, you can use a non-switching hub or a cut-through switch.
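
If you want to put a number on that latency difference, something like this (Python, standard library; port number arbitrary), run once over the switched path and once over the direct link, will show it - echo side on one server, probe side on the other:

# Simple round-trip latency probe for comparing the two paths.
import socket, statistics, sys, time

PORT = 5202
SAMPLES = 1000

def echo_server():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    while True:
        data = conn.recv(64)
        if not data:
            break
        conn.sendall(data)            # echo it straight back

def probe(host):
    sock = socket.create_connection((host, PORT))
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    rtts = []
    for _ in range(SAMPLES):
        t0 = time.perf_counter()
        sock.sendall(b"x")
        sock.recv(64)
        rtts.append((time.perf_counter() - t0) * 1e6)
    print(f"median RTT to {host}: {statistics.median(rtts):.0f} microseconds")

if __name__ == "__main__":
    probe(sys.argv[1]) if len(sys.argv) > 1 else echo_server()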

Fibre is overkill unless your servers are further apart than the maximum distance for Cat 5e/6, which is 100 metres, or shorter depending on patching.

Throughput also depends on your NIC and whether it sits on PCI, PCI-X or PCI Express, since the bus affects bandwidth as well.
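
To put some rough numbers on that (theoretical bus peaks from memory; real-world figures will be lower once you account for overhead and other devices sharing the bus):

# Back-of-envelope comparison of bus bandwidth against Ethernet line rate.
GIG_E     = 1_000 / 8            # 1 Gbit/s  = 125 MB/s each direction
TEN_GIG_E = 10_000 / 8           # 10 Gbit/s = 1250 MB/s each direction

buses = {
    "PCI 32-bit/33MHz (shared)": 133,
    "PCI-X 64-bit/133MHz":       1066,
    "PCIe 1.0 x1":               250,    # per direction
    "PCIe 1.0 x4":               1000,   # per direction
}

for name, mb_s in buses.items():
    print(f"{name:28s} {mb_s:5d} MB/s  "
          f"-> {'ok for' if mb_s >= GIG_E else 'chokes'} 1GigE, "
          f"{'ok for' if mb_s >= TEN_GIG_E else 'chokes'} 10GigE")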

A large boost is given by using jumbo frames, taking the frame size to around 9k (some cards claim even larger), though not all NICs support this feature. The usual Ethernet frame is 1518 bytes with an MTU (payload) of 1500 bytes; the greater the MTU, the better.
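
Worth noting that on raw wire efficiency the jumbo-frame gain for bulk transfers is only a few percent - the bigger win is far fewer frames for the NIC and CPU to process per megabyte. Rough numbers (standard Ethernet framing plus IPv4/TCP headers, no options):

# Wire efficiency for bulk TCP transfer at standard vs jumbo MTU.
ETH_OVERHEAD = 14 + 4 + 8 + 12   # header + FCS + preamble + inter-frame gap
IP_TCP_HEADERS = 20 + 20

for mtu in (1500, 9000):
    payload = mtu - IP_TCP_HEADERS
    on_wire = mtu + ETH_OVERHEAD
    print(f"MTU {mtu}: {payload / on_wire:.1%} of the wire carries data, "
          f"{1_000_000_000 * payload / on_wire / 8 / 1e6:.0f} MB/s usable on 1GigE")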

If you have multiple 1Gig ports, you can also check whether they support link aggregation or teaming, which lets you load balance over two or more network connections between the servers.

10Gig Ethernet is also total overkill, since the cost is high and few devices can actually run anywhere near 10Gig; it is mostly used between switches or on extremely high-performance servers.
 