Trunking Network Connection

Hi Guys,
I'm after some advice. I've installed 3 extra network cards into our server in the hope it will increase the connection speed, and thus the data transfer speeds, between the server and the editing client machines.

What is the best way to go about setting up a trunk? Do I do this on the switch itself? We are using an HP ProCurve switch.

I was also playing about with the bridge connections setting in the Network control panel, but it reduced the speed from 1000 Mbps to just 100 Mbps?

Any advice is greatly appreciated, as always.

ace
 
Do the cards need to be the same? The three PCI cards I installed are basic cheap gigabit cards and are different to the onboard LAN.

Is there a good way of testing the connection to see if it is actually working?
 
How fast is data being transferred currently? Is it only between the one server and one client PC?

How fast can the hard drives read & write data on the server & client? They could be a bottleneck.

What model is the server? What network cards? What model switch? What budget do you have?

Generally you would 'team' the NICs on the server to have one beefy virtual NIC.
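
For the switch side, a rough sketch of what that looks like on a ProCurve (assuming the switch supports LACP, and that ports 21-24 are the ones the server's NICs plug into; the port numbers here are just placeholders):

    configure
    trunk 21-24 trk1 lacp
    exit
    show trunks

The server half of the team then has to be set up with the NIC vendor's own teaming tool or driver, and both ends need to agree on LACP.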
 
Do the cards need to be the same? The three PCI cards I installed are basic cheap gigabit cards and are different to the onboard LAN.

Is there a good way of testing the connection to see if it is actually working?

Whilst I've never tested a "fudge" of cheap and cheerful NICs, any server I've teamed NICs in uses HP (often Broadcom chipsets) or Intel server NICs. They provide vendor-specific configuration tools to team the adapters into groups.

What slots does the server have (PCIe or PCI-X)?
 
I am using the PCI-e slot for another card, so the NICs are using PCI.

This is ultimately a test to see whether this will improve the connection between the server and the client machines. If successful we will invest in better cards.

The previous plan was to install a fibre card and have the connection between the switch and the server be a fibre connection, but after speaking to a few people on here, they suggested teaming NICs would be a cheaper but similar option. Doing it this way also allows for a certain amount of redundancy as well, I suppose.
 
If they are all the same chipset, like Intel or Broadcom, then you can use the latest drivers to team them. If your switch supports 802.3ad then configure it for that, but if it is just a dumb switch you have to choose a software teaming mode, which normally only offers higher throughput in one direction.
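
On the earlier question of how to test it: one rough way (assuming iperf is installed on the server and on a couple of clients; the address below is just a placeholder) is to run parallel streams from more than one machine at once and watch the per-port counters on the switch:

    iperf -s                            (on the server)
    iperf -c 192.168.1.10 -P 4 -t 30    (on each client)

A single client will usually still end up pinned to one physical link, which is the point made in the next post.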
 
Is it a single client (what you've said suggests so)? If it is, then bear in mind none of this will help much, due to the way LACP hashing algorithms work. LACP only really works properly when it's dealing with lots of traffic across multiple source/destination IPs/services; otherwise it's very difficult for the hashing to spread the load effectively.
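
To illustrate why, here's a toy sketch in Python (not any vendor's actual algorithm): the egress link is typically chosen by hashing the source and destination addresses, so one client talking to one server always lands on the same physical link.

    # Toy example only: real switches/drivers hash MACs/IPs/ports in their
    # own way, but the principle is the same.
    def pick_link(src_mac, dst_mac, num_links):
        # XOR the last octet of each MAC, then take it modulo the link count.
        src_last = int(src_mac.split(":")[-1], 16)
        dst_last = int(dst_mac.split(":")[-1], 16)
        return (src_last ^ dst_last) % num_links

    server = "00:1b:21:aa:bb:01"
    clients = ["00:1b:21:cc:dd:%02x" % i for i in range(1, 9)]
    for client in clients:
        print(client, "-> link", pick_link(client, server, 4))
    # Eight clients spread across the four links, but any one client only
    # ever uses a single link, so a lone editor never sees more than 1Gbps.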

Teaming is best left for redundancy; active-active configurations are far more trouble than active-passive (I'm speaking with some responsibility for an estate of around 15,000 servers, from which I've banned active/active teaming as it's more trouble than it's worth). If you need faster, then faster media (10GigE) is more cost-effective these days...
 
Since you've said nothing about your current throughput and disks, I'm going to ask...

What is your current throughput and what kind of disk configuration do you have?
 
The current disk configuration has no RAID and is just simple disk shares. This will be changing in the months ahead.

We have about 7-8 clients accessing the server at any one time, all editing video footage, graphics and other material. The current setup only has 1 network cable connecting the server to the switch, and this is where I believe the bottleneck is.

An example would be the following:
An editor using Final Cut can't play two streams of HD footage at any one time; as soon as it tries to play the second stream of footage, it just freezes and nothing plays.

If you guys don't believe using LACP will work to increase network throughput, what other suggestions would you have?
 
I suggest figuring out the actual bottleneck, rather than apparently guessing & asking for all possible solutions to the symptoms.

Your IOPS at the moment are probably horrible.
 
If you guys don't believe using LACP will work to increase network throughput, what other suggestions would you have?

A faster disk subsystem and/or a 10Gig network card in the server (which will mean a switch with at least one 10Gig port, like a Cisco 2960S for instance).

Do some more research rather than guessing, though: what is network usage like when they're having issues, and what does iostat (or the equivalent, depending on OS) say about disk utilisation when there are problems?
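
As a rough sketch of what that measurement looks like (the counter names and intervals below are only examples): on the Windows 2008 box you can log the relevant PerfMon counters from the command line with typeperf, and on a Linux/unix box iostat from sysstat does the same job for the disks:

    typeperf "\PhysicalDisk(_Total)\Avg. Disk Queue Length" "\Network Interface(*)\Bytes Total/sec" -si 5
    iostat -x 5

Capture those while the editors are actually seeing the freezes and the bottleneck usually makes itself fairly obvious.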

What are you sharing the data with? SMB? You said Final Cut so I assume Macs; SMB on OS X (as on any unix platform) isn't exactly a high-performance solution, and NFS would be better.
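
For reference, a minimal sketch of what an NFS share looks like on a unix server (the exports line uses Linux nfsd syntax, and the path and subnet are placeholders; this only applies once the data lives on a Linux/OS X box rather than the current Windows 2008 server):

    # /etc/exports on the server
    /srv/video  192.168.1.0/24(rw,async,no_subtree_check)

    # mounted from a Mac client
    sudo mount -t nfs server:/srv/video /Volumes/video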

There are loads of options here. If you're talking HD streams then the network probably is an issue if all 8 are working simultaneously, but a 1Gig network will handle more than you think, and disk performance is probably right on the edge too. Find the bottleneck and prove it rather than guessing, as has been said...
 
Some questions to answer to help you figure out what you need:

* What codecs/compression are you working in (i.e. what is the data rate of one stream? Something like DVCPRO HD can be 100 Mbit/s or more; see the rough arithmetic after this list)
* How many simultaneous streams per workstation are typical?
* How many workstations (you've said 7/8 so that's answered)
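
As a rough back-of-the-envelope, assuming something like DVCPRO HD at ~100 Mbit/s per stream: 8 workstations x 2 simultaneous streams x 100 Mbit/s is about 1.6 Gbit/s, which is well past a single gigabit uplink before you even count protocol overhead, whereas 8 x 1 x 100 Mbit/s is about 0.8 Gbit/s and only just squeezes in.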

Other than that, ditto all of BRS's suggestions.
 
Two quick suggestions:
If it's mainly a single editor using the footage, keep the data on the local machine,
or
Use a SAN device (with direct 6Gb SAS connections) rather than a traditional network server

Quick question - What OS is the server running?
 
I am using the PCI-e slot for another card, so the NICs are using PCI.

This is ultimately a test to see whether this will improve the connection between the server and the client machines. If successful we will invest in better cards.

The previous plan was to install a fibre card and have the connection between the switch and the server be a fibre connection, but after speaking to a few people on here, they suggested teaming NICs would be a cheaper but similar option. Doing it this way also allows for a certain amount of redundancy as well, I suppose.

Well, to start with, it's pointless teaming more than two PCI gigabit cards for performance, because one card will pretty much max out the available PCI bandwidth and two definitely will with ease. This would only be of use for redundancy really.
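
Rough numbers behind that, assuming the cards sit on a classic 32-bit/33 MHz PCI bus: that bus tops out at about 133 MB/s shared between every device on it, while a single gigabit link running flat out is roughly 125 MB/s, so a second or third card has essentially no bus bandwidth left to use.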

Also, as a proof of concept it's rather self-defeating, as cheap cards don't have the throughput of decent ones. You might not actually need multiple cards if you get a decent one.
 
Our current server is running Windows Server 2008 with just the drives shared as normal. Each Mac connects to it using SMB! Am I looking at a reformat to change the file system to NFS?

Our current switch has two 2Gig fibre ports, and what I originally wanted to do was put a fibre card in the Windows machine and connect it to the switch via fibre, as to me it seems the bottleneck is the connection between the server and the switch.

I need to find out more about Final Cut though, and the codec etc. it is using.

Our new server will be a Mac Pro and this should be arriving in a few weeks' time.

All this information is great, guys, and I really appreciate your help.
 
I was going to suggest trying Microsoft File Services for Macintosh, but I'm fairly sure they've dropped it from Server 2008.
Anyone know of any good alternatives?

Are you going to be running OS X server on the Mac Pro?
 
Yes, the new Mac Pro will be running OS X Server, and our other Windows server will also be changed to run Ubuntu, as I thought that since OS X is unix-based it would be better to use the same file system.
 
It seems unlikely that a single disk is maxing out a gigabit connection, especially with one user (although you later suggest there are multiple clients). I would advise you not to waste your time with cheap NICs; teaming is a complete pain in the ass even with matched Broadcom or Intel cards.

I would suggest you have a problem elsewhere; use Task Manager/PerfMon/system monitor or whatever comes with Server 2008 to ascertain where your bottleneck actually is. In all likelihood you are disk I/O bound and need to look at RAID 10.
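
As a very rough sketch of why RAID 10 helps, assuming four ordinary 7200 rpm SATA drives each good for about 100 MB/s sequential: striping across the two mirrored pairs gives you in the region of 2x that for writes and up to 4x for reads (reads can be served from either side of each mirror), on top of redundancy that a single shared disk doesn't have.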
 