Two NICs, only one's throughput!

I'm trying to do this without NIC bonding for the moment, as I didn't
have much luck with it, and recompiling the kernel only seemed to add to
my problems.

I have Computer A and Computer B
Each computer has 2 NICs in it.
They are connected to each other via two Crossover cables.
One pair of NICs has a 10.0.0.x subnet and one pair has a 192.168.1.x
subnet.
Computer A has the IPs, 10.0.0.1 and 192.168.1.1
Computer B has the IPs, 10.0.0.2 and 192.168.1.2
Using NFS, I've set up one exported folder per pair of NICs, each
mounted via a different IP address, so Computer A mounts one folder
from 10.0.0.2 and another folder from 192.168.1.2 (both of which are
obviously Computer B).

The idea is that when I copy a file from each folder at the same time,
I should get the full bandwidth of both cards (i.e. 200 Mbit/s total),
correct?

That combined 200 Mbit/s isn't what I'm seeing, though.
When I start one transfer it runs at around 11-11.6 MB/s; then when I
start the second, the first drops to about 5.5 MB/s, with the second
transfer running at an almost identical speed.
Why do they seem to be sharing the same bandwidth, which as far as I
can see shouldn't be possible?
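For what it's worth, converting those figures to line rate (assuming "Mbytes/sec" means 2^20 bytes per second) shows why it looks like one shared 100 Mbit link:

```python
# Back-of-the-envelope check on the numbers above
# (assuming "Mbytes/sec" means 2**20 bytes per second).
MB = 2 ** 20
single = 11.6 * MB * 8 / 1e6      # one transfer on its own
double = 2 * 5.5 * MB * 8 / 1e6   # both transfers combined
print("one transfer : %.0f Mbit/s" % single)   # ~97 Mbit/s
print("two together : %.0f Mbit/s" % double)   # ~92 Mbit/s
# Both sit right at the practical ceiling of a single 100 Mbit link,
# which is exactly what it would look like if everything were going
# over one NIC (or through some other shared ~100 Mbit bottleneck).
```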

Any ideas on how to solve this would be MUCH appreciated!

Thanks in advance!
 
Aye, I'm sorry I couldn't explain it better. I've drawn it out in Visio, so maybe that'll convey it better.

The main idea itself:
[Image: networkinfrastructure1bs.jpg]

The first transfer, running at a sensible ~10 MB/s:
[Image: transfer18cm.jpg]

Then after I've started the second transfer on the second NIC, the first transfer bizarrely drops to half its original speed and the second runs at that same reduced rate:
[Image: transfer26ks.jpg]
 
Caged: This is a small-scale model of what I want to do with gigabit cards. Why exactly won't it work? I can't see a logical reason.

Deano: It's reading off a bog-standard Maxtor SATA drive to a bog-standard Maxtor IDE drive, but even so they should manage more than 11 MB/s quite happily.
And the total is so exactly 100 Mbit/s that it seems crazy to me to put it down to pure coincidence.
 
You know... it's not the hard disk/IO interface.
I've just tried copying a file from the hard disk of Computer A to another location on the same disk, and I'm getting about 12-15 MB/s, and it fluctuates wildly, whereas the network transfer was rock steady.
And reading and writing from the same disk has got to be slower than just writing to it.
Oh, and I tried the same thing on Computer B: that's about 20 MB/s!
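If I want a cleaner number than a straight copy (which reads and writes the same spindle at once), something along these lines would time a sequential write and read separately; the path and size are just placeholders:

```python
# A slightly more controlled disk test than a copy: time a sequential
# write, then a sequential read, of a 512 MB file.
import os
import time

PATH = "/tmp/disktest.bin"
SIZE = 512 * 2 ** 20
CHUNK = b"\0" * (4 * 2 ** 20)

start = time.time()
with open(PATH, "wb") as f:
    written = 0
    while written < SIZE:
        f.write(CHUNK)
        written += len(CHUNK)
    f.flush()
    os.fsync(f.fileno())  # make sure the data actually hit the disk
print("write: %.1f MB/s" % (SIZE / 2 ** 20 / (time.time() - start)))

start = time.time()
with open(PATH, "rb") as f:
    while f.read(len(CHUNK)):
        pass
# Note: this read will mostly come from the page cache unless the
# caches are dropped first, so treat it as an upper bound.
print("read : %.1f MB/s" % (SIZE / 2 ** 20 / (time.time() - start)))
os.remove(PATH)
```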
 
Yes, I'm well aware of the other bottlenecks in the equation; that's why I'm trying it with 10/100 cards at the moment.
When I do use gigabit cards it will be with 10k SCSI drives and Intel server PCI-X gigabit cards, which should handle it quite nicely.

And anyway, even in Windows, even with bog-standard PCI, I've managed 30-40 MB/s FTPing data :)
 
electrofelix: That is how I had it set up, yeah; good idea on the Ethereal captures.
Also, I just tried bonding them again and got the same thing, i.e. a transfer from one computer to the other using the single bonded IP address runs at 11 MB/s again.
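In the meantime, rather than a full Ethereal capture, a quick thing I could do is watch the transmit counters in /proc/net/dev once a second while both copies run, to see which card the traffic is actually leaving on (eth0/eth1 are just my guess at the interface names):

```python
# Quick alternative to a full capture: print each NIC's transmit rate
# once a second, taken from the byte counters in /proc/net/dev.
import time

def tx_bytes(iface):
    with open("/proc/net/dev") as f:
        for line in f:
            if line.strip().startswith(iface + ":"):
                fields = line.split(":", 1)[1].split()
                return int(fields[8])  # 9th field = bytes transmitted
    raise ValueError("interface %s not found" % iface)

IFACES = ["eth0", "eth1"]
prev = {i: tx_bytes(i) for i in IFACES}
while True:
    time.sleep(1)
    now = {i: tx_bytes(i) for i in IFACES}
    print("  ".join("%s %6.1f Mbit/s tx" % (i, (now[i] - prev[i]) * 8 / 1e6)
                    for i in IFACES))
    prev = now
```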

I'll see what happens when I try this with proper server hardware and report back :)
 