Abnormal Network Transfer Speeds

Soldato · Joined: 5 Jul 2003 · Posts: 16,206 · Location: Atlanta, USA
Hi all,
I'd try to describe what I mean, but a screenshot shows it better:

[Screenshot: network utilisation graph showing a sawtooth of sharp spikes and deep troughs]

Any ideas why this is happening?
It's causing bad transfer rates.
This is what occurs when Windows Backup on server A (a VM) transfers data over the network (LAN Team 1) to a backup server (which the screenshot is from), and on to an iSCSI-attached Drobo (straight cable connection on LAN iSCSI 192).

I changed the connection from going through a switch to a direct Ethernet connection, and for half an hour the speed was nice and constant.
Freshly built backup server, freshly configured Drobo (both as of today).
No AV on the server yet; just the latest updates and the Drobo Dashboard are installed on it.

Any ideas?

Thanks in advance all.
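For anyone wanting to reproduce the pattern with numbers rather than a graph, here's a minimal Python sketch (the paths are made up) that copies a file in chunks and logs the rolling rate once a second; on a healthy gigabit link it should print a steady ~100 MB/s rather than spikes and troughs:

```python
# Minimal sketch: copy a file in chunks and log the per-second transfer
# rate, to turn the sawtooth in the screenshot into concrete numbers.
import time

SRC = r"\\serverA\share\test.bin"   # hypothetical UNC path on the source VM
DST = r"E:\backup\test.bin"         # hypothetical path on the Drobo volume
CHUNK = 1024 * 1024                 # 1 MiB reads

with open(SRC, "rb") as src, open(DST, "wb") as dst:
    done = 0
    window_start, window_bytes = time.time(), 0
    while True:
        chunk = src.read(CHUNK)
        if not chunk:
            break
        dst.write(chunk)
        done += len(chunk)
        window_bytes += len(chunk)
        elapsed = time.time() - window_start
        if elapsed >= 1.0:
            print(f"{window_bytes / elapsed / 1e6:8.1f} MB/s  ({done / 1e6:.0f} MB total)")
            window_start, window_bytes = time.time(), 0
```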
 
My only theory thus far is that the speed increase on a new transfer was because that part of the RAID array had already been built on the Drobo...
 
Erm, is Remote Differential Compression turned on? I had similar things with this on, and I also disabled Receive Side Scaling, so I can't be 100% sure which was the cause.
 
[Darkend]Viper;19147738 said:
Erm, is Remote Differential Compression turned on? I had similar things with this on, and I also disabled Receive Side Scaling, so I can't be 100% sure which was the cause.
I don't think it's on; it's not installed as a Windows component on either the source server or the destination.

What's odd is that the first run of a backup shows the strange transfer rates above, but a second run is constantly high! :confused:
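For what it's worth, the Receive Side Scaling state can be checked without BACS via the stock netsh command; a small Python wrapper, assuming a reasonably recent Windows build whose output label matches:

```python
# Quick sketch: check whether Receive Side Scaling is enabled using the
# stock Windows "netsh interface tcp show global" command. The exact label
# text varies slightly between Windows versions, so the match is loose.
import subprocess

out = subprocess.run(
    ["netsh", "interface", "tcp", "show", "global"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    if "Receive-Side Scaling" in line:
        print(line.strip())   # e.g. "Receive-Side Scaling State : enabled"
```

If it reports enabled, `netsh interface tcp set global rss=disabled` turns it off for testing.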

Just a thought: could it be a Cat5 cable rather than Cat5e?
Do you have jumbo frames turned on?
Brand new Cat5e cables, and jumbo frames isn't something I can turn on on the Drobo :(
I've just set the MTU on the second LAN (the one that goes to the Drobo) to 8000. It's still doing the same as above, but the troughs in the graph are now a little higher...
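One way to confirm what frame size actually survives the path to the Drobo is a don't-fragment ping sweep; a rough Python sketch using the standard Windows ping flags (-f for don't fragment, -l for payload size), with the Drobo's address assumed:

```python
# Sketch: binary-search for the largest unfragmented ICMP payload to the
# Drobo's iSCSI address. Path MTU = payload + 28 bytes (20 IP + 8 ICMP).
import subprocess

HOST = "192.168.1.100"   # hypothetical Drobo address on "LAN iSCSI 192"

def fits(payload: int) -> bool:
    r = subprocess.run(
        ["ping", "-f", "-l", str(payload), "-n", "1", HOST],
        capture_output=True, text=True,
    )
    return r.returncode == 0 and "fragmented" not in r.stdout.lower()

lo, hi = 1200, 8972          # bracket standard vs 9000-byte jumbo frames
while lo < hi:               # assumes the low end of the bracket succeeds
    mid = (lo + hi + 1) // 2
    lo, hi = (mid, hi) if fits(mid) else (lo, mid - 1)
print(f"max payload {lo} bytes -> path MTU about {lo + 28}")
```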
 
I'm not sure it's the Drobo causing this...
I'm just running a 100GB backup from the source server to a local RAID array on the backup server, and it's exhibiting the same strange, abnormal transfer rates...?
Hmm, interesting.

I'm wondering if it's a motherboard bandwidth problem... although I doubt that, tbh.
The RAID card for the system is in a PCI-X slot, the dual-gig card for the main network connection is in a PCI-E x4 slot, and the gig port to the Drobo is onboard (I assume either PCI or PCI-E).

I'm now testing from the source server to another, different server, to see whether it's the source or indeed the backup server.
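Since the local RAID copy shows the same pattern, it may also be worth timing the array on its own; a crude sequential-write benchmark in Python (path and test size are assumptions) shows whether the disk side alone can sustain gigabit-level rates (~110 MB/s):

```python
# Rough sketch: time large sequential writes to the backup server's local
# RAID array, to see whether the disks alone are the bottleneck.
import os, time

PATH = r"D:\test\bench.tmp"   # hypothetical path on the local RAID array
SIZE_MB = 2048                # 2 GB test file, big enough to defeat caching
block = os.urandom(1024 * 1024)

start = time.time()
with open(PATH, "wb") as f:
    for _ in range(SIZE_MB):
        f.write(block)
    f.flush()
    os.fsync(f.fileno())      # make sure the data actually hit the disks
elapsed = time.time() - start
print(f"{SIZE_MB / elapsed:.0f} MB/s sequential write")
os.remove(PATH)
```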
 
Is it not a driver/teaming issue? It seems to be peaking at almost 100Mbit; that's why I'm wondering.
I'd be surprised, as that wouldn't explain the drops...?

Both NICs in the team are connected at 1Gig, the virtual adaptor reads 2Gig, and the team is set to smart load balancing/redundancy.

Copying from the source server to a different server flies along without issue.
So it's definitely the LAN side of the backup server.
Hmm...

##EDIT##
I've just killed the team and configured it to use just one 1Gig NIC to the main LAN. Testing now... odd. With a single 1Gig NIC there's a constant transfer rate of about 4-5% utilisation, so on 1Gig that's 40-50Mbit/s.
At least, that's what it reads. According to the process monitor for the backups, it's taking 18 seconds for every 0.1GB; that's 5.5MB/s!
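For what it's worth, those two figures describe the same speed once megabits and megabytes are untangled; a quick check:

```python
# Sanity check: 18 seconds per 0.1 GB and "4-5% utilisation" of a 1 Gbit
# link are the same speed in different units.
gb_per_chunk, secs = 0.1, 18
mb_per_s = gb_per_chunk * 1000 / secs          # ~5.6 MB/s
mbit_per_s = mb_per_s * 8                      # ~44 Mbit/s
print(f"{mb_per_s:.1f} MB/s = {mbit_per_s:.0f} Mbit/s "
      f"= {mbit_per_s / 1000:.1%} of a 1 Gbit link")
```

So the utilisation graph and the process monitor agree; the problem is that ~44 Mbit/s is far below what a gigabit NIC should manage.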
 
What about using the other NIC in the team, to see if that's any better?
I'd be surprised if it was, but I'll give it a try when I'm next in work (working remotely atm).
I've generally tried to keep teams on the same chipset/card.

Trying to think what else it could be.
 
If killing the team fixes it, then it's either the teaming setup itself (using Windows load balancing at Layer 3 as well as Layer 2 can do strange things), a buggy driver worth updating, or a good old-fashioned Layer 1 issue, like half the team patched into one VLAN/switch and the other half into another, i.e. the standalone and teamed NICs got muddled when physically wiring up. Setting it to use the other (previously teamed) NIC on its own should shed light on this.
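A quick way to sanity-check the wiring from a remote session is to dump each adapter's link state; a sketch using the third-party psutil package (pip install psutil):

```python
# Sketch: list each adapter's link state, speed and MTU, which makes
# mis-patched or mis-teamed NICs easy to spot without physical access.
import psutil

for name, stats in psutil.net_if_stats().items():
    print(f"{name:25s}  up={stats.isup}  speed={stats.speed}Mbit  mtu={stats.mtu}")
```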
 
It doesn't strictly 'fix it'.
It fixes it in the sense that it smooths out the transfer rate, but it's still far slower than it should be.

I'm not using Windows NLB; correct me if I'm wrong, but that's not actually used for NIC teaming, it's for load-balancing services (as evidenced by the CAS array I set up at work :p)?

I'm using the Broadcom BACS suite to set up the teaming, as they are Broadcom NICs. Latest drivers, no VLANs, and all the servers and NICs involved go into the same switch.
 
Could still be software in that case. Have you tried just lumping a DVD/VHD image across?
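Going a step further than an image copy, a raw TCP blast between the two boxes takes both the disks and the backup software out of the equation entirely; a rough two-sided sketch (the port number is arbitrary):

```python
# Sketch of a raw TCP throughput test. Run "python test.py server" on the
# backup server, then "python test.py client <server-ip>" on the source.
import socket, sys, time

PORT, CHUNK = 50007, 1024 * 1024

if sys.argv[1] == "server":
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        total, start = 0, time.time()
        while data := conn.recv(CHUNK):   # count bytes until client closes
            total += len(data)
        elapsed = time.time() - start
        print(f"{total / 1e6:.0f} MB in {elapsed:.1f}s = {total / elapsed / 1e6:.1f} MB/s")
else:  # client: blast zeros at the server for ~10 seconds
    with socket.create_connection((sys.argv[2], PORT)) as conn:
        end = time.time() + 10
        while time.time() < end:
            conn.sendall(b"\0" * CHUNK)
```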
Yes, seems fine.

I'm awaiting a colleague at work who's got physical access to the server to swap the cables over, so I can try teaming the onboard NIC with the add-in NIC, rather than add-in + add-in.
 
I'd say it's a software issue based on that. The backup process may well be compressing/deduping before copying.
I've just run Windows Backup from a laptop here, and it spent 90% of the time with zero activity, then every so often maxed out the NIC for all of a quarter of a second. That seems a similar pattern to what you see, only more spread out, which would kind of make sense given a server has far greater disk I/O and CPU grunt than a laptop.
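To put numbers on that burstiness, the NIC's byte counters can be sampled at a fast interval; another psutil sketch (the adapter name is a guess, list the real ones with net_if_stats()):

```python
# Sketch: sample the NIC's sent-byte counter four times a second and print
# per-interval throughput, so bursty vs smooth patterns can be compared
# between machines. Stop with Ctrl+C.
import time
import psutil

NIC = "Local Area Connection"   # hypothetical adapter name

prev = psutil.net_io_counters(pernic=True)[NIC].bytes_sent
while True:
    time.sleep(0.25)
    cur = psutil.net_io_counters(pernic=True)[NIC].bytes_sent
    rate = (cur - prev) / 0.25          # bytes per second over the interval
    bar = "#" * int(rate / 2.5e6)       # one '#' per ~2.5 MB/s
    print(f"{rate / 1e6:6.1f} MB/s {bar}")
    prev = cur
```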
 
I'm not sure that theory is correct, though; as I said, a transfer from the source server to another file server runs at a constant speed of about 90Mb/s.
It's just this one backup server, newly built with the latest drivers, where it causes issues, irrespective of teaming or not. :(
 
I've just removed the Broadcom drivers and the Broadcom network utility, and installed the HP ones for the card (which, bizarrely, doesn't pick up the onboard NIC as an HP :p).
Testing now...
 