Hi,
Apart from NUMBER_DATA_BUFFERS and SIZE_DATA_BUFFERS, are there any quick wins for NetBackup 7.5?
Currently seeing transfer rates of around 30MB/s in an enterprise environment.
Master server - physical W2K3 R2, 16 cores, 16GB RAM
Media server - physical W2K3 R2, 16 cores, 16GB RAM
Clients - mainly VM guests running on ESXi 5.1 hosted on Gen8 blades, some on older R900s. Storage is FC disk arrays.
I can't seem to spot where the contention is. I can get 250MB/s+ sequential read and write speeds using sqlio or other I/O testing tools, and I can saturate the network links when I test throughput (1Gbit and 2Gbit). Yet when the backups are running, none of the following shows any sign of being the bottleneck (more on the bptm buffer-wait counters after the list):
Master server RAM, CPU, disk IO
Media server RAM, CPU, disk IO
Target backup VM RAM, CPU, disk IO
Any of the Cisco switches between boxes
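In case it helps anyone point me in the right direction, the next thing I'm planning to check is the buffer-wait counters that bptm writes at the end of each job on the media server (this assumes bptm logging is enabled, i.e. the bptm folder exists under the NetBackup logs directory and verbosity is turned up). Something like this, with the install path being my assumption of a default location:

findstr /i "waited" "C:\Program Files\Veritas\NetBackup\logs\bptm\*.log"

My understanding is that a high "waited for full buffer" count means the write side is sitting around waiting for data from the client/network, while a high "waited for empty buffer" count means the storage the media server is writing to is the slow side.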
AV has been turned off for the tests; it doesn't seem to make much difference, as I've already spent hours defining exclusions.
The reason I'm testing the network is that the NetBackup servers have directly attached disk arrays, while the blades are on an FC SAN-type network.
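To try to separate raw client read speed from the network and the media server, I'm also going to run the bpbkar-to-null test from the performance tuning guide on one of the slow clients. From memory the Windows form is roughly the following (syntax and path should be double-checked against the guide, D:\data is just a placeholder for whatever the policy backs up, and the 2> file simply captures bpbkar's progress output):

"C:\Program Files\Veritas\NetBackup\bin\bpbkar32" -nocont D:\data > NUL 2> bpbkar_test.txt

If that also tops out around 20-30MB/s, the problem is on the client read side rather than anything NetBackup is doing on the wire.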
I realise the storage is on the slow side, but I'd still expect better than the 30MB/s average I'm seeing. For info, one physical client regularly hits 80MB/s, yet another backup policy on that same box only gets 20MB/s. This leads me to believe it's some form of configuration issue, but I can't work out what. I've had limited success adjusting NUMBER_DATA_BUFFERS and SIZE_DATA_BUFFERS to the values widely circulated on NetBackup forums, but I still feel there's room for improvement.
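For completeness, this is how I've been applying the buffer settings on the media server, in case I've got the mechanics wrong (paths assume a default install; the values are just ones I've been experimenting with, not a recommendation):

<install_path>\NetBackup\db\config\SIZE_DATA_BUFFERS    - a single line containing the buffer size in bytes, e.g. 262144
<install_path>\NetBackup\db\config\NUMBER_DATA_BUFFERS  - a single line containing the buffer count, e.g. 256

New jobs pick the values up, and as far as I can tell the bptm log records which buffer size and count were actually used, so I've been confirming there rather than assuming the touch files are being read. Shared memory use is roughly SIZE x NUMBER per concurrent stream/drive, which is why I've kept the numbers modest on a 16GB box.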