Slow file transfer over network

Edit: I've fixed this; it wasn't a network issue. I disabled Windows write-cache buffer flushing and changed 'Cache mode' to 'Write back'.
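For anyone hitting the same thing: you can sanity-check the cache state from PowerShell (a sketch using the in-box Storage module on Server 2016; the fields reported can vary by controller):

    # Show whether the device cache is enabled and whether Windows
    # believes the disks are power-protected
    Get-PhysicalDisk | Get-StorageAdvancedProperty

Worth remembering that write-back caching without battery/flash protection means a power cut can lose whatever was still in the cache, so a UPS is a good idea.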

I made a few changes all at once, so this may be harder to pin down.

I recently built a new server: a Dell PowerEdge T30 with a 120GB SSD I had spare for MS Server 2016, plus a 4-port SATA card and 4 x 8TB Seagate IronWolf drives configured in RAID 5. The RAID array is on the four motherboard SATA ports; the SSD and optical drive are on the 4-port card.

I also changed my router from an Asus N66u to a small box running pfSense.

If I transfer a large file from my desktop, the speed starts off at 115 MB/s, so I guess it's saturating the gigabit network, but then it slows to about 20 MB/s for the rest of the transfer. Between my desktop and the server there are two TP-Link gigabit switches. My old server didn't have RAID, and I just checked it: it sustains 100 MB/s transfers no problem.
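To rule the network in or out, iperf3 runs entirely in memory on both ends, so the disks never get involved (a sketch; 192.168.1.10 stands in for the server's address):

    # on the server
    iperf3 -s

    # on the desktop, 30-second run
    iperf3 -c 192.168.1.10 -t 30

If that holds ~940 Mbit/s for the whole run, the switches and pfSense box are fine and the slowdown is on the disk side.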

Is the problem with the new server or something else?
 
Yes, if you can afford to lose the capacity.

You're much less likely to lose the array during a rebuild. It'll be faster as well.

As always, RAID != backup, so anything important should be backed up elsewhere.
 
Yeah, I looked at RAID 10 too; 5 is better for me because I get more capacity. It did take a long time to verify the array, but the disk speed is now very fast: 650 MB/s read and 560 MB/s write.

I'm better off than I was before, as the data used to sit on single disks. I also have a NAS, so anything important will be on there as well.
 
Verification time is irrelevant. The point is that if a single drive of that size fails and you replace it, it's highly likely you will suffer some degree of data loss during the rebuild, up to and including the full array. Trust me when I say that recovering data from a RAID 5 array is a ball ache. This is why people stress that RAID is *NOT* a backup. How important that loss is depends on the data stored, your backup strategy, etc.
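To put rough numbers on it (assuming the 1-in-10^14-bits unrecoverable read error rate typically quoted for consumer drives; IronWolf drives are rated 1 in 10^15, which improves the odds a fair bit):

    data read during a rebuild = 3 surviving drives x 8 TB = 24 TB ≈ 1.9 x 10^14 bits
    expected UREs at 10^-14 per bit = 1.9 x 10^14 x 10^-14 ≈ 1.9

So at the consumer error rate you'd expect, on average, nearly two unreadable sectors over a full rebuild; even at 10^-15 there's still a non-trivial chance of hitting one.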

Unless you have a specific need for Windows on bare metal, perhaps consider UnRAID? You choose how many parity drives to use, expanding an array is quick and easy, and the VM/Docker management is superb. With a cache drive, writes are quick; reads are limited to the speed of the drive the data is stored on, but you can pass drives/resources straight through to VMs quite easily.

On the face of it, those read/write speeds (especially the write) seem unlikely for RAID 5.

I agree, but it's theoretically possible to get results like that depending on how the numbers were obtained and what the benchmarking method was. They do look optimistic.
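Easy enough to check whether cache is flattering the numbers: Microsoft's diskspd can be told to bypass both the software and hardware write cache (a sketch; the file path, sizes and read/write mix are placeholders to adjust for your setup):

    # 10 GB test file, 64 KB blocks, 30 s, 4 threads, 8 outstanding I/Os,
    # 50/50 read/write; -Sh disables software and hardware write caching
    diskspd.exe -c10G -b64K -d30 -t4 -o8 -w50 -Sh D:\testfile.dat

If the array still shows 650/560 MB/s with caching disabled, fair enough; if the write figure collapses, the earlier numbers were mostly cache.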
 