Fast connections between PCs

Associate · Joined 7 Nov 2025 · Posts: 3 · Location: Hereford
Hi - I have a cluster of PCs that I use to collect data from my astronomy kit, and I want to figure out the best/fastest way to transfer data files (usually a few GB) from each PC onto a central store. The current setup has a 1Gb switch that they are all connected to. Would a 10Gb switch help, or fibre links? Any ideas welcome.
 
"A few GB" isn't really that much over gigabit. How long are transfers actually taking, and what size are they?

Is there a bottleneck on your LAN somewhere?
 
Thanks - so the main issue is when I'm collecting data from the 4 active PCs and want to store them on either a single PC, my local home cloud unit, or a local NAS drive. Each 300Mb file takes over 60 seconds, and I might have 50+ per PC, so it takes ages to get the data from the observatory to a single storage point.
 
300Mb in 60 seconds? That seems slow even for hard drives.

Have the PCs/NAS at each end got SSDs? How are you transferring the files? SMB shares or some other protocol? Push vs pull?
 
All the units have SSDs. Transfers are via simple File Explorer copies (could this be the issue?), which I assume are "push" actions from one PC to the other? All starting to get beyond my understanding, hence the questions. Cheers.
 
Thanks - so the main issue is when I'm collecting data from the 4 active PCs and want to store them on either a single PC, my local home cloud unit, or a local NAS drive. Each 300Mb file takes over 60 seconds, and I might have 50+ per PC, so it takes ages to get the data from the observatory to a single storage point.
Based on your file size and transfer time, you don't have a throughput bottleneck with 1Gb connectivity. Your bottleneck lies elsewhere. On 1Gb, a 300MB (assuming megabyte) file should take a few seconds (transfer speed should be around 120MB/s).
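For reference, the back-of-envelope sums as a rough Python sketch (the 0.94 usable fraction is just my assumed allowance for Ethernet/TCP/SMB overhead, not a measured value):

Code:
# Expected time for one 300MB file over gigabit Ethernet.
# Real-world SMB on a healthy 1Gb link typically manages 110-120MB/s.
FILE_SIZE_MB = 300
LINK_RATE_MBIT = 1000                      # 1Gb/s in megabits per second
USABLE_FRACTION = 0.94                     # assumption, not a measurement

usable_mb_per_s = (LINK_RATE_MBIT / 8) * USABLE_FRACTION
print(f"~{usable_mb_per_s:.0f} MB/s usable")                 # ~118 MB/s
print(f"~{FILE_SIZE_MB / usable_mb_per_s:.1f} s per file")   # ~2.6 s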

Outline these things for us:

Frequency of transfers
Number of files / total transfer size - I assume ~15GB per PC
Storage on each device that transmits and receives these files - PC 1 - HDD, PC 2 - SSD, etc.

Edit: just saw your newer post. If you are using SSDs (even SATA-based) these should be maxing out the 1Gb throughput no problem. You have an issue somewhere.

Edit 2: I see you're in Hereford. PM me and we could look at resolving this issue in person and automating the transfer for you.
 
Are all your computers actually connecting at gigabit? Presumably they are plugged into a switch - is that showing a gigabit light?

Presumably no WiFi or powerline extenders are involved anywhere, as they are a very likely cause of transfer speed issues.
 
You need to describe the process. How is this data transferring - are you dragging it to a central shared folder? Where are you currently storing these files? If it's a NAS, what brand and model? Draw out the network topology.
 
Ignoring protocols, overheads and inefficiency, each 300Mb file should take roughly 3 seconds. That won't meaningfully change with 10Gb. Document your workflow, then confirm each link in the chain by testing it and proving its capacity; that should identify the bottleneck. It's not difficult, it just requires a systematic and methodical approach.
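If you want to prove each link's raw capacity independently of SMB and File Explorer, iperf3 is the usual tool for this. Failing that, here's a crude stand-in sketched in Python (untested; the port is arbitrary and the hostnames are up to you). Run it with "server" on the receiving machine first, then "client <server-ip>" on each sender in turn:

Code:
# Crude single-link throughput test over raw TCP (a stand-in for iperf3).
import socket, sys, time

PORT = 5201
CHUNK = 1024 * 1024          # 1 MiB send/receive buffer
TOTAL = 500 * CHUNK          # push 500 MiB per test

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        with conn:
            received, start = 0, time.monotonic()
            while data := conn.recv(CHUNK):
                received += len(data)
            elapsed = time.monotonic() - start
            print(f"{received / elapsed / 1e6:.1f} MB/s from {addr[0]}")

def client(host):
    payload = b"\0" * CHUNK
    with socket.create_connection((host, PORT)) as conn:
        start, sent = time.monotonic(), 0
        while sent < TOTAL:
            conn.sendall(payload)
            sent += CHUNK
    print(f"sent {TOTAL / 1e6:.0f} MB in {time.monotonic() - start:.1f} s")

if __name__ == "__main__":
    server() if sys.argv[1] == "server" else client(sys.argv[2])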
 
In general, assuming those figures are accurate, 300MB files taking 60 seconds (the optimistic estimate) means they are being transferred at around 5MB(ytes) per second, or 40Mb(its) per second. That's far from the 1Gb(it) per second rate possible via the switch (~110MB(ytes) per second), which, as many have mentioned, suggests something else is slowing everything down to that rate.
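For completeness, the arithmetic behind those figures in Python:

Code:
# Working back from the reported numbers: 300MB in 60+ seconds.
observed_mb_per_s = 300 / 60                 # 5.0 MB/s at best
observed_mbit_per_s = observed_mb_per_s * 8  # 40 Mb/s
print(f"~{observed_mb_per_s:.0f} MB/s, i.e. ~{observed_mbit_per_s:.0f} Mb/s")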

1. What is the switch? Can you give us the brand and model number?

2. Does each device (the PC handling the central store, and each one handling an astronomy kit) have a 1Gb(it) network port? Or is just the switch 1Gb(it), with slower ports on the computers?
 
Ignoring protocols, overheads and inefficiency, each 300Mb file should take roughly 3 seconds. That won't meaningfully change with 10Gb. Document your workflow, then confirm each link in the chain by testing it and proving its capacity; that should identify the bottleneck. It's not difficult, it just requires a systematic and methodical approach.
It would. Parallel transfers on 10Gb would make a big difference. File Explorer loves a sequential transfer, though, as I'm sure you know. No point doing parallel on 1Gb, however, as the SSDs should top out the 1Gb throughput easily.

Tossing away 1Gb for overheads (being overzealous here), that leaves 9Gb for transfers and, at 1125 MB/s, could easily transfer three of those 300MB files in parallel.

@op I could easily automate this for you and future-proof it with parallel transfer capabilities that you could enable/disable at will.
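Something along these lines, say - a rough Python sketch only, where the paths, the worker count and the .fits extension are all my guesses rather than the OP's actual setup:

Code:
# Copy capture files to the central share three at a time, instead of
# File Explorer's one-by-one approach.
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path
import shutil

SOURCE = Path(r"D:\captures")             # hypothetical local capture folder
DEST = Path(r"\\store\astro\incoming")    # hypothetical central share

def copy_one(src: Path) -> str:
    shutil.copy2(src, DEST / src.name)    # copy2 keeps timestamps
    return src.name

files = sorted(SOURCE.glob("*.fits"))
with ThreadPoolExecutor(max_workers=3) as pool:
    for name in pool.map(copy_one, files):
        print(f"copied {name}")

On 1Gb those three streams would just share the same ~110MB/s, which is why the parallelism only pays off on the faster link.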
 
It would. Parallel transfers on 10Gb would make a big difference. File Explorer loves a sequential transfer, though, as I'm sure you know. No point doing parallel on 1Gb, however, as the SSDs should top out the 1Gb throughput easily.

Tossing away 1Gb for overheads (being overzealous here), that leaves 9Gb for transfers and, at 1125 MB/s, could easily transfer three of those 300MB files in parallel.

@op I could easily automate this for you and future-proof it with parallel transfer capabilities that you could enable/disable at will.
That's a lovely idea, but unfortunately the issue here is not a saturated gigabit pipe. OP is using less than 5% of what they say they have available. Fix the actual bottleneck and, in very simple terms, it goes 20x faster; that brings the current 50+ minute workload down to something like 3 minutes in real life.
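The rough sums behind that estimate, taking the reported figures at face value:

Code:
files, file_mb = 50, 300
now_minutes = files * file_mb / 5 / 60      # ~5 MB/s observed -> ~50 min/PC
fixed_minutes = files * file_mb / 110 / 60  # ~110 MB/s achievable -> ~2.3 min
print(f"{now_minutes:.0f} min now vs ~{fixed_minutes:.1f} min once fixed")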

That's assuming we aren't dealing with a remote connection that's upload-limited (the observatory) or a bottleneck pulling data from the equipment on-site, but 40Mbit is an odd number that doesn't fit easily into my logic unless perhaps a VPN is involved. We really need proper details from OP to diagnose the issue.
 
I reckon there's something fundamentally wrong here; it shouldn't be that slow, and it doesn't sound like upgrading the 1Gb switch to something faster will make any difference.

OP, can you tracert or pathping between the machines? That might provide some insight.
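If it helps, a quick Python wrapper to run both against each machine from the storage PC (the hostnames are made up; pathping alone can take a few minutes per host):

Code:
import subprocess

HOSTS = ["obs-pc1", "obs-pc2", "obs-pc3", "obs-pc4"]   # placeholder names

for host in HOSTS:
    for tool in ("tracert", "pathping"):
        print(f"--- {tool} {host} ---")
        result = subprocess.run([tool, host], capture_output=True, text=True)
        print(result.stdout)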
 
That's assuming we aren't dealing with a remote connection that's upload-limited (the observatory) or a bottleneck pulling data from the equipment on-site, but 40Mbit is an odd number that doesn't fit easily into my logic unless perhaps a VPN is involved. We really need proper details from OP to diagnose the issue.

If they're wired into a local switch, then it shouldn't be a remote connection (one that uses a VPN to connect). However, thinking about it now, could really long cable runs be involved? (Beyond the length spec of the cable, perhaps?)

I was also thinking that 40Mb(it) per second didn't really fit any cable. But that was using 60s as the baseline, when OP actually said 60s+ (plus), meaning it's likely more, with 60s as the best observed transfer. So that got me thinking: maybe 5 x 20Mb(it) per second connections? Especially if all devices are transferring to the core system at the same time. Which then suggests the switch might not be a 1Gb(it) switch, or has been set to 100Mb(it) rather than running at 1Gb(it).

So many possibilities here, need more info from OP.
 
That's a lovely idea, but unfortunately the issue here is not a saturated gigabit pipe. OP is using less than 5% of what they say they have available. Fix the actual bottleneck and, in very simple terms, it goes 20x faster; that brings the current 50+ minute workload down to something like 3 minutes in real life.

That's assuming we aren't dealing with a remote connection that's upload-limited (the observatory) or a bottleneck pulling data from the equipment on-site, but 40Mbit is an odd number that doesn't fit easily into my logic unless perhaps a VPN is involved. We really need proper details from OP to diagnose the issue.
Well aware that it's not presently being saturated. That's why my earlier post said:

If you are using SSDs (even SATA-based) these should be maxing out the 1Gb throughput no problem. You have an issue somewhere.
The 1Gb is not being utilised - that was well established long before you posted.

However, you said:
Ignoring protocols, overheads and inefficiency, each 300Mb file should take roughly 3 seconds. That won't meaningfully change with 10Gb.
Yet I demonstrated how 10Gb could be at least 3x faster than 1Gb using parallel transfers. Of course, that entirely depends upon your definition of "meaningful", which is far too subjective to quantify. Maybe 1Gb alone would be enough; maybe the OP doesn't have any time to waste doing this every day but has no choice? Only the OP knows.
 
I demonstrated how 10Gb could be at least 3x faster than 1Gb using parallel transfers. Of course, that entirely depends upon your definition of "meaningful", which is far too subjective to quantify. Maybe 1Gb alone would be enough; maybe the OP doesn't have any time to waste doing this every day but has no choice? Only the OP knows.

That's ignoring the problem, though; buying a faster switch would be like buying a jet plane to go to the corner shop for a packet of crisps.
 