10Gbit HDD setup

Hi.

What HDD setup do I need to achieve a 10Gbit transfer rate between two computers? Both have a 10Gbit network card attached to my switch, which has two 10Gbit ports on it.

I currently have my main desktop with the following:

  • 1x 512GB NVMe OS drive
  • 4x 2.5" SATA 3 SSDs
  • 3x 3.5" mechanical HDDs (WD Red)

My server PC has the following:
  • 6x 3.5" mechanical HDDs (WD Red)
  • 1x 2.5" SATA 3 SSD OS drive (Ubuntu Server)

When I copy files between my server and any of the drives in my main PC, I only get 1.5Gbit speeds at best.

That's nowhere near 10Gbit, and I assume this is because my server PC is using WD Red HDDs?

Do my server AND desktop both need to transfer between NVMe drives to get there?
 
First thing I would do is test transfer speeds with no disks involved, to see what the network itself can do (there are tools that move data to/from memory rather than disk).

Something like iperf, maybe?
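If you'd rather not install anything, a rough memory-to-memory test in Python does much the same job as iperf. A minimal sketch (the port number and the 1 GiB test size are arbitrary assumptions, adjust to taste):

# Minimal memory-to-memory network throughput test (no disks involved).
# Run "python3 netbench.py server" on the server, then
# "python3 netbench.py client <server-ip>" on the desktop.
import socket
import sys
import time

PORT = 5201            # arbitrary choice (same default port iperf3 uses)
CHUNK = 1024 * 1024    # 1 MiB buffer held in memory
TOTAL = 1024 * CHUNK   # send 1 GiB in total

def server() -> None:
    with socket.create_server(("", PORT)) as srv:
        conn, _addr = srv.accept()
        with conn:
            received = 0
            start = time.perf_counter()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
            elapsed = time.perf_counter() - start
            gbps = received * 8 / elapsed / 1e9
            print(f"received {received / 1e9:.2f} GB in {elapsed:.1f}s = {gbps:.2f} Gbit/s")

def client(host: str) -> None:
    payload = bytes(CHUNK)    # zero-filled buffer, never touches disk
    with socket.create_connection((host, PORT)) as conn:
        start = time.perf_counter()
        sent = 0
        while sent < TOTAL:
            conn.sendall(payload)
            sent += CHUNK
    elapsed = time.perf_counter() - start
    print(f"sent {sent / 1e9:.2f} GB in {elapsed:.1f}s = {sent * 8 / elapsed / 1e9:.2f} Gbit/s")

if __name__ == "__main__":
    if len(sys.argv) < 2:
        sys.exit("usage: netbench.py server | netbench.py client <server-ip>")
    if sys.argv[1] == "server":
        server()
    else:
        client(sys.argv[2])

If that shows close to 10Gbit with no disks involved, you know the bottleneck is the storage rather than the network.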
 
If you're reading/writing to mechanical HDDs then they'll only do maybe 200MB/s (about 1.6Gbps), so you'll never get a 10Gbps transfer. Your maximum transfer rate is only as fast as the weakest link, and that's your mechanical hard drives. You'll need RAID, different drives, some sort of caching mechanism, or a combination thereof to saturate 10Gbps.
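To put rough numbers on that (the 200MB/s figure is a typical WD Red sequential rate, not a measurement of your actual drives):

# Rough conversion from drive throughput to link utilisation.
hdd_mb_per_s = 200                  # typical WD Red sequential speed (assumption)
link_gbit = 10

hdd_gbit = hdd_mb_per_s * 8 / 1000  # 200 MB/s -> 1.6 Gbit/s
print(f"single HDD: {hdd_gbit:.1f} Gbit/s "
      f"({hdd_gbit / link_gbit:.0%} of a {link_gbit}Gbit link)")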
 
You need the disks to be in a high-performance array attached to a high-speed card. Best bet is to get an HBA card (or a RAID card that can be flashed to IT mode) and set them all up as one ZFS array.
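As a rough idea of what that looks like on the Ubuntu box, a sketch assuming a 6-disk RAIDZ2 pool called tank (the /dev/sdX names are placeholders; on a real pool use /dev/disk/by-id paths, and you need zfsutils-linux plus root):

# Sketch: create a 6-disk RAIDZ2 pool on the Ubuntu server via the zfs tools.
import subprocess

DISKS = [f"/dev/sd{letter}" for letter in "bcdefg"]   # hypothetical device names

subprocess.run(
    ["zpool", "create",
     "-o", "ashift=12",          # 4K-sector alignment, sensible for WD Reds
     "tank", "raidz2", *DISKS],
    check=True,
)
subprocess.run(["zpool", "status", "tank"], check=True)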

Onboard SATA ports are notoriously poor at multi-disk transfer speeds, especially on normal (i.e. non-server) motherboards, and you will never get 10Gb speeds from them even if the attached array could theoretically do it. You have to use a card designed for the job to get the speeds, but with just six disks you won't hit 10Gb; something near 5Gb is possibly achievable.
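Rough back-of-the-envelope for where that ~5Gb figure comes from (the per-disk speed and the RAIDZ2 layout are assumptions, and real-world numbers will come in lower than the raw sum):

# Back-of-the-envelope sequential throughput for a 6-disk RAIDZ2 pool.
disks = 6
parity = 2                  # RAIDZ2 spends two disks' worth of capacity on parity
per_disk_mb_s = 180         # conservative WD Red sequential figure (assumption)

data_disks = disks - parity
array_mb_s = data_disks * per_disk_mb_s
print(f"~{array_mb_s} MB/s sequential = ~{array_mb_s * 8 / 1000:.1f} Gbit/s best case")
# ~720 MB/s = ~5.8 Gbit/s, before protocol and filesystem overheads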
 
AHCI SSDs crap out at around 550MB/s in the real world on SATA 6Gb/s, and you're running SSDs on SATA 3Gb/s, which halves that; at best your WD Reds will hit about the same sequentially, depending on capacity/generation/interface. If you can't read/write at over 1GB/s on both ends, then you've just shifted the bottleneck from the network interface to the storage, and that's before you deal with the protocols and system overheads needed to actually do 10Gb on both ends.

It's a subject that needs thought and planning to understand the benefits and implications. Having two boxes with a RAM disk and a 10Gb link between them is a novelty, but realistically pointless in the real world for all but very niche situations. Same with small NVMe drives: you're talking minutes of read/write time to fill a disk, so is that actually suitable for your needs?
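For a sense of scale on that last point (assuming the 512GB NVMe from the first post and a fully saturated link):

# How long a saturated 10Gb link takes to move a small NVMe drive's worth of data.
drive_gb = 512                  # the OP's NVMe OS drive size
link_gbit = 10
link_gb_per_s = link_gbit / 8   # 1.25 GB/s, ignoring protocol overhead

minutes = drive_gb / link_gb_per_s / 60
print(f"{drive_gb} GB over {link_gbit} Gbit/s ≈ {minutes:.1f} minutes")
# roughly 7 minutes, assuming the storage on both ends can keep up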
 