10Gb Ethernet + Windows shares.

It's another set of "stupid" questions.

Will 10GbE give me greater than 200MB/sec read/write on a plain Windows shared drive (assuming the underlying storage can handle it)? I keep looking at options, but I tend to run into some form of iSCSI which... doesn't really match my needs.
 
My basic aim is to move my main RAID to a homebrew NAS box.

As such it would hold pretty much anything and everything. Think content about as varied as someone's Steam folder.

I'm happy to use pretty much any protocol that allows access from multiple machines on the network simultaneously, so the normal iSCSI setup, being a one-server/one-client sort of affair, wouldn't really do. If I install an iSCSI initiator on the same box the drives are housed in so it can see them locally, then (as far as I know) there are issues with potential corruption on the iSCSI target when another box wants to access it.

I'm going to stick the 10GbE card in there alongside the standard NIC, and both would need to be able to access the drives. The OS would need to route all traffic for a specific IP address (my main machine's) over the 10GbE link and everything else over the regular gigabit. That's easy enough in Windows, so I'd assume most OSes would handle it.
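For reference, something like this is what I have in mind (addresses all made up, and it's one way of doing it rather than the only way):

    :: Example addressing: NAS gigabit NIC = 192.168.1.10 on the normal LAN,
    :: NAS 10GbE = 192.168.10.1 and main PC 10GbE = 192.168.10.2 on their own subnet.
    :: Traffic between the two 192.168.10.x addresses can only use the 10GbE link,
    :: so the main PC maps its shares against that address:
    net use Z: \\192.168.10.1\storage /persistent:yes

    :: Everything else in the house maps against the gigabit address as normal:
    net use Z: \\192.168.1.10\storage /persistent:yes

    :: "route print" on either end confirms which interface a destination will use.
    route print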

As such, unless there's something I'm not aware of, I'm planning to use simple TCP/Windows shares.

I don't mind the NAS box running another OS if needed.
 
Generally speaking, the answer to your question is that you'd be limited by the speed of your disk(s) rather than the network. So with a single modern consumer SATA drive I'd expect to see 150MB/s+ read speed over 10GbE and SMB.
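For a rough sense of scale: gigabit is ~125MB/s on the wire and typically ~110-115MB/s in practice once TCP/IP and SMB overheads are taken off, while 10GbE is ~1,250MB/s on the wire, so a small array of spinning disks runs out of steam well before the network does.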

iSCSI would quite possibly give you a performance advantage but I doubt there'd be much in it, and as you say it comes with some inflexibility for your situation.

My setup uses 4Gb Fibre Channel to present the storage (ZFS with 6 disks, effectively RAID10) to my VMware hosts, and I have 10GbE links from each host to my main workstation (look for Mellanox ConnectX-2 EN adapters if you can, often fairly cheap). I can get around 350MB/s in a benchmark from my workstation over the 10GbE link to SMB shares on LUNs from the SAN, and copying large files isn't far off that. Even writing to a single basic disk in a host will hit the kind of speed I mentioned in the first paragraph.

The nice thing is you can then genuinely exploit the raw speed of your NAS/SAN. If you move a lot of data around regularly or do things like video editing with your material sat on your storage, it's great. And if you're anything like me then it's fun to play with too ;)
 
You must have deep pockets; setting up a 10Gbit network is not exactly cheap!

Have you thought about using dual 1Gbit adapters teamed, effectively giving you something approaching 2Gbit of bandwidth?

The other option is a one-to-many arrangement where the SAN is the one and your clients are the many. The SAN would have a dedicated NIC for each client, purely for SAN and SMB storage access.
 
I can get around 350MB/s in a benchmark from my workstation over the 10GbE link to SMB shares on LUNs from the SAN, and copying large files isn't far off that.

So just to confirm, you are able to hit 350MB/sec read/write using 10GbE and SMB? That's basically what I'm after.
The storage end will be handled by the PERC 5/i, which is more than happy to scale up to around 400-500MB/sec with enough disks thrown at it. My only issue was getting similar throughput over the LAN with SMB shares. The ~100MB/sec limit on gigabit is what's holding me back from taking the plunge.
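Once the cards are in I'll probably sanity-check the raw link with something like iperf before blaming SMB or the disks for any shortfall (address made up):

    :: On the NAS end:
    iperf -s

    :: On the main PC, against the NAS's 10GbE address, with a few parallel streams:
    iperf -c 192.168.10.1 -P 4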

I'm aware 10GbE isn't CHEAP at the moment, but slightly older/server-pulled kit can often work out pretty well.

I only need a high-bandwidth connection from my main box to the NAS. I'm planning to cut the drives in my main rig down to an SSD boot drive and maybe a WD Green or similar, with everything else on the NAS box, hence the need for higher throughput (to keep close to the RAID speeds I'm used to).

The rest of the machines in the house can use the same disks as general NAS storage (I'll be using routes to make sure traffic to/from my machine goes over the 10GbE NICs and the rest over the regular gigabit).
 
Have you thought about using dual 1Gbit adapters teamed, effectively giving you something approaching 2Gbit of bandwidth?

Teamed solutions are a bit irritating in that they firstly require specific NICs that support teaming at the driver level, and secondly still only give the speed of a single link point to point.

The speed increase is only seen in the total bandwidth available to all clients of the server. There are a few tricks to get around it, but they aren't easy to set up and are generally more hassle than they're worth.
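To put rough numbers on it: two teamed gigabit ports give about 2Gbit/sec (~230MB/sec) of aggregate bandwidth, but a single file copy is one TCP flow and gets hashed onto one member of the team, so it still tops out around the ~115MB/sec of a single link; only several machines hitting the server at once see the extra headroom.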

InfiniBand works for faster speeds but doesn't play well with SMB shares, hence the bit about iSCSI. InfiniBand is generally built for a one-to-one setup between a server and its own dedicated network storage using iSCSI initiators and targets, which basically leaves only 10GbE as a viable option.

It's not cheap, but rather than disks in each machine in the house plus maybe some NAS-based storage, I can build one box with a 10GbE and a regular gigabit connection and cover all bases. Rather than smaller disks all over the place and making sure they're all backed up, I just throw 4x 2TB drives in this box, get a decent backup scheme set up for it and forget about storage. The box will be left on, and each PC/laptop/etc I own will have the NAS drives mapped to look like a local disk (works for... 90% of things - not the regular "map as drive letter" but using a few other Windows commands). Only my box needs the uber-speed connection as I intend to put my Steam folder on the NAS :)
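As a rough illustration of that sort of trick (paths and names made up, and it's only one option of a few): a directory symlink can make a share look like a normal local folder to most software:

    :: Point D:\Steam on a client at the NAS share (example names only).
    :: Depending on the Windows version, remote link evaluation may need enabling
    :: first: fsutil behavior set SymlinkEvaluation L2R:1 R2R:1
    mklink /D D:\Steam \\nas\storage\Steam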
 
So just to confirm, you are able to hit 350MB/sec read/write using 10GbE and SMB?
Yes. That's with Windows Server 2012 and Windows 7. With Windows 8 I'd expect it to be a little faster as SMB3 is quite a bit better than 2.1.
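If you do end up on Windows 8 or Server 2012, it's easy to check what dialect a connection has actually negotiated from PowerShell (just a quick check, nothing more):

    # Run on the client while a share is in use; the Dialect column shows
    # whether the connection negotiated SMB 2.1, 3.0, etc.
    Get-SmbConnection | Select-Object ServerName, ShareName, Dialect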

I only need a high-bandwidth connection from my main box to the NAS. I'm planning to cut the drives in my main rig down to an SSD boot drive and maybe a WD Green or similar, with everything else on the NAS box, hence the need for higher throughput (to keep close to the RAID speeds I'm used to).
This is basically the same route I've gone down. I only have a 256GB SSD in my workstation. Everything apart from the OS, software installs, Steam and Premiere Pro cache data lives on the network.

A word of warning though - are you prepared for downtime on your storage? The more you move off your workstation (e.g. documents, favourites, desktop, start menu, etc etc) the more of a problem you'll have if your storage is unavailable. My workaround is to run two file servers, one as a primary that's backed by the fast storage, and one that is backed by a couple of large traditional disks. I use DFS to keep everything in sync and to handle the failover, and a great job it does too.
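For what it's worth, the namespace and replication side of that is only a handful of PowerShell commands on Server 2012. A rough sketch (the domain, server names, paths and group name below are all made up):

    # Domain-based namespace with the fast server as the main target and the
    # slow server as a second target for failover (names are examples only).
    New-DfsnRoot -Path "\\HOME\files" -TargetPath "\\FS-FAST\files" -Type DomainV2
    New-DfsnFolder -Path "\\HOME\files\data" -TargetPath "\\FS-FAST\data"
    New-DfsnFolderTarget -Path "\\HOME\files\data" -TargetPath "\\FS-SLOW\data"

    # DFS Replication keeps the two copies in sync.
    New-DfsReplicationGroup -GroupName "data"
    New-DfsReplicatedFolder -GroupName "data" -FolderName "data"
    Add-DfsrMember -GroupName "data" -ComputerName FS-FAST,FS-SLOW
    Add-DfsrConnection -GroupName "data" -SourceComputerName FS-FAST -DestinationComputerName FS-SLOW
    Set-DfsrMembership -GroupName "data" -FolderName "data" -ComputerName FS-FAST -ContentPath "D:\data" -PrimaryMember $true
    Set-DfsrMembership -GroupName "data" -FolderName "data" -ComputerName FS-SLOW -ContentPath "D:\data"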
 
Aye, storage-wise my current setup seems pretty solid. It's not run 24/7, so that might be the test for it, but I've been running 4x 1TB Spinpoint F3s in RAID 5 on a PERC 5/i hardware RAID controller for about 3 years now with almost zero issues. I had a little trouble when I added an extra couple of standalone drives, but it seems my poor old 500W PSU was being stretched a little too thin. Upgrading to a 750W took it back to flawless operation.

The card's certainly standing up well to having rather hot-running neighbours and only fairly minimal airflow. That will all be moving to the NAS box, with the drives likely replaced by 4x 2TB Reds. RAID 5 means storage downtime should be minimal, and for the odd problem I've had experience with the card's monitoring tools too (they run happily over a network). Overall I'm VERY happy the storage setup is man enough; it's just the network side of things left to crack :)

I guess you're also hinting at the obvious pain if the network link has trouble for any reason and the storage plain isn't there, or if the machine hosting the card goes pop. At least in both cases it's a RAID on a dedicated card my main PC has experience hosting, so I can just fall back on "plug it back in".

Many thanks for all the info; that's got me pretty much down to a shopping list. Cheers for all the input :) I'm probably going to abuse an HP MicroServer for the host box (though likely transplant the innards into something a little more roomy so I can fit the storage card and 10GbE NIC in there), so I'm probably looking at around £250 all told (minus new drives), which isn't SO bad. Cheap second-hand cards seem to be in the £60-80 sort of region, and the box about the same.
 