10Gbps home network, on the cheap. Sort of.

Just thought I'd update this.

Have now rebuilt my NAS with the following specs:

Silverstone TJ08B-E
Intel i5 3450S
MSI Z77MA-G45 mATX
Corsair TX550M modular PSU
HP P410 RAID card
Mellanox MHEA28-XT
1x Verbatim 64GB SSD - WHS2011 installed
6x 1TB Samsung F3 in RAID 10 (about 2.8TB usable)

I was getting about 250MB/s transfer speed between the NAS and my main PC via the 10Gb connection, which I thought was about the limit via SMB. But I followed this link:

http://windowssecrets.com/top-story/simple-change-in-settings-pumps-up-win7-networks/

Which has brought it up to about 300MB/s. Haven't checked yet, but I'm probably almost maxing out the drive throughput. I'm very happy with that anyway.
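For a rough sanity check on whether ~300MB/s really is near the array's limit, here's a back-of-envelope sketch. The per-drive figure is an assumption (~110-130MB/s sequential is typical for a Samsung F3), and it ignores controller overheads:

```python
# Rough RAID 10 throughput estimate for 6x 1TB Samsung F3 drives.
per_drive_mbps = 120   # MB/s - ASSUMED sequential rate of one Samsung F3
drives = 6

# RAID 10 with 6 drives = 3 mirrored pairs striped together.
# Sequential writes hit both sides of every mirror, so only the pairs
# contribute unique bandwidth; reads can in principle use all spindles.
pairs = drives // 2
est_write = pairs * per_drive_mbps    # write ceiling
est_read = drives * per_drive_mbps    # best-case read ceiling

print(f"estimated write ceiling: ~{est_write} MB/s")
print(f"estimated read ceiling:  ~{est_read} MB/s")
```

On those assumptions the write ceiling works out around 360MB/s, so ~300MB/s over the network would indeed be getting close to the drives rather than the link.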
 
Interesting find!

I'm quite impressed you've gotten even as high as 250MB/s with SMB and IPoIB. The best I can manage in an all-Windows setup with SMB shares is around 140MB/s.

Truth be told... I've got some proper 10GbE adapters, transceiver modules and fibre on the way. Anyone want to buy my Infiniband kit? ;) :o
 
Might start looking into this again, was hoping SMB 3.0 would be my cheap avenue to >1Gb, but MS haven't (as far as I can tell) implemented native NIC teaming in Win8 :/

First off I need to fix my server though. Storage Spaces was a letdown transfer-speed-wise (parity writes are limited to the speed of the slowest disk), but even more troubling, my PERC 5/i has dropped out a couple of times - driver related as far as I can tell.
 
The problem I have with this system is the excessive cost.
Gigabit speeds can achieve around 110MB/s. This is plenty for most HDDs and SSDs. Costs are relatively low (I bought a gigabit router for £32).
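The ~110MB/s figure follows from the gigabit line rate minus protocol overhead; a quick back-of-envelope (the overhead percentage is an assumption, real numbers vary with frame size and protocol):

```python
# Gigabit Ethernet: why real-world SMB transfers top out around 110 MB/s.
line_rate_mbit = 1000
raw_mb_per_s = line_rate_mbit / 8   # 125 MB/s with zero overhead

# Ethernet + IP + TCP headers plus SMB framing eat several percent
# of the wire - 8% is an ASSUMED ballpark here.
overhead = 0.08
usable = raw_mb_per_s * (1 - overhead)

print(f"theoretical: {raw_mb_per_s:.0f} MB/s, realistic: ~{usable:.0f} MB/s")
```

Which lands right in the 110-118MB/s range people actually see over gigabit.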

So, to increase the speeds (over gigabit), you need to fork out £100s...which is a lot.
It might be better to store the items locally in your PC, using multiple SSDs.

Of course, if you have the money...then this is not a problem.

110MB/s is not "plenty" for most HDDs and SSDs. It's about enough for most HDDs - overkill for the majority, slightly underpowered for the rest. SSDs are faster, and SSDs in a NAS will spend a lot of time waiting for your network, even at gigabit speeds.

Expensive, yes - potentially worthwhile? Also yes, depending on use case. If your NAS is used for a few files at a time and real-time performance doesn't matter too much, then you're probably not going to notice it taking ~10 seconds per GB instead of ~2, but for heavy workloads on a single machine on the network, this works well - and still leaves your gigabit connection available for all the normal traffic on your network.
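To put numbers on the seconds-per-GB comparison (a sketch using the speeds mentioned in this thread - the 300MB/s figure is the SMB-over-IPoIB result reported above):

```python
# Transfer time per gigabyte at the speeds discussed in the thread.
speeds_mb_s = {
    "gigabit (real-world)": 110,
    "10Gb link, SMB over IPoIB": 300,
    "10Gb line rate (theoretical)": 1250,
}
gb = 1000  # MB per GB (decimal)

for name, speed in speeds_mb_s.items():
    print(f"{name}: {gb / speed:.1f} s per GB")
```

So roughly 9 seconds per GB over gigabit versus 3 over the 10Gb link as achieved, with sub-second transfers possible if you ever hit line rate.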
 

As a bit of a revival, I put my tax rebate to work on this - got a PERC 6/i to replace my 5/i, after determining the dropouts were actually firmware related (tried lots of different firmware versions from both Dell and LSI, but the card kept dropping out after a couple of hours of stress testing, with errors).
Picked up a couple of QLogic 6140s and a 3M CX4 cable for £50 total, but have run into a problem - OFED supports the cards, but doesn't support Windows 8 / Server 2012 yet. I've spent all night trying to trick it into working but no joy. Guess I'll have to wait for a future release. As far as I can tell there are still developers working on the Windows version.

When it does come we should get SMB RDMA support with Win8 though, so I'll hopefully get some nice numbers. I was hoping that SMB Multichannel would do the trick with three teamed adaptors on each box, but no dice - Win8 doesn't support native teaming like 2012 does, and Multichannel doesn't work over an Intel PROSet team.
 