Home NAS Overhaul - OS Advice

Soldato · Joined 26 Apr 2004 · Posts 9,584 · Milton Keynes
Hi Guys

Hoping you guys might be able to give me a bit of advice based on your own experiences. Whilst I do have SOME NAS experience, it's really not all that much.

I'm currently in the middle of prepping a NAS upgrade for family, as the drives are so damn old I'm terrified they'll fall over at a sneeze (the oldest has ~11 years of power-on hours...). The plan is to leave the current NAS running UNTIL the new NAS is running OK and the data has been successfully migrated, so if anything goes wrong, it's not game over. Safer to duplicate first, tamper second! The old NAS was also not TERRIBLE but nothing special: an i3-2110 and 8GB of RAM, with 4 internal 2TB drives, 1 internal 4TB drive, and an external 500GB drive.
Thanks to some members here, I've managed to put together an i5-3570K, 24GB of DDR3 running at 1600MHz (the RAM is rated 1333MHz but runs fine at 1600MHz; I have literally run memtest on it for days on end, across multiple multi-day sessions, to ensure the memory isn't generating ANY errors), and 8x 3TB drives. They're used, but the oldest of them is less than a third of the age of the current oldest drive, and half of them are even younger, so they should have a few years left in them at least, especially as this time around they're the older-generation CMR WD Red NAS drives, which seem to be fairly well known for reliability. By contrast, the old drives are quite literally JBOD! I am, however, planning to pull the 4TB drive from the old NAS and just move it over, as it appears to be considerably newer than most of the other drives and likely has a bit of life left in it. It'll take me at least a few days to get through a surface/sector check on all the drives one by one just to be sure they're 100%, but I'm hoping to start the build proper at the end of the week.
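To plan the timing, the per-drive scan time is just capacity over sustained speed. A quick throwaway sketch (assuming a ballpark ~120MB/s average, which is an assumption; real drives are faster on the outer tracks and slower on the inner ones):

```python
# Rough estimate of how long a full surface scan takes per drive.
# 120 MB/s is an assumed average sustained rate, not a measured figure.

def scan_hours(capacity_tb, mb_per_s=120, passes=1):
    """Hours to read/write the whole drive `passes` times."""
    total_mb = capacity_tb * 1_000_000  # 1 TB = 1,000,000 MB (decimal, as drives are sold)
    return passes * total_mb / mb_per_s / 3600

# One read pass over a single 3TB drive:
print(round(scan_hours(3), 1))                 # ~6.9 hours
# A read pass plus a single wipe pass over all eight 3TB drives, run back to back:
print(round(scan_hours(3, passes=2) * 8, 1))   # ~111 hours
```

Which is why doing them one by one genuinely does eat most of a week.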

The old NAS OS is OpenMediaVault (albeit a much older version, I believe 3 or 4, and it seems to be on 6 now), using a combo of EXT4 and NTFS formatted drives. They are shared over the home network only, via SMB.
As I've used OMV before, my first thought is to just use it again, as at least I should be able to get my head around it if the current version hasn't changed too much since the last one I used.
However, as this machine is now rocking a quad-core and a LOT more memory, I wondered if it'd be worth looking at a more powerful solution. OMV, at least in the past, was more about the basics done well: relatively simple and relatively light. I see Rockstor and TrueNAS also seem to get pretty decent ratings?

To help narrow down, ideally what I'm looking for is:
1) Something relatively easy to administer/maintain/set up. I've been a computer hardware enthusiast for decades at this point, but have never been big on networking/NAS/Linux, so ideally easy to install and GUI all the way, with no (or absolute minimal) Linux CLI required.
2) Samba shares, so Windows has no issues accessing them (configured as full guest access, so no login/password required).
3) The best performance possible, considering it's Samba.
4) Ideally, I'd like to use the 4 youngest of the 8 newer HDDs to essentially 1:1 replace the old server's 2TB drives with larger 3TB drives, and absorb the 500GB into the additional capacity of these drives. That reduces the number of disks in use at once (for power efficiency and SATA connection limits) whilst still offering a noticeable uptick in capacity (roughly an extra 3.5TB after absorbing the 500GB that was previously on the external USB drive).
5) The remaining 4 drives would either be kept around as spares, OR, depending on how complicated it is with the chosen OS, the first 4 drives could be set up as a pool with one or more of the extras used as parity/mirror drives, to prevent data loss if either a data or a parity drive fails. That's assuming it's easy to set up.
The new machine has 8 SATA ports in total (https://www.gigabyte.com/Motherboard/GA-Z77X-D3H-rev-10#ov): 2x SATA 6Gbps and 4x SATA 3Gbps on the Intel PCH, plus 2x 6Gbps via a Marvell 88SE9172. I was thinking the 4 main 3TB drives could all go on the SATA 3Gbps ports, for a combination of tidiness and making it relatively easy to work out what's connected to what later. The 4TB could go on a PCH 6Gbps port along with the ODD, leaving the 2 Marvell 6Gbps ports (apparently this controller is actually fine, unlike some of the earlier Marvell SATA controllers) for either parity drives or wiring up the case's front external hot-swap SATA bay.
6) The 4TB from the old NAS (EXT4 formatted) would be transferred over wholesale (once the new server is working and the 2TB/500GB drives have all been duplicated to the new server's 3TB drives). I'd like this drive to just be picked up and usable by the new NAS with no reformatting etc: just pop it in, set up the Samba share on the pre-existing data, and move on, independent of whatever the solution is for the 3TB drives.
7) The plan is for the OS to live on a USB stick. That's usually not an issue for NAS stuff, but wanted to mention it just in case.
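To sanity-check the capacity maths in point 4 (decimal TB, as drives are marketed):

```python
# Old layout vs proposed layout, in decimal TB.
old = 4 * 2 + 0.5   # four 2TB internals plus the 500GB external
new = 4 * 3         # four 3TB WD Reds
print(old, new, new - old)  # 8.5 12 3.5
```

So roughly a 3.5TB uptick once the external 500GB is absorbed, before any parity or filesystem overhead.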

Thoughts?
 
Last edited:
It's a home NAS filled with the likes of their CDs, movies etc; nothing on there is crucial (or work/profession related), which is why the current drives are so old (the average is around 9 years of power-on time), and they can't afford to throw hundreds at new HDDs and a brand new rig. But as I now have the hardware to upgrade the core server platform and swap the drives for considerably newer ones aimed at reliability and NAS usage (rather than normal desktop drives), with a capacity bump on top, it just seems like a no-brainer. The current setup is on very borrowed time after all, with only the 4TB drive (the youngest) not being replaced.

Just trying to work out the best OS to go with based on the above points. Right now all the drives are standalone, but I can definitely see the benefit of an OS that'd make better use of that RAM and additional processing, and also the benefit of the disks being pooled into one share, with an additional drive or two just to give some resilience compared to what was in place before.

Regarding putting the OS on an SSD: I could do that 100% given how cheap they've gotten, but it'd tie up a SATA connection and power connector that I'd rather leave free. As the OMV/USB boot drive is really only used during boot/configuration changes/updates, and everything loads into RAM, it's not caused any issues thus far.
 
Last edited:
I did look at Unraid but largely discounted it, due to the fact it's a paid system and there are a decent number of free alternatives which get good feedback. I'd also heard that, due to the parity calculations it runs, write performance can tank a bit, which could get frustrating.

In all seriousness, the data that NAS holds isn't crucial; it'd just be a hassle to re-rip and restore the stuff they use day to day, or to plain rebuild the setup from complete scratch. The concern comes more from realising just how old the drives in use are, and the fact that sod's law says something will fail at the most inopportune time. Previously I'd thought they were considerably newer; when I saw the power-on hours, however, it was one of those moments where, even if they're not worried, I was considerably more so. That just screams one power cut or the like and the entire lot goes, and then I have the fun job of helping them get it all back up at a weekend etc.

As I've got the hardware sitting there for an overhaul, it just seems like a good time to do it :) It's just a case of whether I duplicate the system on a newer version of OMV, or go with one of the alternatives.
 
Last edited:
OK, I will add Unraid to my list then, given what you mention, and do some further research there. I'll also have to do some further research on the Marvell controller, and whether any issue there applies to Unraid or to Linux in general. Some of the older Marvell SATA controllers were plain bad, but I heard this one was OK.

I could always put the hot-swap bay and ODD on the Marvell too, leaving the primary 3TB drives, the 4TB drive, and a parity drive on the Intel PCH without needing to worry about it. But I'd prefer to leave the ODD on Intel so it can be used for boot, as it looks like the motherboard doesn't allow booting from the Marvell.

The drives I have are CMR, which is one headache resolved before it began :)

Thank you both for your feedback!

Anyone have any other reasoned suggestions/alternative opinions and why? :)
 
Last edited:
You can mount it and access it, but TBH you are better off copying the contents to the pool and then adding the drive to the pool once the data is off it; otherwise it's just an unprotected mount for no obvious reason? Marvell controllers are NFC for me… an HBA is what, £15? and provides 8 SATA ports with a pair of breakout cables. What's the end game here? If it's not choosing appropriate and stable/well-proven hardware then go for it, but I wouldn't.
Yeah, I'm thinking I'll relegate the Marvell controller to the ODD and hot-swap drive bay, as that at least won't be used for anything long term. Also, fair point regarding the pool haha

The reason I'd wanted to keep it separate is that it's a larger, faster 7200RPM drive, so it'd be better for anything with lots of little files or any latency sensitivity, as opposed to the 5400RPM Reds.
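To put a number on the spindle-speed difference: average rotational latency is half a revolution, so the gap is small in absolute terms.

```python
# Average rotational latency = time for half a revolution, in milliseconds.
def avg_latency_ms(rpm):
    return 60_000 / rpm / 2

print(round(avg_latency_ms(5400), 2))  # 5.56 ms
print(round(avg_latency_ms(7200), 2))  # 4.17 ms
```

About 1.4ms difference per seek, which matters for lots of little files but is lost in the noise for big sequential media transfers.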
 
Last edited:
I switched to the most recent OMV - it's MUCH better now - runs all my Dockers / shares / everything my ESXi bare-metal build did, from Radarr to Home Assistant and Shinobi (installed it on an NVMe drive in my ProLiant DL380 Gen9 LFF). Works like a charm with the 12 data drives passed through.
What were you using before the latest OMV?
 
Nothing in particular right now, but as I'm not going to be the one using the server, I was trying to at least give options, and I wouldn't put it past my brothers to put a Steam drive on there, for example :)
You're right, of course, that compared to an SSD even a VelociRaptor 10K drive seems downright archaic.

I'm perhaps just being overly conscious of the performance difference between the 5400RPM and 7200RPM drives, bearing in mind it'll be in a NAS anyway. :)
 
Last edited:
Guys, sorry for being a complete NAS noob here, but I was thinking that if I didn't go for Unraid, I'd likely end up with a ZFS RAID-Z1/Z2 pool (especially as I technically have 8x 3TB WD Reds and could just throw all of them in to give 7+1/6+2). However, I'm seeing very mixed messaging on whether I should expect read and write speeds close to the speed of the slowest drive, or closer to a reasonable portion of each drive added together (assuming the NIC is fast enough for it to matter). This is relevant in case I throw in a 2.5GbE NIC for 'future proofing' down the line, as I believe at least one of the machines there already has 2.5GbE, and it'd be nice to see a real-world 115MB/s out of GbE and close to 300MB/s out of 2.5GbE, at least for larger sequential files like Blu-ray backups over wired.

Which is actually correct?

This caught me off guard, as I'd thought part of the benefit of pooling was parity drives for resilience but performance similar to RAID, with portions striped across devices; and I'd seen 100MB/s out of raw Samba NTFS/EXT4 shares when I tested the old server. I certainly hadn't expected the ZFS solutions to be slower, given I'd thought that was part of the point.
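From what I can gather, both claims circulate because they describe different workloads: large sequential transfers stream from all the data disks in a RAID-Z vdev at once (so roughly (disks - parity) x per-disk speed), while small random I/O behaves more like a single disk's worth of IOPS per vdev. A rough sketch, assuming a ballpark ~150MB/s sequential per Red (an assumed figure, not measured):

```python
# Back-of-envelope RAID-Z estimates. per_disk_mbs is an assumed ballpark
# for a 3TB 5400RPM WD Red; real numbers vary across the platter.

def raidz_estimate(disks, parity, disk_tb=3, per_disk_mbs=150):
    data_disks = disks - parity
    return {
        "usable_tb": data_disks * disk_tb,       # before ZFS overhead/slop
        "seq_mbs": data_disks * per_disk_mbs,    # ideal large-block streaming
        "random_io": "roughly one disk's IOPS per vdev",
    }

print(raidz_estimate(8, 1))  # 7+1 RAID-Z1
print(raidz_estimate(8, 2))  # 6+2 RAID-Z2
```

Either way, even a single modern drive can saturate GbE, and for big sequential files a pool like this should comfortably feed 2.5GbE; it's small-file/random workloads where the single-vdev IOPS limit shows.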

Again, sorry for asking fairly nooby questions, but a lot of the boards I've been reading seem aimed more at Linux and NAS admins, as opposed to consumer-gear enthusiasts like me, and getting straight answers without a lot of command-line Linux involvement seems difficult.

This is one area where tools which primarily make things easier to configure via GUI (like Unraid) will be very helpful!

Once I've finished doing sector scans followed by ShredOS on all the drives, I may have to do some testing!

@Avalon - sorry, forgot to respond to your point: I'm aware of the issues with QLC and prefer to avoid it.
 
Last edited:
Thanks again for all the advice guys,
I'm currently in the process of running checks over all the disks prior to building the new server, via a combo of a disk sector scan and running ShredOS over them all, to make sure they complete with no errors and aren't incredibly slow.

Ironically, one of the newer drives is potentially failing the ShredOS test: the rest of the disks all sustained 100MB/s+, whilst one of the newest of the 8 is only hitting ~80MB/s. This one may get relegated, or at least retested solo in case it's a bad cable etc. Even if it doesn't error, if it has considerably slower performance I can't see that being good, even as a parity drive? (Correct me if I'm wrong; if it has a use - great!)
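To decide whether that drive is a genuine outlier rather than measurement noise, I can compare each drive's sustained rate against the batch median. A sketch with made-up numbers standing in for the real results (the drive names and speeds are hypothetical):

```python
import statistics

# Hypothetical sustained write speeds (MB/s) from a ShredOS pass, one per drive.
speeds = {"sda": 112, "sdb": 108, "sdc": 115, "sdd": 110,
          "sde": 106, "sdf": 111, "sdg": 80, "sdh": 109}

median = statistics.median(speeds.values())
# Flag anything more than 20% below the batch median.
suspect = [d for d, s in speeds.items() if s < 0.8 * median]
print(median, suspect)  # 109.5 ['sdg']
```

A slow-but-error-free drive could still technically serve as parity, but since every array write has to touch the parity drive, it would cap write speed for the whole array.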

I'm weighing up the option of finding a cheap 8-way SAS card to connect the 8 WD Red SATA drives (seems to be ~£40-50 at the moment with cables from Europe, as I'm reading a lot of stories about fakes out of China), and potentially throwing in a small pool of SATA SSDs as cache or just a speed pool. I assume Unraid and most of the modern Linux NAS OSes are TRIM-aware (as long as I use the Intel 3/6Gbps controller for those).

The OS would just be installed on a USB stick. The machine is going to have 24GB of RAM, so I'm hoping that'll be plenty even with a USB install; I know when I used OMV previously it worked fine, and the system I'm overhauling seems to be the same :)
 
Last edited:
May well do that in the future, but right now I'm working with the budget I can get allocated from the family plus a bit of my own, and there are limits to how much I can throw at it :)
 
The machine I'm working with does not have NVMe (unless I got a PCIe card, which could be hit and miss due to lack of BIOS support), but the plans I'm currently working on are:

3570K
Z77X-D3H (https://www.gigabyte.com/Motherboard/GA-Z77X-D3H-rev-10/sp#sp)
24GB DDR3 1600 (it's been memtested for literal days with no errors, so I'm comfortable, even if it's officially 1333MHz RAM; hoping Unraid will do something useful with the volume of memory...)
8x WD Red 3TB via an HBA card [any recommendations on a cool-running HBA card at current pricing? They seem to be £50 in many cases unless they're out of China]
1x 4TB Toshiba 7200RPM as Parity for above connected to Intel PCH 3Gbps

It'll be on a Corsair 550W power supply which already has 7x SATA connectors; with a few extra Molex-to-SATA adapters, that should be enough :)

The GbE LAN is apparently Atheros; if you think that'll cause issues, let me know.

If I go for the 12-drive Unraid licence, I'd be in with 1 to spare after adding 2x SSDs [this has been temporarily delayed] (I'm assuming parity and cache drives count towards the limit), and if I can consolidate drives in future, then that can be done :)

I have the hot-swap bay and the ODD connected to the less reliable 6Gbps Marvell controller, so that anything connected there is literally transient and not critical.
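For my own understanding of why a single parity drive is enough: Unraid's single parity is a plain XOR across the data drives, so any one failed drive can be rebuilt from the survivors plus parity. A toy byte-level demo (the drive contents are hypothetical):

```python
from functools import reduce

# Three "drives" holding some bytes; parity is the XOR of all of them.
drives = [b"\x10\x20\x30", b"\x01\x02\x03", b"\xaa\xbb\xcc"]
parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*drives))

# Lose drive 1, then rebuild it by XORing parity with the survivors.
survivors = [drives[0], drives[2], parity]
rebuilt = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*survivors))
print(rebuilt == drives[1])  # True
```

It also shows where the write penalty I'd heard about comes from: every data write has to update the parity drive as well.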
 
Last edited:
I've been testing the server for the last few weeks, giving it soak/run testing etc. One of the 8 WD Reds I wasn't happy with from the get-go, so it never made it into the machine.
Unfortunately it looks like another one might be on the way out, as I got a whole heap of UDMA CRC errors overnight during the scheduled parity check, and that drive has been disabled by Unraid, so I'll have to look at that today. It's also possibly a bad cable connection or a bad SAS cable. Yay. Fun. I'm also starting to think I should have stuck stickers with the serials on the backs of the drives to make finding them later easier!

Liking Unraid so far; it's self-explanatory enough that someone with decent computer knowledge can make it through, although I kinda wish some of the 'this has happened, we suggest doing this' was built more into the OS, rather than going and reading the manual etc :)

I was running one of the Reds as a parity drive, so that one will likely get sacrificed to the array, assuming the drive with the errors has indeed failed and it's not just a slightly iffy connection.
 
Last edited:
The drive is now behaving and passes SMART tests OK etc, apart from those CRC errors it got within a short space of time. I've rechecked all the connections and seating.

That particular HBA cable was a cheap one, so I'm rebuilding the array now, but I've ordered a replacement cable to rule that out.
 