Home NAS Overhaul - OS Advice

Hi Guys

Hoping you guys might be able to give me a bit of advice, based on your own experiences, as whilst I do have SOME NAS experience, it's really not ALL that much.

I'm currently in the middle of prepping a NAS upgrade for family, as the drives are so damn old I'm terrified they'll fall over at a sneeze (the oldest has ~11 years of power-on hours...). In line with this, my current plan is to leave the current NAS running UNTIL the new NAS is running OK and the data has been successfully migrated, so if anything goes wrong it's not game over. Safer to duplicate first, tamper second! The old NAS is also not TERRIBLE but nothing special: an i3-2110 and 8GB of RAM, utilising 4 internal 2TB drives, 1 internal 4TB drive, and an external 500GB drive.
Thanks to some members here I've managed to put together an i5-3570K, 24GB of DDR3 running at 1600MHz (the RAM is rated at 1333MHz but runs fine at 1600MHz; I've run memtest on it for days on end, across multiple sessions of multiple days at a time, to ensure the memory is not generating ANY errors), and 8x 3TB drives. They're used, but the oldest of them is less than a third of the age of the current oldest drive, and half of them are even younger, so they feel like they have a few years in them at least, especially as this time around they're CMR WD Red NAS drives, which seem to be fairly well known for reliability. By contrast, the old drives are quite literally JBOD! I am however planning to pull the 4TB drive from the old NAS and just move it over, as it appears to be considerably newer than most of the other drives and likely has a bit of life left in it. It'll take me at least a few days to get through a surface/sector check on all the drives one by one, just to be sure they're 100%, but I'm hoping to start the build proper at the end of the week.

The old NAS OS is OpenMediaVault (albeit a much older version, I believe 3 or 4, and it seems to be on 6 now), using a combination of EXT4 and NTFS formatted drives. They are shared over the home network only, via SMB.
As I've used OMV before, my first thought is to just use it again, as I should at least be able to get my head around it if the current version hasn't changed too much since the last version I used.
However, as this machine is now rocking a quad-core and a LOT more memory, I wondered if it might be worth looking at a more powerful solution. OMV, at least in the past, was more about the basics done well: relatively simple and relatively light. I see Rockstor and TrueNAS also seem to get pretty decent ratings?

To help narrow down, ideally what I'm looking for is:
1) Something relatively easy to administer/maintain/set up. I've been a computer hardware enthusiast for decades at this point, but have never been big on networking/NAS/Linux, so ideally easy to install and GUI all the way, with no (or an absolute minimum of) Linux CLI required.
2) Samba shares, so Windows has no issues accessing them (configured as full guest access, so no login/password required) - see the config sketch after this list.
3) Best performance possible, considering SAMBA
4) Ideally, I'd like to use the 4 youngest of the 8 newer HDDs to essentially 1:1 replace the 2TB drives of the old server with larger 3TB drives, and absorb the 500GB into the additional capacity of these drives. That reduces the number of disks in use at once (for power efficiency and SATA connection limitations) whilst still offering a noticeable uptick in capacity (roughly an extra 3TB after absorbing the 500GB previously on the external USB drive).
5) The remaining 4 drives would either be kept around as spares, OR, depending on how complicated it is and the OS used, maybe the first 4 drives could be set up as a pool with one or more of the extras used as parity/mirroring drives, to prevent data loss if either a data or a parity drive fails - assuming that is easy to set up.
The new machine has 8 SATA ports in total (https://www.gigabyte.com/Motherboard/GA-Z77X-D3H-rev-10#ov): 2x SATA 6Gbps and 4x SATA 3Gbps on the Intel PCH, plus 2x 6Gbps via a Marvell 88SE9172. I was thinking the 4 main 3TB drives could all go on the SATA 3Gbps ports, for a combination of tidiness and being able to work out what is connected to what at a later date; the 4TB could go on a PCH 6Gbps port along with the ODD; and that leaves the 2 Marvell 6Gbps ports (apparently this controller is actually fine, unlike some of the earlier Marvell SATA controllers) for either parity drives or wiring up the case's front external hotswap SATA bay.
6) The 4TB from the old NAS (EXT4 formatted) would be transferred over wholesale (once the new server is working and the 2TB/500GB drives have all been duplicated to the new server's 3TB drives). I'd like this drive to just be picked up and usable by the new NAS with no reformatting etc: just pop it in, set up the Samba share on the pre-existing data and move on, independent of whatever the solution is for the 3TB drives (again, see the sketch after this list).
7) The plan is for the OS to live on a USB stick. That usually isn't an issue for NAS stuff, but I wanted to mention it just in case.
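From what I understand, whichever OS I pick would effectively be generating something like the below under the hood for points 2 and 6; just a minimal sketch on my part, and the UUID, mount point and share name are made-up examples:

    # /etc/fstab - mount the existing EXT4 4TB drive (placeholder UUID and path)
    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /srv/media4tb  ext4  defaults,nofail  0  2

    # /etc/samba/smb.conf - full guest access, no login/password required
    [global]
        map to guest = Bad User

    [media4tb]
        path = /srv/media4tb
        browseable = yes
        read only = no
        guest ok = yes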

Thoughts?
 
Just put the OS on a small SSD. I must question your use of old drives though. Really, how much is your data worth?

I recently built myself a small NAS with 5x 20TB drives running RAID-Z2 on TrueNAS Scale. I have a 250GB SSD boot drive and a 1TB NVMe cache drive. I'm probably going to re-case it because the cache drive is overheating. Right now it's hosting my backups and video I can afford to lose.
 
It's a home NAS filled with the likes of their CDs, movies etc; nothing on there is crucial (or work/profession related), which is why the current drives are so old (the average is around 9 years of power-on time), and they can't afford to throw hundreds at new HDDs and a brand new rig. But as I now have the hardware to upgrade the core server platform and swap the drives to considerably newer ones that are aimed at reliability and NAS usage (rather than being normal desktop drives), with a capacity bump on top, it just seems like a no-brainer. The current setup is on very borrowed time after all, with only the 4TB drive, which is the youngest, not being replaced.

Just trying to work out the best OS to go with based on the above points. Right now all the drives are standalone, but I can definitely see the benefit of an OS that'd make better use of that RAM and additional processing power, and also the benefit of the disks being pooled into one share with an additional drive or two to give some resilience compared to what was in place before.

Regarding putting the OS on an SSD, I could do that 100% given how cheap they have gotten, but it'd tie up a SATA connection and a power connection that I'd rather leave free, and as the OMV/USB boot drive is really only used during boot/configuration changes/updates and everything loads into RAM, it hasn't caused any issues thus far.
 
If, as you state, it's mainly media, consider UnRAID. You get the option of dual parity, SSD cache, docker and VM support, well-proven and ongoing updates, and cheap expansion at a later date if required. What it isn't suited to is direct IO to the array. ZFS has its advantages, but nothing you have said suggests your intended usage would really benefit from ZFS or the complications it brings.

One thing that does concern me, despite saying the data is unimportant, you seem to be very concerned about preserving it. If it’s important enough to worry about, it’s important enough to back up. RAID or other redundant storage is not in itself a backup.
 
I did look at Unraid but largely discounted it because it's a paid system and there are a decent number of free alternatives that get good feedback, and I'd heard that, due to the parity calculations it runs, write performance could tank a bit, which could get frustrating.

In all seriousness, the data that NAS holds isn't crucial; it'd just be a hassle to re-rip and restore the stuff they care about using day to day, or to plain rebuild the setup from complete scratch. The concern comes more from realising just how old the drives in use are, and the fact that sod's law says something will fail at the most inopportune time. Previously I'd thought they were considerably newer; when I saw the power-on hours, however, it was one of those moments where, even if they're not worried, I was considerably more so, as that just screams one power cut or the like and the entire lot goes, and then I have the fun job of helping them get it back up at a weekend etc.

As I've got the hardware sitting there for an overhaul, it just seems like a good time to do it :) It's just a case of whether I duplicate the system on a newer version of OMV, or go with one of the alternatives.
 
Over the years, I will have spent tens of thousands of pounds on software, everything from shareware back in the Amiga days to commercial software costing thousands per licence. UnRAID is without doubt the best value-for-money OS I have purchased. Ever. One fee, free updates for what must be approaching two decades at this point, and a steadily growing feature set/active development. The only thing that pushes it to second overall is that I literally made hundreds of thousands of pounds commercially using Corel Draw. I honestly can't think of a better free or commercial option for what amounts to media storage and the usage you describe, or one that offers the same flexibility/ease of expansion. I mean, if your usage scenario is different to what you've said then feel free to say so, but based on what you describe I would suggest using the free trial as it's the logical option.
 
Unraid user of 5+ years here, having upgraded from WHS 2011. I have 3 licences: main, backup and one I use for testing, random stuff etc.

Unraid parity shouldn't be a problem; with a Xeon E3-1225 v3 (roughly an i5-4570) I have no issues saturating a Gbit link in either direction with dual parity.
The only performance issue I can see is streaming high-bitrate 4K while dumping data at Gbit speeds to the same drive.
If it's just general downloads, newsgroups, torrents etc. then unless you have a gigabit internet link you won't be pushing too much bandwidth, and if it really is an issue you can write to the cache and move it to the array overnight in idle time.

The way parity operates is pretty flexible: even if you lost a data drive and two parity drives, the data on the other drives is fully readable (in Linux), so you'd only need to recover the one failed data drive.
If you still have parity you can simply rebuild.
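If it helps to picture it, single parity is essentially a bitwise XOR across the drives at the same position, which is why any one missing drive can be recomputed from the rest. A rough Python sketch of the idea (not Unraid's actual code, and dual parity adds a second, differently calculated value):

    from functools import reduce

    # One example 'sector' byte from each data drive
    data = [0b10110010, 0b01101100, 0b11100001]

    # The parity drive holds the XOR of all data drives at that position
    parity = reduce(lambda a, b: a ^ b, data)

    # Drive 1 dies: XOR the parity with the surviving drives to rebuild it
    rebuilt = reduce(lambda a, b: a ^ b, [data[0], data[2]], parity)
    assert rebuilt == data[1]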

ZFS does have some advantages with speed and correction of bit rot, but for a home media server, who cares if a single pixel in a single frame is not quite the right shade.
Touch wood ... I'm yet to have any errors detected during a parity check on my main server. One drive threw a wobble on the backup but I just dropped in a replacement and let it rebuild.

If you have photos/media you really can't afford to lose then you should have multiple independent backups.
ZFS is no benefit if your PSU fries the whole system, memory fails and causes corruption, or it gets stolen etc.

So another recommendation for Unraid here... budget allowing, as you would need to spring for the Pro version for 12-drive support.
Simply speaking, time is money, and I've had to spend little time managing Unraid over the years, which I count as a win.



Other notes.

2 of the drive ports on your motherboard are Marvell; these can be unreliable with Unraid, and likely Linux in general.
An inexpensive PCI-E 1x card for around £10 gets you 2 ports back, or you can get 5-6 drives on a PCI-E 4x card.

You seem aware of CMR drives; SMR drives are unusable for ZFS.
 
OK, I will add Unraid to my list then given what you mention, and do some further research there. I will also have to do some further research on the Marvell controller, and whether that issue is Unraid-specific or Linux in general. Some of the older Marvell SATA controllers were plain bad, but I'd heard this one was OK.

I could always put the hotswap bay and ODD on the Marvell as well, leaving the primary 3TB drives, the 4TB drive and a parity drive on the Intel PCH without needing to worry about it, but I'd prefer to leave the ODD on Intel so it can be used for booting, as it looks like the motherboard doesn't allow booting from the Marvell.

The drives I have are CMR, which is one headache resolved before it began :)

Thank you both for your feedback!

Anyone have any other reasoned suggestions/alternative opinions and why? :)
 
I'd echo the comments on Unraid and its price. I held off for a long time on paying for it, but it was worth it in the end and I wish I had done it sooner.

I'd previously tried a Linux server build running on bare metal with ZoneMinder CCTV and ZFS shares. It worked well but was a faff to set up.

Unraid is point and click and so much easier to configure /update and manage.

I thought about TrueNAS at the time also, but opinion seemed to favour Unraid for CCTV dockers, which was a core need for me.
 
You can mount it and access it, but TBH you are better off copying the contents to the pool and then adding the drive to the pool once the data is off it; otherwise it's just an unprotected mount for no obvious reason. Marvell controllers are a no-chance for me... an HBA is, what, £15? and provides 8 SATA ports with a pair of breakout cables. What's the end game here? If it isn't choosing appropriate, stable, well-proven hardware then go for it, but I wouldn't.
 
I switched to the most recent OMV - it's MUCH better now. It runs all my dockers/shares/everything my ESXi bare-metal build did, from Radarr to Home Assistant and Shinobi (installed on an NVMe drive in my ProLiant DL380 Gen9 LFF). Works like a charm with the 12 data drives passed through.
 
Yeah, I am thinking I'll relegate the Marvell controller to the ODD and hotswap drive bay, as that at least won't be used for anything long term. Also fair point regarding the pool haha

The reason I'd wanted to keep it separate is that it's a larger, faster 7200RPM drive, so it would be better for anything with lots of little files or any latency sensitivity etc, as opposed to the 5400RPM Reds.
 
What were you using before the latest OMV?
 
What's the specific workload you have in mind that's latency sensitive? It's a spinning drive on a NAS; it doesn't matter if that's 5.4K or 7.2K, it's still going to suck. If you want low latency, then SSD/NVMe is the way to go. For example, I would run dockers and VMs off either a dedicated SSD or, more likely, a shared NVMe cache drive (but not QLC unless you understand the implications).
 
Nothing in particular right now, but as I'm not going to be the one using the server I was trying to at least give options, and I wouldn't put it past my brothers to put a Steam drive on there, for example :)
You are right of course that compared to an SSD even a VelociRaptor 10K drive seems downright archaic.

I am perhaps just being overly conscious of the performance difference between the 5400RPM and 7200RPM drives, bearing in mind it'll be in a NAS anyway. :)
 
Guys, sorry for being a complete NAS noob here, but I was thinking that if I did not go for Unraid, I would likely end up with a ZFS RAID-Z1/Z2 pool (especially as I technically have 8x 3TB WD Reds and could just throw all of them in to give 7+1/6+2). However, I am seeing very mixed messaging on whether I should expect read and write speeds close to the speed of the slowest drive, or closer to a reasonable portion of each drive added together (assuming the NIC is fast enough for it to matter). This is relevant in case I throw in a 2.5GbE NIC for 'future proofing' down the line, as I believe at least one of the machines there already has 2.5GbE, and it'd be nice to see a real-world 115MB/s out of GbE and close to 300MB/s out of 2.5GbE, at least for larger sequential files like Blu-ray backups over a wired connection.

Which is actually correct?

This caught me off guard, as I had thought part of the benefit of pooling was parity drives for resilience but performance similar to RAID, with portions striped across devices, and I had seen 100MB/s out of raw Samba NTFS/EXT4 shares when I tested the old server. I certainly wasn't expecting the ZFS solutions to be slower, given I had thought that was part of the point.
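For reference, the wire-speed figures I'm quoting come from a rough back-of-envelope like the one below (the ~92% efficiency after SMB/TCP overhead is just my assumption; real-world numbers will vary):

    # Rough usable throughput per link speed, assuming ~92% efficiency after overhead
    def usable_mb_per_s(gbit, efficiency=0.92):
        return gbit * 1000 / 8 * efficiency

    print(usable_mb_per_s(1.0))   # ~115 MB/s on GbE
    print(usable_mb_per_s(2.5))   # ~288 MB/s on 2.5GbE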

Again, sorry for asking fairly nooby questions, but a lot of the boards I've been reading seem aimed more at Linux and NAS admins, as opposed to consumer-gear enthusiast users like me, and getting straight answers without a lot of command-line Linux involvement seems difficult.

This is one area where tools which primarily make things easier to configure via GUI (like Unraid) will be very helpful!

Once I have finished doing sector scans followed by ShredOS on all the drives, I may have to do some testing!

@Avalon - sorry, forgot to respond to your point: I'm aware of the issues with QLC and prefer to avoid it.
 
No legitimate NAS distro I have used in the last decade requires you to make extensive use of the CLI/terminal. Is it easier sometimes if you know what you're doing? Yes.

You seem to worry about a lot of fringe hypothetical examples that leave me scratching my head as to why. You have, at best, 21TB of usable space with 1 parity drive using your 8x 3TB drives. Worrying about the time to copy a Blu-ray rip over seems like something you will literally be doing once, in bulk, so it's not something anyone would normally get excited about. But yes, any action involving the pool requires *all* drives to be spun up, and you are going to be limited by the slowest link in the chain.

What you absolutely should be considering is that a 5.4K WD Red 3TB is 5W (pedants: I like round numbers...), so 8 of them is 40W. Chuck the extra 7.2K 4TB in and we're getting close to 50W, which is about what the rest of a system of that vintage will draw, and 100W is just under £300/yr in power at 34p/kWh. Now my boxes tend to constantly be doing 'stuff'; even if nobody is accessing anything (rare), they'll still be working hard. With ZFS on your system doing RAR/PAR work, and perhaps some transcoding (because you're running Plex or whatever on this and we didn't allow for a GPU to do the heavy lifting), you're peaking around 130W, and that £300/yr is now £350-400, as re-silvering is also a thing.
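The maths behind those figures, if you want to plug in your own numbers (34p/kWh assumed, as above):

    # Annual electricity cost in pounds for a constant load at 34p/kWh
    def annual_cost_gbp(watts, pence_per_kwh=34):
        kwh_per_year = watts * 24 * 365 / 1000
        return kwh_per_year * pence_per_kwh / 100

    print(annual_cost_gbp(100))  # ~298 -> just under £300/yr
    print(annual_cost_gbp(130))  # ~387 -> the £350-400 range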

If you go ZFS you should consider a UPS, and also consider that ECC is best practice. Then you need an expansion plan, as clearly you will run out of space sooner rather than later: will you add another vdev? Replace existing drives one by one and expand that way? The one thing going in your favour is that you can get single drives bigger than your pool size, so that at least makes it easy to escape at that point. Neither way is cheap, quick or easy, and you won't be the first person to find that out the hard way.

Now consider that UnRAID would only spin up a single drive (5W) for media to be read, and the parity only spins up for writes (10W), and even then, because you use an NVMe cache drive, only once every few days or as required if it fills up. Expansion is as painless as adding an extra drive to the pool, or choosing a drive to remove and letting it do its thing. New drives can be bought at the best £/TB as you need them, not in bulk/matching sizes while praying the batch isn't a problem.

I know which option seems obvious to me based on what you actually say will be happening, but it's your choice. I have been down this and many related rabbit holes multiple times, and I have the half rack of servers and disk shelves to prove it; personally I would be going UnRAID, ideally with a chunk of NVMe and modern large drives for a predominantly static media collection. Anything else is going to cost you hundreds extra every year in power, and even more trying to build your way out when you expand. You can have cheap, quick or easy - you only get to pick two (at most).
 