Bit by the home server bug

I was after a project, so I cobbled together an old m-ATX board, an i3, and 8GB of RAM, bought three 4TB IronWolf drives, and stuck TrueNAS on it. I figured 8TB of usable storage would be plenty, but I've been having all kinds of fun with it and am already down to 3TB remaining.

Since I'm running RAIDZ1, that means adding another three drives, and maybe an SSD for cache.

So I'm looking at getting a Node 804 to house all of that, but I'm not sure how to get so many drives hooked up. I have four 6Gbps SATA ports, all in use, so do I spend loads on an 8-port m-ATX board, CPU and RAM? (I mean, I throw money at projects, but I like to do it a bit at a time, so that would be a bit much.) Is there a way to add more ports? I've seen some cheap PCIe expansion cards, and my PCIe x16 slot is free. No M.2 on this board, alas.

Recommendations?
 
Ah cool, I read a bit about these SAS cards. How does one know if they will work with TrueNAS (just Googling around, or are there specific features/specs to look for)?
 
Have you looked at HBAs in IT mode? Even a cheap two-port card with suitable breakout cables gives you eight usable SATA ports for next to nothing, at the expense of a PCIe slot. Somewhere also has the 804s on sale at present for a lot less than usual if you go looking.
Yeah, what is this IT mode? I was checking eBay for HBAs and saw it mentioned a lot.
 
IT mode = Initiator Target mode. Basically, it presents each disk directly to the OS, like a "dumb" SATA card, rather than applying any RAID mode.
(Which is what you want for software RAID like ZFS/TrueNAS, so that the OS knows exactly what is going on with the disks, rather than the RAID card doing its own 'magic'.)
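
If it helps, here's a rough way to sanity-check that a card in IT mode really is handing the raw disks to the OS - a small Python sketch around smartmontools (assumes smartctl is installed and will probably need root; device names differ between TrueNAS Core/FreeBSD and Linux):

```python
#!/usr/bin/env python3
"""Rough sanity check that every physical drive is visible to the OS with
its own identity, which is what an HBA in IT mode should give you.
Assumes smartmontools is installed; likely needs root."""

import subprocess

def scan_devices():
    # 'smartctl --scan' prints one device per line, e.g. "/dev/da0 -d scsi # ..."
    out = subprocess.run(["smartctl", "--scan"], capture_output=True, text=True, check=True)
    return [line.split()[0] for line in out.stdout.splitlines() if line.strip()]

def identify(dev):
    # Pull the model and serial lines out of 'smartctl -i <dev>'
    out = subprocess.run(["smartctl", "-i", dev], capture_output=True, text=True)
    fields = {}
    for line in out.stdout.splitlines():
        if ":" in line:
            key, _, val = line.partition(":")
            fields[key.strip()] = val.strip()
    model = fields.get("Device Model") or fields.get("Product")
    serial = fields.get("Serial Number") or fields.get("Serial number")
    return model, serial

if __name__ == "__main__":
    for dev in scan_devices():
        model, serial = identify(dev)
        print(f"{dev}: model={model} serial={serial}")
```

If every physical drive shows up with its own model and serial, nothing is being hidden behind a hardware RAID volume, which is what ZFS wants.
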
Okay rad, think I know what I'm doing now.

Next stop is to figure out whether I should be making bigger pools instead of having to split my media across multiple pools. I mean, I'd save a couple of TB if I just moved my system backups, I guess, but RAIDZ1 isn't like Unraid, I hear, so I can't just plop extra drives in.
 
Have you looked at Unraid? Its pooling technology is more suited to home use (generally read-orientated); it lets you use your largest one or two drives for parity, and you can throw drives of any size into the pool.
Yeah, I found that out AFTER I had gone to all the effort of setting up TrueNAS, setting up all the media shares, permissions, Radarr etc., so I'm loath to go through that again.

In retrospect, Unraid probably would have been the better choice, but I quite liked the idea of the performance benefits of RAIDZ1 while still getting redundancy built in. I just didn't expect I would do so much with it and start filling drives this quickly. So I think I'm sort of locked into buying three drives at a time now.
 
Don't do it unless you are willing to pay the extra on electricity bills. The price increases mean more people are moving away from home labs.
A) It's a bit late for that
B) I don't mind spending money on my hobbies, and this is more of a fun project to learn networking and stuff, so I'm not worried about a couple of quid extra on the electric
 
The longer you ignore it, the more difficult and expensive it becomes to fix. I say this as someone with 4x 24-bay disk shelves and two servers with 12-16 bays each as well. Also consider whether rclone/mergerfs and cloud storage may serve your needs better (generally depends on the speed of your connection).
Rclone is interesting, although mergerfs sounds a bit above my skill set.

I was looking into my options with TrueNAS and ZFS, and it seems there are a few ways I can go about expanding a pool's storage (rough capacity sums for these layouts are sketched after the list):

1. Increase disk density by swapping in disks of higher capacity, letting the vdev resilver, and repeating until I have essentially taken my three 4TB drives and swapped them out for 8TB drives. Expensive, and at some point the density gets high enough that the risk of another drive failing before the vdev finishes resilvering is quite high.

2. Add a new three-disk vdev to the pool. Risky in that if I lose one vdev, the whole pool is toast, so two drive failures in one vdev could kill the whole pool.

3. Copy the entire pool somewhere else, destroy the vdev and make a new RAIDZ2 with six drives. Means any two disks can die and the pool can still recover. Mad effort. I don't currently have any storage big enough to back the pool up to (don't worry, nothing critical - it's mostly media, and important photos are backed up off-site in the cloud).

4. Trash the whole thing and start over with Unraid. I'd have to relearn everything and set up all the stuff I set up again, but it's more flexible for adding storage piecemeal, which keeps the capex lower, or at least less chunky (I need to learn what limits there are on adding disks). Reduced performance vs RAIDZ, but I'm bound by my network upstream anywhere outside my house, so it's not really a huge deal.
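
Rough sums for what options 1-3 actually buy in usable space (a back-of-envelope sketch that ignores ZFS overheads, so treat the figures as approximate):

```python
#!/usr/bin/env python3
"""Back-of-envelope usable capacity for the layouts in options 1-3.
Ignores ZFS overheads (slop space, metadata, TiB vs TB), so the numbers
are rough."""

def raidz_usable_tb(disks_tb, parity):
    """RAIDZ usable space is roughly (number of disks - parity) * smallest disk."""
    return (len(disks_tb) - parity) * min(disks_tb)

# Option 1: resilver the existing 3x4TB RAIDZ1 onto 3x8TB disks
print("Option 1:", raidz_usable_tb([8, 8, 8], parity=1), "TB usable")

# Option 2: keep the 3x4TB RAIDZ1 vdev and add a second 3x4TB RAIDZ1 vdev
print("Option 2:", raidz_usable_tb([4, 4, 4], 1) + raidz_usable_tb([4, 4, 4], 1), "TB usable")

# Option 3: rebuild as a single 6x4TB RAIDZ2 vdev
print("Option 3:", raidz_usable_tb([4, 4, 4, 4, 4, 4], parity=2), "TB usable")
```

All three land at roughly 16TB usable; the real differences are how many drives you have to buy up front and how many failures each layout can tolerate.
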
 
With Unraid you can have a single storage pool (up to 30 devices, so 28 data, 2 parity), multiple cache pools and/or dedicated drives. So, for example, high-IO workloads or dedicated VMs, Docker etc. usually point to your cache pool/cache drive; this could be SSD or HDD, or you can pass a dedicated SSD/HDD through. Drive failure wise, even if more drives fail than you have parity drives to compensate, the remaining drives are still individually readable, so data loss would be limited to whatever was on the drives that failed above your parity count (standard warning: RAID or Unraid is not a substitute for a backup, but you seemingly already know that). Multiple storage pools are on the roadmap, and multiple cache pools are now also a thing. In terms of learning it all again, I'd suggest having a look at the YouTube stuff; it's pointless me telling you something is easy, but it's easy :D
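
To put numbers on that, here is a toy sketch of the capacity rules described above (assuming the usual Unraid constraints: usable space is simply the sum of the data drives, and each parity drive must be at least as large as the largest data drive):

```python
#!/usr/bin/env python3
"""Toy model of how an Unraid array adds up, assuming the usual rules:
each parity drive must be at least as large as the largest data drive,
and usable space is simply the sum of the data drives."""

def unraid_usable_tb(data_tb, parity_tb):
    if parity_tb and max(data_tb) > min(parity_tb):
        raise ValueError("each parity drive must be >= the largest data drive")
    return sum(data_tb)

# e.g. the three existing 4TB drives as data, one new 8TB drive as parity
print(unraid_usable_tb(data_tb=[4, 4, 4], parity_tb=[8]), "TB usable")   # -> 12 TB

# mixed sizes are fine on the data side; capacity just adds up
print(unraid_usable_tb(data_tb=[4, 4, 4, 8, 12], parity_tb=[12]), "TB usable")  # -> 32 TB
```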

In terms of advice I would offer based on where you are now, and ignoring the above: you have painted yourself into a proverbial corner. Options 1-3 are all compromises, and some of them are genuinely horrible; all three require significant investment in additional drives, and all you're doing is extending the period before you are in the same situation again. Right now your exit strategy is a single 8TB+ drive - it's cheap, reasonably quick and simple. If you go with 1-3, you are going to need multiple drives and be in exactly the same situation in xx months (if you're lucky), but with a bigger expense and a more complicated exit strategy. Again, it appears from what you describe that you chose a file system that doesn't suit your needs or your pocket; the sooner you actually deal with the issue, the cheaper and less painful it will be.

Good luck!
Thanks, that sounds like good advice. I'll read up on Unraid and will likely take the plunge. As you say, better now than later! I need to learn how parity works in Unraid (e.g. I'm not sure how 2 disks could offer parity to 28 others unless they were massive disks). I have 2.4TB left on my 8TB pool, so I think I'll take a little breather to at least "live with" my little home server for a bit before deciding if I want to invest more into it long term, in both time and money. It is a lot of fun though!
 
In terms of the easiest way out of your current situation: buy a drive larger than the data you have on the ZFS array - basically whatever has the best £/TB, but generally bigger is better - copy and verify all the data to the new drive, break your ZFS volume and add the drives to Unraid. Personally I would avoid parity at this point; it'll speed up the write process significantly, and you have a copy anyway. Once the data is over (and verified), add the new drive to Unraid as a parity drive and let Unraid build the parity. You end up with 12TB of space with one parity drive, and anything the same size or smaller than the parity drive that you add can be either storage or parity.
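
For the "copy and verify" step, something along these lines would do - a rough Python sketch (the paths are placeholders; rsync with checksumming gets you the same result):

```python
#!/usr/bin/env python3
"""Rough sketch of the 'copy and verify' step: hash every file under the
source and destination trees and report anything missing or mismatched.
Paths are placeholders - point them at the ZFS share and the new drive."""

import hashlib
from pathlib import Path

def hash_tree(root):
    root = Path(root)
    hashes = {}
    for path in sorted(root.rglob("*")):
        if path.is_file():
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1024 * 1024), b""):
                    h.update(chunk)
            hashes[path.relative_to(root)] = h.hexdigest()
    return hashes

src = hash_tree("/mnt/tank/media")     # hypothetical source path (ZFS pool)
dst = hash_tree("/mnt/new8tb/media")   # hypothetical destination path (new drive)

for rel, digest in src.items():
    if rel not in dst:
        print("MISSING:", rel)
    elif dst[rel] != digest:
        print("MISMATCH:", rel)
print("done:", len(src), "files checked")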

Thanks for the explanation. I did some reading and understand how it essentially sums rows of bits to work out what should be there, so that makes sense now. So with the maximum of two parity disks you can only have two disks fail, and frankly, if I find myself storing the kind of data I can't afford to lose on this, then I think I'll have already made a mistake.
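
For anyone following along, the "summing rows of bits" for the first parity drive is just XOR - a toy illustration of how a single lost drive gets rebuilt (the second parity drive uses a different calculation, but the principle is the same):

```python
#!/usr/bin/env python3
"""Toy illustration of single-parity reconstruction: parity is the XOR of
the corresponding bytes on every data drive, so any one missing drive can
be rebuilt from the parity plus the survivors. (Real parity works per-bit
across whole disks; this just shows the arithmetic.)"""

from functools import reduce

# Three pretend "drives", each holding a few bytes of data
drives = [bytes([1, 2, 3, 4]), bytes([10, 20, 30, 40]), bytes([7, 7, 7, 7])]

# Each parity byte is the XOR of the byte in the same position on every drive
parity = bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*drives))

# Simulate losing drive 1, then rebuild it from parity + the remaining drives
lost = 1
survivors = [d for i, d in enumerate(drives) if i != lost]
rebuilt = bytes(
    reduce(lambda a, b: a ^ b, column)
    for column in zip(parity, *survivors)
)

assert rebuilt == drives[lost]
print("rebuilt drive", lost, "->", list(rebuilt))
```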

Your suggestion above makes good sense and seems like a fairly low-effort way to migrate, while also leaving myself room to go for higher-density drives in the future as well. So if I understand it:

1. Get an 8TB disk and copy all the data from my ZFS pool to it
2. Destroy the ZFS pool
3. Set up a non-parity pool on Unraid and copy the data back from the 8TB disk, avoiding the slowdown from parity writes
4. Format the 8TB drive and add it to the Unraid pool as a parity disk
5. Ride into the sunset?

This approach is nice since I still get an extra 4TB of usable space from adding an 8TB drive, compared to getting 8TB for the price of three 4TB drives on ZFS. And then I can just pop in extra disks whenever.

So the 1TB HDD I'm currently using as my TrueNAS boot drive is no longer needed, as it seems I can (must?) boot Unraid from USB? That is not a big deal, but just checking, as it seems weird to trust a flash drive with the OS.

How bad an impact does parity writing have in comparison to ZFS? Like a 10% or 50% performance hit? I presume this is why people use cache pools to mitigate it, but what I read about the cache pool is that it just migrates the data over on a schedule rather than trying to empty itself in real time, which seems less than ideal. Right now, for instance, I am re-encoding the audio on my entire media library from DTS to AAC (thanks, Samsung), and I'm actually doing the encoding on my desktop straight from the server. While absolutely inadvisable, it is actually going really well. In an ideal world I'd have used Tdarr, but there isn't a FreeBSD version. I suspect in the future I will just rip my Blu-rays into AAC, but the idea of having a write performance hit while performing a task like that is a bit off-putting.
 
You don't need to format the drives - Unraid will do that. Yes, USB boot; I ran without a failure for something like a decade. In terms of encoding your audio, why not just use Plex/Emby/Jellyfin and do it in real time when required? Audio transcodes use next to nothing in terms of CPU. Writes are likely to be 50-60MB/s, but you can do various things to improve that or work around it.
It's an issue that affects Plex specifically when transcoding audio from DTS to AAC while direct playing video (e.g. when playing to my Samsung TV, which can't handle DTS). So a rather specific bug, but one affecting anyone with a Samsung TV running Plex Media Server after version 5713, apparently. I just figured I would use ffmpeg to batch encode the audio rather than hang about waiting for a fix. The audio works fine if I transcode the whole file, but 4K stuff is too much for my old i3; 1080p is just about okay, but only barely. Now it all works fine on direct play. I reckon I can live with 50-60MB/s, as my gigabit switch basically caps me at around 120MB/s now anyway. I mean, 50% is a bit rubbish, but the benefits outweigh the costs here as it's mostly read, not write.
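
For reference, a batch job like that boils down to something like the sketch below (the library path and audio bitrate are made up; it copies the video and subtitle streams untouched and only re-encodes the audio - worth testing on a copy of one file first):

```python
#!/usr/bin/env python3
"""Rough sketch of batch re-encoding DTS audio to AAC with ffmpeg:
copy the video (and subtitle) streams as-is and only transcode the audio.
The library path and bitrate are placeholders."""

import subprocess
from pathlib import Path

LIBRARY = Path("/mnt/media/movies")   # hypothetical library path

for src in sorted(LIBRARY.rglob("*.mkv")):
    if src.name.endswith(".aac.mkv"):
        continue                       # skip files we've already produced
    dst = src.with_name(src.stem + ".aac.mkv")
    if dst.exists():
        continue
    cmd = [
        "ffmpeg", "-n",                # -n: never overwrite an existing output
        "-i", str(src),
        "-map", "0",                   # keep every stream from the input
        "-c:v", "copy",                # don't touch the video
        "-c:s", "copy",                # don't touch the subtitles
        "-c:a", "aac", "-b:a", "640k", # re-encode the audio to AAC
        str(dst),
    ]
    print("encoding:", src.name)
    subprocess.run(cmd, check=True)
```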
 