Bit by the home server bug

I was after a project, so I cobbled together an old mATX board, an i3, and 8GB of RAM. Bought 3x 4TB IronWolf drives and stuck TrueNAS on it. I figured that 8TB of usable storage would be plenty, but I've been having all kinds of fun with it and am already down to 3TB remaining.

Since I'm running RAIDZ1, that means adding another 3 drives, and maybe an SSD for cache.

So I'm looking at getting a Node 804 to house all of that, but I'm not sure how to get so many drives hooked up. I have 4x 6Gbps SATA ports, all in use, so do I spend loads on an 8-port mATX board, CPU and RAM? (I mean, I throw money at projects, but I like to do it a bit at a time, so that would be a bit much.) Is there a way to add more ports? I've seen some cheap PCIe expansion cards, and my PCIe x16 slot is free. No M.2 on this board, alas.

Recommendations?
 
I added a Silverstone SST-ECS04 to my server, which added 8 SATA ports. There are loads of alternatives out there - especially second-hand enterprise cards. Just get one that plays nice with FreeNAS.
 
Ah cool, I read a bit about these SAS cards. How does one know if they will work with TrueNAS (just by googling around, or are there specific features/specs to look for)?
 
Perfect, thanks, that is super helpful. Saves me upgrading everything at once. Found a few of these HBA cards on eBay for like 60 quid, so that is a bargain as far as I'm concerned.
Yeah, that's about what you'll be looking at for a decent one. There are a few RAID cards you can flash to HBA mode as well if you can't find anything that fits the bill (you don't need hardware RAID, as ZFS will take care of that for you).
 
Have you looked at HBAs in IT mode? Even a cheap 2-port card with suitable breakout cables gives you 8 usable SATA ports for next to nothing, at the expense of a PCIe slot. Somewhere also has the 804s on sale at present for a lot less than usual if you go looking.
 
Yeah, what is this IT mode? I was checking the bay for HBAs and saw it mentioned a lot.
 

IT mode = Initiator Target mode. Basically it presents each disk directly to the OS, like a "dumb" SATA card, rather than applying any RAID mode.
(which is what you want for software RAID like ZFS/TrueNAS, so that the OS knows exactly what is going on with the disks, rather than the RAID card doing its own 'magic')
 
Okay rad, think I know what I'm doing now.

Next stop is to figure out if I should be making bigger pools instead of having to split my media across multiple pools. I mean, I'd save a couple of TB if I just moved my system backups, I guess, but RAIDZ1 ain't like unRAID I hear, so I can't just plop extra drives in.
 

Have you looked at unRAID? Its pooling technology is more suited to home use (generally read-orientated); it would let you use your largest drive or two largest drives for parity, but you can throw drives of any size into the pool.
 
Yeah, I found that out AFTER I had gone to all the effort of setting up TrueNAS, setting up all the media shares, permissions, Radarr etc., so I'm loath to go through that again.

In retrospect unRAID probably would have been the better choice, but I quite liked the idea of the performance benefits of RAIDZ1 while still getting redundancy built in. I just didn't expect I would do so much with it and start filling drives this quickly, so I think I'm sorta locked into buying three drives at a time now.
 
Don't do it unless you are willing to pay the extra on electricity bills. The price increases mean more people are moving away from home labs.
A) it's a bit late for that
B) I don't mind spending money on my hobbies, and this is more of a fun project to learn networking and stuff so I'm not worried about a couple quid extra on the electric
 
Yeah, I found that out AFTER I had gone to all the effort of setting up TrueNAS, setting up all the media shares, permissions, Radarr etc., so I'm loath to go through that again.

In retrospect unRAID probably would have been the better choice, but I quite liked the idea of the performance benefits of RAIDZ1 while still getting redundancy built in. I just didn't expect I would do so much with it and start filling drives this quickly, so I think I'm sorta locked into buying three drives at a time now.

The longer you ignore it, the more difficult and expensive it becomes to fix. I say this as someone with 4x 24-bay disk shelves and two servers with 12-16 bays each as well. Also consider whether rclone/mergerfs and cloud storage may serve your needs better (generally depends on the speed of your connection).
 
Rclone is interesting, although mergerfs sounds a bit above my skill set.

I was looking into my options with TrueNAS and ZFS, and it seems there are a few ways I can go about expanding the storage of a pool:

1. Increase disk density by swapping in disks of higher capacity, letting the vdev resilver and repeating until I have essentially swapped my 3x 4TB drives out for 8TB drives. Expensive, and there's a point at which density gets high enough that the risk of a second drive failing before the vdev resilvers is quite high.

2. Add a new 3-disk vdev to the pool. Risky in that if I lose one vdev the whole pool is toast, so two drive failures in a single vdev could kill the whole pool.

3. Copy the entire pool elsewhere, destroy the pool and make a new RAIDZ2 with 6 drives. Means any 2 disks can die and the pool still recovers. Mad effort. Don't currently have any storage big enough to back the pool up to (don't worry, nothing critical - mostly media, and the important photos are backed up off-site in the cloud).

4. Trash the whole thing and start over with unRAID. Have to relearn everything and set up all the stuff I set up again, but more flexible for adding storage piecemeal to keep the capex lower, or at least less chunky (need to learn what limits there are on adding disks). Reduced performance vs RAIDZ, but I'm bound by my network upstream anywhere outside my house, so not really a huge deal.
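
Back-of-the-envelope sums for options 1-3, just to keep myself honest (a quick Python sketch, ignoring ZFS overhead and TB vs TiB - the drive counts and sizes are only the ones I'm toying with, not anything decided):

Code:
# rough usable capacity of a RAIDZ vdev: (disks - parity) * disk size
def raidz_usable_tb(disks, disk_tb, parity):
    return (disks - parity) * disk_tb

opt1 = raidz_usable_tb(3, 8, 1)                              # swap in 3x 8TB, RAIDZ1 -> 16TB
opt2 = raidz_usable_tb(3, 4, 1) + raidz_usable_tb(3, 4, 1)   # add a second 3x 4TB RAIDZ1 vdev -> 16TB
opt3 = raidz_usable_tb(6, 4, 2)                              # rebuild as 6x 4TB RAIDZ2 -> 16TB
print(opt1, opt2, opt3)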
 

With UnRAID you can have a single storage pool (up to 30 devices, so 28 data plus 2 parity), multiple cache pools and/or dedicated drives. So, for example, high-IO workloads, VMs, Docker etc. usually point to your cache pool/cache drive, which could be SSD or HDD, or you can pass a dedicated SSD/HDD through.

Drive-failure wise, even if more drives fail than you have parity drives to compensate, the surviving drives are still individually readable, so data loss would be limited to whatever was on whatever failed above your parity count (standard warning: RAID or UnRAID is not a substitute for a backup, but you seemingly already know that). Multiple storage pools are on the roadmap and multiple cache pools are now also a thing.

In terms of learning it all again, I'd suggest having a look at the YouTube stuff; it's pointless me telling you something is easy, but it's easy :D

In terms of the advice I would offer based on where you are now, and ignoring the above: you have painted yourself into a proverbial corner. Options 1-3 are all compromises and some of them are genuinely horrible; all three require significant investment in additional drives, and all you're doing is extending the period before you are in the same situation again. Right now your exit strategy is a single 8TB+ drive - cheap, reasonably quick and simple. If you go with 1-3, you are going to need multiple drives and be in exactly the same situation in xx months (if you're lucky), but with a bigger expense and a more complicated exit strategy. Again, it appears from what you describe that you chose a file system that doesn't suit your needs or your pocket; the sooner you actually deal with the issue, the cheaper and less painful it will be.

Good luck!
 
Thanks, that sounds like good advice. I'll read up on unRAID and will likely take the plunge. As you say, better now than later! Need to learn how parity works in unRAID (e.g. I'm not sure how 2 disks could offer parity to 28 others unless they were massive disks). I have 2.4TB left on my 8TB pool, so I think I'll take a little breather to at least "live with" my little home server for a bit before deciding if I want to invest more into it long term, in both time and money. It is a lot of fun though!
 

UnRAID parity is simple. Your array (storage pool) is made up of data drives and optionally parity drives and follows simple rules:
You can have 0, 1 or 2 parity drives in an array; the total number of parity drives is the maximum number of drives you can lose before you begin to lose data.
Parity always uses the largest drive(s).

Conceivably 1 parity drive could protect an infinite number of data drives; it's only if more than one of them fails that you have an issue.

So let's take what you have, 3x 4TB drives. If you set up with just those, you could have:
12TB of storage, 0 parity drives.
8TB of storage, 1 parity drive.
4TB of storage, 2 parity drives.

If you add the smaller 1TB drive you mentioned, it would look like this:
13TB of storage, 0 parity drives.
9TB of storage, 1 parity drive.
5TB of storage, 2 parity drives.

If you got a great deal on a 16TB drive it would become:
29TB of storage (3x4, 1x1, 1x16), 0 parity.
13TB of storage (3x4, 1x1), 1x16 parity.
Wouldn't work - requires a second parity drive equal to the size of the largest drive.
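
(If it helps to see the rule written down: usable space is just the total of all drives minus the largest drive(s) you give up to parity. A throwaway Python sketch of the same sums - numbers only, nothing UnRAID-specific:)

Code:
# UnRAID-style sums: parity takes the largest drive(s), everything else is storage
def usable_tb(drive_sizes_tb, parity_drives):
    sizes = sorted(drive_sizes_tb, reverse=True)
    return sum(sizes[parity_drives:])   # skip the largest N drives used for parity

print(usable_tb([4, 4, 4], 1))          # 8TB storage, 1x 4TB parity
print(usable_tb([4, 4, 4, 1], 1))       # 9TB storage, 1x 4TB parity
print(usable_tb([4, 4, 4, 1, 16], 0))   # 29TB storage, no parity
print(usable_tb([4, 4, 4, 1, 16], 1))   # 13TB storage, the 16TB as parity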

In terms of the easiest way out of your current situation: buy a drive larger than the data you have on the ZFS array - basically whatever has the best £/TB, though generally bigger is better - copy and verify all the data to the new drive, break your ZFS volume and add the drives to UnRAID. Personally I would avoid parity at this point; it'll speed up the write process significantly and you have a copy anyway. Once the data is over (and verified), add the new drive to UnRAID as a parity drive and let UnRAID build the parity. You end up with 12TB of space with 1 parity drive, and anything you add later that's the same size or smaller than the parity drive can be either storage or parity.
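
For the "copy and verify" step, rsync with --checksum (or a quick hash-and-compare script) does the job. A rough Python sketch, with made-up mount points - adjust the paths to wherever the pool and the new drive actually live:

Code:
# hash every file on both sides and report anything missing or different
import hashlib, os

def sha256_of(path, chunk=1024 * 1024):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def verify_tree(src_root, dst_root):
    mismatches = []
    for dirpath, _, filenames in os.walk(src_root):
        for name in filenames:
            src = os.path.join(dirpath, name)
            dst = os.path.join(dst_root, os.path.relpath(src, src_root))
            if not os.path.exists(dst) or sha256_of(src) != sha256_of(dst):
                mismatches.append(src)
    return mismatches

print(verify_tree("/mnt/tank/media", "/mnt/newdrive/media"))   # paths are placeholders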
 
(e.g. I'm not sure how 2 disks could offer parity to 28 others unless they were massive disks)

A parity drive in unRAID essentially adds together the values of all the other drives, so by working backwards it can figure out what is missing if a drive fails.
The parity drive has to be as large as the largest data drive in the array, as it needs to hold parity for the whole of that drive.

E.g. 2TB data + 4TB data, with a 4TB parity drive = 6TB of usable data.

In terms of the data and parity, it would simplistically look something like this:

Code:
2TB drive  = 0101
4TB drive  = 10111111
4TB parity = 11121111
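
(Strictly it's a bitwise XOR rather than a straight sum - same idea, and it's also how the missing drive gets rebuilt. A tiny Python sketch with made-up values:)

Code:
# single parity is the XOR of the data drives; XOR parity with the survivors to rebuild
drive_a = 0b01010000           # made-up contents of the 2TB drive (padded out)
drive_b = 0b10111111           # made-up contents of the 4TB drive
parity  = drive_a ^ drive_b    # what the parity drive stores

rebuilt_b = parity ^ drive_a   # say drive_b dies: recompute it from parity + drive_a
print(rebuilt_b == drive_b)    # True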
 