In terms of the easiest way out of your current situation: buy a drive larger than the data you have on the ZFS array (basically whatever has the best £/TB, though generally bigger is better), copy and verify all the data to the new drive, then break your ZFS volume and add the old drives to UnRAID. Personally I would avoid parity at this point; it'll speed up the write process significantly, and you have a copy of the data anyway. Once the data is over (and verified), add the new drive to UnRAID as a parity drive and let UnRAID build parity. You end up with 12TB of space with 1 parity drive, and any drive the same size or smaller than the parity drive can later be added as either storage or parity.
Thanks for the explanation. I did some reading and now understand how it essentially sums rows of bits (XOR, i.e. addition modulo 2) to work out what should be there, so that makes sense. So with the max of 2 parity disks you can only survive 2 failed disks, and frankly, if I find myself storing the kind of data I can't afford to lose on this, then I think I'll have already made a mistake.
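For anyone following along, here's a minimal sketch of that idea in Python; the byte strings are toy stand-ins for drive contents, not anything Unraid-specific:

```python
# Minimal sketch of single-parity XOR: the parity drive stores the XOR
# of every data drive's bytes, so any one lost drive can be rebuilt by
# XOR-ing parity with the survivors. Toy byte strings stand in for drives.
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

data = [b"\x0f\xf0\xaa", b"\x33\x55\x99", b"\x01\x02\x03"]
parity = xor_blocks(data)                    # contents of the parity drive

# "Lose" drive 1, then rebuild it from parity plus the surviving drives:
rebuilt = xor_blocks([parity, data[0], data[2]])
assert rebuilt == data[1]
```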
Your suggestion above makes good sense and seems like a fairly low-effort way to migrate, while also leaving room to go for higher-density drives in the future. So, if I understand it:
1. Get an 8TB disk and copy all the data from my ZFS pool to it (verifying the copy; see the sketch after this list)
2. Destroy the ZFS pool
3. Set up a non-parity array in Unraid and copy the data back from the 8TB disk, avoiding the slowdown from parity writes
4. Format the 8TB drive and add it to the Unraid array as a parity disk
5. Ride into the sunset?
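For the verify part of step 1, something along these lines would do; a rough sketch, where /mnt/pool and /mnt/backup are placeholder mount points, not paths from this thread (rsync -c achieves much the same):

```python
# Rough sketch: hash every file under both trees and compare.
import hashlib
from pathlib import Path

def tree_hashes(root: Path) -> dict[str, str]:
    """Map each file's path (relative to root) to its SHA-256 digest."""
    hashes = {}
    for path in root.rglob("*"):
        if path.is_file():
            h = hashlib.sha256()
            with path.open("rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            hashes[str(path.relative_to(root))] = h.hexdigest()
    return hashes

src = tree_hashes(Path("/mnt/pool"))     # the ZFS pool's data
dst = tree_hashes(Path("/mnt/backup"))   # the copy on the new 8TB disk
if src == dst:
    print("Copy verified: every file matches.")
else:
    missing = set(src) ^ set(dst)                       # only on one side
    changed = {p for p in src.keys() & dst.keys() if src[p] != dst[p]}
    print("MISMATCH:", sorted(missing | changed))
```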
This approach is nice since I still get an extra 4TB of usable space from adding an 8TB drive, compared to the 8TB I get for the price of three 4TB drives on ZFS. And then I can just pop in extra disks whenever.
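To sanity-check the capacity maths (a quick sketch; the 3x4TB raidz1 layout is my assumption about the current pool):

```python
# Usable-capacity sanity check. Assumes the current pool is 3 x 4TB in
# raidz1 (one drive of redundancy); Unraid single parity only requires
# the parity drive to be at least as big as the largest data drive.
def raidz1_usable(drives_tb):
    """raidz1 keeps n-1 drives' worth of the smallest member."""
    return (len(drives_tb) - 1) * min(drives_tb)

def unraid_usable(data_tb, parity_tb):
    """Every data drive counts in full, provided parity >= max(data)."""
    assert parity_tb >= max(data_tb)
    return sum(data_tb)

print(raidz1_usable([4, 4, 4]))      # 8 TB usable today
print(unraid_usable([4, 4, 4], 8))   # 12 TB usable after migrating
```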
So the 1TB HDD I'm currently using as my TrueNAS boot drive is no longer needed, as it seems I can (must?) boot Unraid from USB? Not a big deal, but just checking, as it seems weird to trust a flash drive with the OS.
How bad is the impact of parity writing compared to ZFS? Like a 10% or 50% performance hit? I presume this is why people use cache pools to mitigate it, but from what I read, the cache pool just migrates the data over on a schedule rather than trying to empty itself in real time, which seems less than ideal. Right now, for instance, I am re-encoding the audio on my entire media library from DTS to AAC (thanks, Samsung), and I'm actually doing the encoding on my desktop straight from the server. While absolutely inadvisable, it is actually going really well. In an ideal world I'd have used Tdarr, but there isn't a FreeBSD version. I suspect in the future I will just rip my Blu-rays straight to AAC, but the idea of taking a write performance hit while doing a task like that is a bit off-putting.
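For what it's worth, a back-of-the-envelope look at where the penalty comes from, modelling Unraid's default read/modify/write parity mode; the drive speed below is an assumed figure, not a measurement:

```python
# Back-of-the-envelope model of Unraid's default "read/modify/write":
# each block written costs a read of the old data and old parity, then a
# write of both, so the data and parity drives each do a read *and* a
# write at the same spot. Figures are illustrative assumptions.
bare_write_mbps = 150          # assumed sequential write speed of one HDD

# Each involved drive alternates read/write at the same offset, so as a
# crude model it delivers at most half its sequential throughput:
parity_write_mbps = bare_write_mbps / 2

penalty = 1 - parity_write_mbps / bare_write_mbps
print(f"~{parity_write_mbps:.0f} MB/s with parity "
      f"vs ~{bare_write_mbps} MB/s bare, i.e. ~{penalty:.0%} slower")
```

So, very roughly, closer to the 50% end than the 10% end for sustained array writes, which is exactly what the cache pool exists to hide.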