Bit by the home server bug

In terms of the easiest way out of your current situation: buy a drive larger than the data you have on the ZFS array (basically whatever has the best £/TB, but generally bigger is better), copy and verify all the data to the new drive, then break your ZFS volume and add the drives to Unraid. Personally I would skip parity at this point; it'll speed up the write process significantly and you have a copy anyway. Once the data is over (and verified), add the new drive to Unraid as a parity drive and let Unraid build the parity. You end up with 12TB of space with 1 parity drive, and anything the same size or smaller than the parity drive can be added later as either storage or parity.
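If it helps, the maths works out roughly like this (a quick sketch assuming your existing 3x4TB drives plus a new 8TB drive):

    # Rough capacity comparison, assuming 3x4TB existing drives plus a new 8TB drive.
    # Unraid: the largest drive holds parity, every other drive is usable storage.
    # RAIDZ1: one drive's worth of capacity across equal-sized disks goes to parity.
    drives_tb = [4, 4, 4, 8]

    unraid_parity = max(drives_tb)                  # the 8TB drive becomes parity
    unraid_usable = sum(drives_tb) - unraid_parity  # 4 + 4 + 4 = 12TB usable
    raidz1_usable = (3 - 1) * 4                     # 3x4TB RAIDZ1 = 8TB usable

    print(f"Unraid: {unraid_usable}TB usable behind {unraid_parity}TB of parity")
    print(f"RAIDZ1: {raidz1_usable}TB usable")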

Thanks for the explanation. I did some reading and understand how it essentially sums rows of bits to work out what should be there, so that makes sense now. So with the maximum of 2 parity disks you can only survive 2 disk failures, and frankly, if I find myself storing the kind of data I can't afford to lose on this, then I think I'll have already made a mistake.
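To check my understanding, I knocked up a toy example (single parity only, which is just XOR; I gather the second parity disk uses a different calculation, and the byte values here are made up):

    # Toy single-parity example: parity is the XOR of the data disks' bytes,
    # so any one missing disk can be rebuilt from the survivors plus parity.
    disk1 = bytes([0b10110010])
    disk2 = bytes([0b01101100])
    disk3 = bytes([0b11100001])

    parity = bytes(a ^ b ^ c for a, b, c in zip(disk1, disk2, disk3))

    # Pretend disk2 died: XOR the surviving disks with parity to get it back.
    rebuilt = bytes(a ^ c ^ p for a, c, p in zip(disk1, disk3, parity))
    assert rebuilt == disk2
    print("disk2 rebuilt as", bin(rebuilt[0]))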

Your suggestion above makes good sense and seems like a fairly low-effort way to migrate, while also leaving me room to go for higher-density drives in the future. So if I understand it:

1. get an 8TB disk and copy all the data from my ZFS pool to it, then verify the copy (see the sketch after this list)
2. destroy the ZFS pool
3. set up a non-parity pool on Unraid and copy the data back from the 8TB disk, to avoid the slowdown from parity writes
4. format the 8TB drive and add it to the Unraid pool as a parity disk
5. ride into the sunset?
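Before step 2 I'll want some proof the copy actually matches, so I'll probably run something along these lines first (a rough sketch with made-up paths; hashing 8TB will take a while, but it's cheap peace of mind):

    # Hash every file under both roots and compare; paths are placeholders.
    import hashlib
    from pathlib import Path

    def hash_tree(root: Path) -> dict[str, str]:
        """Map each file's path (relative to root) to its SHA-256 digest."""
        digests = {}
        for path in sorted(root.rglob("*")):
            if path.is_file():
                h = hashlib.sha256()
                with path.open("rb") as f:
                    for chunk in iter(lambda: f.read(1024 * 1024), b""):
                        h.update(chunk)
                digests[str(path.relative_to(root))] = h.hexdigest()
        return digests

    source = hash_tree(Path("/mnt/tank/media"))     # old ZFS pool (placeholder)
    copy = hash_tree(Path("/mnt/new8tb/media"))     # new 8TB drive (placeholder)

    missing = source.keys() - copy.keys()
    mismatched = [p for p in source if p in copy and source[p] != copy[p]]
    print(f"missing: {len(missing)}, mismatched: {len(mismatched)}")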

This approach is nice since I still get an extra 4TB of usable space from adding an 8TB drive, compared to the 8TB I get for the price of three 4TB drives on ZFS. And then I can just pop in extra disks whenever.

So the 1TB HDD I'm currently using as my TrueNAS boot drive is no longer needed, as it seems I can (must?) boot Unraid from USB? Not a big deal, just checking, as it seems weird to trust a flash drive with the OS.

How bad an impact does parity writing have compared to ZFS? Like a 10% or 50% performance hit? I presume this is why people use cache pools to mitigate it, but from what I've read the cache pool just migrates the data over on a schedule rather than trying to empty itself in real time, which seems less than ideal. Right now, for instance, I'm re-encoding the audio on my entire media library from DTS to AAC (thanks Samsung), and I'm actually doing the encoding on my desktop straight from the server. While absolutely inadvisable, it's actually going really well. In an ideal world I'd have used Tdarr, but there isn't a FreeBSD version. I suspect in future I'll just rip my Blu-rays straight to AAC, but the idea of taking a write performance hit while doing a task like that is a bit off-putting.
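For what it's worth, my mental model of the cache pool is something like the sketch below: writes land on a fast drive and a scheduled job shifts them onto the parity-protected array later. Purely illustrative with made-up paths, not how Unraid actually implements its mover.

    # Illustration only: files written to a fast cache get moved to the array
    # on a schedule, so the slow parity write happens off-peak, not at write time.
    import shutil
    from pathlib import Path

    CACHE = Path("/mnt/cache/share")   # placeholder: fast SSD pool
    ARRAY = Path("/mnt/array/share")   # placeholder: parity-protected array

    def run_mover() -> None:
        """Move everything currently sitting on the cache onto the array."""
        for src in list(CACHE.rglob("*")):
            if src.is_file():
                dest = ARRAY / src.relative_to(CACHE)
                dest.parent.mkdir(parents=True, exist_ok=True)
                shutil.move(str(src), dest)   # the slow parity write happens here

    run_mover()   # in practice this would run from a nightly schedule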
 

You don't need to format the drives, Unraid will do that. Yes, USB boot; I ran off a USB stick without a failure for something like a decade. In terms of encoding your audio, why not just use Plex/Emby/Jellyfin and do it in real time when required? Audio transcodes use next to nothing in terms of CPU. Writes are likely to be 50-60MB/s, but you can do various things to improve or work around that.
 
It's an issue that affects Plex specifically when transcoding audio from DTS to AAC while direct playing the video (e.g. when playing to my Samsung TV, which can't handle DTS). So a rather specific bug, but one affecting anyone with a Samsung TV running Plex Media Server after version 5713, apparently. I just figured I'd use ffmpeg to batch encode the audio rather than hang about waiting for a fix. The audio works fine if I transcode the whole file, but 4K stuff is too much for my old i3; 1080p is just about okay, but only barely. Now it all works fine on direct play. I reckon I can live with 50-60MB/s, as my gigabit switch basically caps me at around 120MB/s anyway. I mean, 50% is a bit rubbish, but the benefits outweigh the costs here as it's mostly read, not write.
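For anyone curious, the batch job is roughly this (a sketch with made-up paths and bitrate; it copies the video and subtitles untouched, only re-encodes the audio to AAC, and writes to a new file rather than in place):

    # Batch re-encode sketch: keep video/subs as-is, convert audio to AAC via ffmpeg.
    import subprocess
    from pathlib import Path

    LIBRARY = Path("/mnt/media/movies")   # placeholder library path

    for src in sorted(LIBRARY.rglob("*.mkv")):
        dest = src.with_name(src.stem + ".aac.mkv")
        if dest.exists():
            continue   # already done on a previous run
        subprocess.run(
            [
                "ffmpeg", "-i", str(src),
                "-map", "0",                    # keep all streams
                "-c:v", "copy",                 # don't touch the video
                "-c:s", "copy",                 # keep subtitles as-is
                "-c:a", "aac", "-b:a", "640k",  # re-encode audio to AAC
                str(dest),
            ],
            check=True,
        )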
 

Smart TVs tend to be pretty crap clients tbh. A Shield is arguably the best money I have ever spent on a client, but a modern FTV/Roku isn't bad. Unfortunately, buying a high-end TV doesn't guarantee decent codec support, while buying a ‘cheap’ Hisense does.
 