FreeNAS 10 vs Xpenology

Soldato
Joined
19 Oct 2002
Posts
2,694
Location
Auckland, New Zealand
So I'm considering replacing my DSM 5.2 virtualised build with FreeNAS 10. I'm after a supported system that isn't a 'hack' to get working, and that also has bitrot protection etc. built into it.

My use cases will be general media storage as I use an alternative system for ESXi cluster.

Xpenology running DSM 5.2 has served me well, especially SHR2 with mixed-size drives... I don't need that flexibility any more, but as the system looks after my family photos I'd rather not let corruption creep into them.

Outside of copy-on-write and the checksumming you get with RAID-Z(x), does FreeNAS have any benefits or disadvantages compared to Xpenology? Memory isn't an issue and neither are vCPUs... I just want a reliable, speedy (WD Red speeds, e.g. 115MB/s+) storage system for general usage.

Thoughts?
 
Soldato
Joined
5 Nov 2011
Posts
5,356
Location
Derbyshire
I moved to unRAID for my home server's backup box. Really impressed with how clean the interface is and how easy it was to get up and running. You get a 30-day trial to mess about with, but even then a full license for a 6-drive unit is only £40 (ish). It has a lot of additional features like Docker and VM hosting, but as a "base" NAS OS it's really good.
 
Soldato
OP
Joined
19 Oct 2002
Posts
2,694
Location
Auckland, New Zealand
Thanks guys, I've used Unraid in the past along with FlexRAID. I guess ultimately I'm after a filesystem that can automatically repair damaged files from the parity drives. I know ZFS has this, but BTRFS RAID 5/6 is broken and not advised... Can Unraid do this with BTRFS without RAID 5/6, which Unraid doesn't use anyway?

Obviously Unraid only reads and writes a single drive at a time, so performance is limited in comparison to FreeNAS / Xpenology, but then as I hold my VMs on a SAN, do I really need an especially fast storage array?
 
Associate
Joined
1 Sep 2009
Posts
1,084
Can Unraid do this with BTRFS without RAID 5/6, which Unraid doesn't use anyway?

No. If bitrot protection and CoW are requirements, then ZFS is realistically your only option. Dunno why people are suggesting Unraid; it doesn't have these features or anything like them.
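
To spell out why: ZFS keeps a checksum with every block pointer, and on a mismatch it reads the redundant copy (mirror half or RAID-Z reconstruction) and rewrites the bad block. Very roughly, in toy Python rather than anything like the real implementation:

```python
import hashlib

def read_with_self_heal(copies, expected_checksum):
    """Toy sketch of a ZFS-style self-healing read, not real ZFS code.

    `copies` are the redundant versions of a block (mirror halves, or a
    RAID-Z reconstruction); `expected_checksum` lives with the parent block
    pointer, so corrupt data can't vouch for itself.
    """
    for data in copies:
        if hashlib.sha256(data).hexdigest() == expected_checksum:
            # Found a good copy: repair any siblings that fail the check
            for i, other in enumerate(copies):
                if hashlib.sha256(other).hexdigest() != expected_checksum:
                    copies[i] = data  # rewrite the corrupted copy in place
            return data
    raise IOError("all copies corrupted -- time for the backups")

good = b"family photo bytes"
checksum = hashlib.sha256(good).hexdigest()
copies = [b"family photo byteZ", good]   # first copy silently corrupted
assert read_with_self_heal(copies, checksum) == good and copies[0] == good
```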
 
Soldato
OP
Joined
19 Oct 2002
Posts
2,694
Location
Auckland, New Zealand
So I have currently decided to use tRAID (FlexRAID) as my parity-protected storage on a Windows install. I don't need fast storage for VMs; this is just for general stuff... dual parity with the ability to add additional drives and also pull them when needed. FlexRAID is also working on a striped-parity product that can do software RAID at all levels, including hybrid RAID. I tested it some time ago in an early beta build and it worked well... it just needed streamlining.

I've added a feature request for checksumming and rebuilding from parity; there is already a plugin that can checksum, so hopefully the dev can build it into the system he created.
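
To show what I mean, file-level checksumming of the sort that plugin does boils down to something like this (a rough Python sketch with made-up paths and manifest format, not the plugin's actual code). The missing piece is the step that restores a flagged file from parity instead of just reporting it:

```python
import hashlib
import json
import os

MANIFEST = "checksums.json"    # hypothetical manifest name, not the plugin's format
DATA_ROOT = r"D:\storage"      # hypothetical pooled-storage path

def sha256(path, chunk=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def build_manifest(root):
    # Record a checksum for every file under the data root
    manifest = {}
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            manifest[path] = sha256(path)
    return manifest

def verify(manifest):
    # Re-hash and flag mismatches; these are the files you'd want
    # restored from parity rather than just reported
    for path, digest in manifest.items():
        if os.path.exists(path) and sha256(path) != digest:
            print("possible bitrot:", path)

if __name__ == "__main__":
    if not os.path.exists(MANIFEST):
        with open(MANIFEST, "w") as f:
            json.dump(build_manifest(DATA_ROOT), f)
    else:
        with open(MANIFEST) as f:
            verify(json.load(f))
```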

The whole system will be slower as data isn't striped across the array like RAID 5/6, but I prefer Windows, and my Xpenology VM was messing up vSphere HA on my servers and crashing the host.
 
Associate
Joined
1 Sep 2009
Posts
1,084
So I have currently decided to use tRAID (FlexRAID) as my parity-protected storage on a Windows install.
I've used this, it's really bad. Twice I lost a disk from an array, and because a tiny metadata file somehow went 'missing' from my parity disk, I lost everything that was on the bad disk. When I logged a support case the developer basically shrugged his shoulders and had no idea.
 
Soldato
OP
Joined
19 Oct 2002
Posts
2,694
Location
Auckland, New Zealand
El Pew, what filesystem were you using on top of the NZFS disks, and were you regularly running Verify+ to compare the DRUs against the PPU? I agree that his attitude is quite often poor and his responses disappointing, but so far it's been OK (and I spent eight months or so testing 'Standards' in a hybrid RAID configuration). The interface is clunky but the tooltips have improved... I'd still rather use Xpenology, but it's crashing the management of the host, killing the vSAN and DRS; FreeNAS is something I struggle with as I don't tend to buy lots of drives at once; and Unraid, while similar to tRAID, doesn't handle Windows ACLs in a domain as well.
 
Associate
Joined
1 Sep 2009
Posts
1,084
El Pew, what filesystem were you using on top of the NZFS disks, and were you regularly running Verify+ to compare the DRUs against the PPU?
NTFS, with 3 DRUs and 1 PPU on top of tRAID; I'm not touching "Not ZFS" with a bargepole. I was running update and verify tasks every day at midnight. My theory is that I lost a drive without noticing for a day or two, but the verify ran anyway and broke the parity.

After the second time this happened I switched to ZFS on Solaris.
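
For anyone wondering why one missing metadata file was fatal: snapshot parity of the tRAID sort is essentially an XOR across the DRUs, so recovery needs both the PPU and the bookkeeping that says which blocks belong to which DRU. A toy Python illustration of the principle (nothing like FlexRAID's actual on-disk format):

```python
from functools import reduce

# Toy snapshot-parity example: three "DRUs" and one "PPU" holding their XOR
drus = [b"family photos...", b"media library...", b"linux isos......"]
size = max(len(d) for d in drus)
drus = [d.ljust(size, b"\0") for d in drus]       # parity works block by block

ppu = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*drus))

# Lose DRU 1: XOR the survivors with the parity and the data comes back
survivors = [drus[0], drus[2], ppu]
rebuilt = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*survivors))
assert rebuilt == drus[1]

# But the parity bytes only mean something alongside the bookkeeping that
# says which blocks belong to which DRU -- lose that metadata and the PPU
# is just noise, which is roughly what happened to me.
```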
 
Soldato
OP
Joined
19 Oct 2002
Posts
2,694
Location
Auckland, New Zealand
I'm using ReFS v3 from Server 2016 Essentials, so I'll see how I get on. To be honest the performance is low, but for general storage the speeds will be fine. I like the idea of ZFS, I really do, except I don't like the fact that upgrading to larger drives means buying enough replacements for the whole vdev to grow it; costly for a home system. My preference is the MDADM route, e.g. Synology Hybrid RAID, or Unraid/FlexRAID, for the flexibility of adding drives without too much trouble.
 
Soldato
OP
Joined
19 Oct 2002
Posts
2,694
Location
Auckland, New Zealand
So to give context, the system the storage runs on is a Xeon L5630 2.13GHz 4c/8t CPU, 32GB DDR3 RDIMM, 2 x LSI 9211-8i HBAs (IT mode), in a Supermicro 16-bay 3U chassis. It runs ESXi 6.5 as part of a cluster with 3 other machines. I was looking for local storage to hold my general files but not VMs; those run off local storage on the hosts themselves.

I tried FreeNAS Corral, Unraid, FlexRAID (tRAID and the alpha version of Standards), and Synology DSM 5.2 & 6.1 via the Xpenology boot loaders.

FreeNAS Corral looked so much better than 9.x and I really liked ZFS and the self-healing of files. Unfortunately I could not get my head around the concept of replacing a full vdev to increase storage capacity; e.g. moving a 3-disk RAID-Z1 vdev from 3TB to 4TB drives to gain an extra 2TB of usable space is around $700 NZD at the moment, so not cost effective (rough sums below).
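
The rough sums, for anyone interested (the per-drive price is just my local figure split three ways):

```python
# Rough sums for growing a 3-disk RAID-Z1 vdev by swapping every drive
drives = 3
old_tb, new_tb = 3, 4                    # per-drive capacity before and after
drive_price_nzd = 233                    # roughly a 4TB Red here, ~$700 NZD for three

usable_before = (drives - 1) * old_tb    # RAID-Z1 usable space = (n - 1) drives -> 6 TB
usable_after = (drives - 1) * new_tb     # -> 8 TB
extra_tb = usable_after - usable_before  # 2 TB gained
total_cost = drives * drive_price_nzd    # ~699 NZD, i.e. ~350 NZD per extra usable TB
print(extra_tb, total_cost)
```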

Unraid is always useful, but performance is lacking and I've always found Active Directory membership cumbersome and clunky.

FlexRAID's applications were native Windows, which was a big plus for integration into my home cluster, but performance was lacking, the interfaces were still poor and haven't improved, and I still don't trust it.

Synology DSM - Hybrid RAID so it's easy to expand, great interface, easy to use, NOT native to PC hardware and so hacked to work, and it supports all the network share types I need out of the box. Does not have bitrot protection, although BTRFS is moving in the right direction.

As of yesterday I migrated all the files back into DSM 6.1 using the latest loader and it works so well. It's annoying that it doesn't have bitrot protection yet, and I'm not sure how Synology will achieve that, as they use BTRFS only as the filesystem and not as the replacement for MDADM for RAID array creation, so it cannot checksum at the drive level yet (not that I would trust BTRFS in RAID 5 or 6 yet anyway!).

So back to almost square one, except this has given me the nudge to move from 5.2 to the latest version.
 
Associate
Joined
31 May 2004
Posts
1,765
Location
The 'Toon, UK, in Europe
@BlizzardX your experience echoes mine pretty much... I've been tinkering with moving to Corral/Unraid from DSM, and everything keeps bringing me back to DSM without exception.

Corral seems to be about 10% down on transfer performance for me vs DSM, and Unraid a little less. I've been testing on identical hardware too.
 
Associate
Joined
1 Sep 2009
Posts
1,084
I'm using ReFS v3 from Server 2016 Essentials, so I'll see how I get on. To be honest the performance is low, but for general storage the speeds will be fine. I like the idea of ZFS, I really do, except I don't like the fact that upgrading to larger drives means buying enough replacements for the whole vdev to grow it; costly for a home system. My preference is the MDADM route, e.g. Synology Hybrid RAID, or Unraid/FlexRAID, for the flexibility of adding drives without too much trouble.
ZFS is not designed for home use, which is why it doesn't really fit there. Growing arrays isn't something you really do in a corporate environment; you'd have multiple servers and just add an entire new vdev, or even a new server, if you needed more capacity. I always wince a little when people talk about growing a ZFS array by replacing disks individually and then resilvering. With large arrays you'll be stressing the balls off your disks for days, right at the point where you've reduced your parity protection by yanking a disk, and with HDDs hitting 10TB soon the risk of losing a second disk during the resilver becomes much greater.
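
Some back-of-the-envelope numbers on the resilver risk, using the 1-in-1e14-bits URE figure off a typical consumer drive datasheet (real-world rates are often better, and this ignores a whole second disk dying outright, so treat it as illustrative only):

```python
# Rough chance of hitting an unrecoverable read error (URE) while resilvering,
# assuming the commonly quoted consumer-drive spec of 1 URE per 1e14 bits read.
def p_ure(tb_read, ure_per_bit=1e-14):
    bits = tb_read * 1e12 * 8
    return 1 - (1 - ure_per_bit) ** bits

for drive_tb in (4, 10):
    surviving_read = drive_tb * 2    # 3-wide RAID-Z1: two full surviving disks re-read
    print(f"{drive_tb}TB drives: ~{p_ure(surviving_read):.0%} chance of a URE during resilver")
```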
 
Soldato
OP
Joined
19 Oct 2002
Posts
2,694
Location
Auckland, New Zealand
This was one of the reasons I didn't go with ZFS in the end. The whole concept is fine from a business perspective, where you have the money to buy the right capacity up front and set up as many vdevs as you want... the Linux RAID approach using MDADM is just simpler, and with DSM's ability to carve drives into partitions to get a hybrid RAID I find that much better for general use; I just wish checksumming, and automatic rebuild from parity, were built into these other filesystems.
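
For anyone who hasn't seen how the partitioning trick works, this is my rough mental model of SHR's layout, sketched in Python (an approximation for illustration only, not how Synology actually implements it):

```python
# Rough model of how SHR-1 carves mixed-size drives into slices: every distinct
# size boundary becomes its own mdadm group (RAID-5 across however many drives
# reach that size, RAID-1 when only two do), so mixed sizes still contribute.
def shr_usable_tb(drive_sizes_tb):
    sizes = sorted(drive_sizes_tb)
    usable, prev = 0, 0
    for i, size in enumerate(sizes):
        slice_tb = size - prev
        drives_in_slice = len(sizes) - i           # drives at least this big
        if slice_tb > 0 and drives_in_slice >= 2:  # need 2+ drives for redundancy
            usable += slice_tb * (drives_in_slice - 1)
        prev = size
    return usable

print(shr_usable_tb([3, 3, 4, 6]))   # 10 TB usable, vs 9 TB for plain RAID-5 capped at 3 TB
```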
 
Man of Honour
Joined
20 Sep 2006
Posts
33,883
Bump for this. I currently run a QNAP TVS-671 with 16GB of RAM and I've upgraded the CPU to an i7-4790S. I have two ~500GB SSDs doing read/write cache and 4 x 6TB WD Reds.

I'm finding that when NZBGet is downloading and unraring stuff, Plex playback begins to suffer. I've just upgraded my PC from a 5960X to an 8086K and I plan on building a new micro server using the 5960X.

What would people recommend? FreeNAS, NAS4Free, something else? I'm a pretty advanced user, so CLI doesn't bother me.
 