ZFS? Large storage solution?

Soldato
Joined
18 Feb 2006
Posts
9,581
A friend has moved his home server to Vail because he plans to use the pooled storage solution. He set it up and really likes it, but has now discovered that it only supports 8GB RAM and a single processor. He has 16GB RAM and dual processors...

Am I right in thinking ZFS pools all the HDDs into a single usable drive? If RAIDZ is used to provide redundancy (single parity), can you mix HDD sizes? I.e. like a Drobo, where, when you want to increase the size of the pool, you pull out a lower capacity drive and replace it with a larger capacity one?

Assuming I've got the right end of the stick on ZFS (which I'm not sure I have), is the best implementation of ZFS OpenSolaris? Other than storage he hosts a couple of VMs on the server (VMware), which will need to be moved from his current Windows setup.

Thanks.
 
Associate
Joined
14 Apr 2008
Posts
1,230
Location
Manchester
Yes to all of those. However, the OpenSolaris project is now dead, so you may want to look elsewhere for an implementation.

I have an old Solaris 10 installation at home for my NAS and at work the OpenSolaris stable release.

At the moment I still don't know what to do about either of them; obviously the Sun route is unsupportable, and with the rest you start running the risk of it being unstable.
 
Associate
Joined
3 Mar 2010
Posts
1,185
Location
Surrey
Yes, ZFS with FreeBSD is epic.

As long as you have 4GB of RAM or more, it eats pretty much whatever you give it.

If you mix HDD sizes it'll obviously use the size of the smallest disk (for all of them in the array), so say you have 4x 500GB and 1x 250GB, it'll treat *all* the disks as 250GB.

Replacing disks with ZFS is still a work in progress, so really the best way to currently do it without compromising the array is to back up the data and re-create the array.
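Rough sketch of what that means in practice - just an illustration, the device names (ada1-ada5) are made up and you'd use whatever your disks actually show up as:

  # single-parity raidz from four 500GB disks and one 250GB disk -
  # every member gets treated as 250GB, the size of the smallest disk
  zpool create tank raidz ada1 ada2 ada3 ada4 ada5

  # see how much space the pool actually ends up with
  zpool list tank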
 
Soldato
OP
Joined
18 Feb 2006
Posts
9,581
Yes, ZFS with FreeBSD is epic.

As long as you have 4GB of RAM or more, it eats pretty much whatever you give it.

If you mix HDD sizes it'll obviously use the size of the smallest disk (for all of them in the array), so say you have 4x 500GB and 1x 250GB, it'll treat *all* the disks as 250GB.

Replacing disks with ZFS is still a work in progress, so really the best way to currently do it without compromising the array is to back up the data and re-create the array.

Ah, so you can't really mix and match HDDs like a Drobo can, where you get all the usable space, whether using RAIDZ or not? I.e. 2TB + 2TB + 2TB + 1TB + 1TB = 8TB, or 6TB with parity.

He currently has two copies of all the data but wants a single accessible 'drive' with everything on it, instead of having, for example, 'videos 1' and 'videos 2' spread over two 2TB disks. If that makes sense?
 
Associate
Joined
17 Sep 2008
Posts
1,729
Ah, so you can't really mix and match HDDs like a Drobo can, where you get all the usable space, whether using RAIDZ or not? I.e. 2TB + 2TB + 2TB + 1TB + 1TB = 8TB, or 6TB with parity.
Apparently it's doable (the example is based on FreeNAS, but I guess it would work with FreeBSD or Solaris), but obviously it helps if you're a fully qualified geek. :D

It sounds like Vail fits the bill for what your friend wants, is there any reason he needs dual processor/16GB performance for a home server?
 
Soldato
OP
Joined
18 Feb 2006
Posts
9,581
Looks complicated. :p

Vail was perfect except for the CPU and memory limitations. The memory is there for running virtual machines.

To be honest I can see him getting a Drobo before the end of the year.
 
Associate
Joined
27 Oct 2002
Posts
897
I found the FreeNAS ZFS implementation to be quite flaky - I got my pool all set up and everything ran fine for a few days (testing), then it completely lost the config and couldn't reimport the pool.

The downside of using FreeNAS or FreeBSD is that they are many, many versions behind the latest and greatest in OpenSolaris.

I am currently running OpenSolaris b134 and it contains lots of features and bug fixes that I think make it worth going with the dev build. I've been running a 4TB pool for 2 months now and it's been perfectly reliable.

ZFS is an amazing filesystem - if it wasn't for the messing about with licences and the shutting down of the OpenSolaris project, it would be perfect. My current plan is to run OpenSolaris b134 until Solaris 11 is released - it should then have a new enough version of ZFS to import my current filesystem.
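If you want to check whether a given host can take the pool, comparing the pool version with what the host supports is easy (pool name here is from my setup):

  # on-disk version the pool is currently at
  zpool get version tank

  # list of ZFS pool versions this host understands
  zpool upgrade -v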

Regarding using disks of different sizes in one pool - there are various ways of doing it that give you various benefits. The link above gives you 100% of the space but at the cost of it being a nightmare (even for massive geeks) to manage/upgrade/recover. The solution I've gone with is to have two raidz groups within one pool, as follows:

  tank          ONLINE  0 0 0
    raidz1-0    ONLINE  0 0 0
      c4t9d0    ONLINE  0 0 0
      c4t5d0    ONLINE  0 0 0
      c4t8d0    ONLINE  0 0 0
      c4t7d0    ONLINE  0 0 0
    raidz1-1    ONLINE  0 0 0
      c4t12d0   ONLINE  0 0 0
      c4t4d0    ONLINE  0 0 0
      c4t11d0   ONLINE  0 0 0
      c4t10d0   ONLINE  0 0 0

The original setup was 4 x 300GB disks in raidz1-0 (first single-parity group) and 4 x 1TB disks in raidz1-1 (second single-parity group). This gave ((4-1)*300) + ((4-1)*1000) = 900 + 3000 = 3900GB of usable space (roughly - you don't lose exactly one disk, but it's close enough).
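For anyone wondering, building that layout from scratch is just two commands (these are my device names, yours will obviously differ):

  # first single-parity group from the four 300GB disks
  zpool create tank raidz c4t9d0 c4t5d0 c4t8d0 c4t7d0

  # second single-parity group from the four 1TB disks, added as another top-level group
  zpool add tank raidz c4t12d0 c4t4d0 c4t11d0 c4t10d0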

I've now started replacing the 300GB disks (mainly because they are old and loud) with 1TB disks - when all four are replaced, the first raidz will be able to use the full size and the pool will become around 6TB usable.
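Each swap is a single replace command (the new device name below is a placeholder); on builds that have the autoexpand property you'll want it on before the pool will actually grow once the last small disk is gone:

  # swap one of the old 300GB disks for a new 1TB disk
  zpool replace tank c4t9d0 c4t13d0

  # let the pool expand automatically once a whole group has been upsized
  zpool set autoexpand=on tank

  # keep an eye on the resilver
  zpool status tank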

I've got two choices for future upgrades from here - either replace the four 1TB disks with 2TB+ disks, or add additional mirror pairs (RAID1) to the pool. The only thing you have to be aware of before you start is that you cannot, no matter what you do, remove groups from the pool - if you accidentally add a single disk as another top-level group you are buggered: the only way out is to trash the array. This is the one downside at the moment with ZFS, and it's unlikely to be fixed any time soon because enterprise customers don't care about this limitation - they tend to add dozens of disks at a time; they don't need to remove a disk to put another in, they just add a bunch more.
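To make that concrete (the extra device names are invented): this is the command that bites you versus the one you actually want. zpool does complain about the mismatched redundancy, but -f overrides it and then there's no going back:

  # DANGEROUS: adds the disk as a new non-redundant top-level group -
  # it can never be removed without destroying the whole pool
  zpool add -f tank c4t13d0

  # what you normally want instead: add a whole new redundant group
  zpool add tank raidz c4t13d0 c4t14d0 c4t15d0 c4t16d0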

Compared to the hardware RAID I had before, I love ZFS - it's incredibly fast both reading and writing (280MB/s average copying from one filesystem in the pool to another, and only limited by the gigabit network when reading/writing across the network), and replacing failed disks and rebuilding the array is trivially easy (like Drobo, really). I haven't got hot-swap bays so I need to shut down to swap disks, but if you did it would work 90% the same as a Drobo - pull out the smallest or failed disk, put the new one in, find out what device name the new drive has been allocated, then zpool replace <pool> <old device> <new device> and off it will go rebuilding the array. It is also much quicker at rebuilding than my hardware array ever was. It took about 2 hours to resilver 940GB when I replaced one of the 1TB drives.

By comparison the Drobo is expensive, slow and, worst of all, totally inscrutable. There have been numerous reports of Drobos just losing all the data one day with no apparent cause/reason - if this happens you've got no way to recover at all. One of the handy things with ZFS is that all the config for the pool is contained on the disks in the pool - if my machine dies, all I need to do is plug the drives into another system, boot a live CD with a new enough version of ZFS, and do zpool import and it should pick it all up - even if some of the disks are missing, it is possible to import as long as enough parity data exists.
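The recovery on the other machine is roughly this (pool name from my setup; the live CD just needs a ZFS version at least as new as the pool's):

  # show any pools found on attached disks that aren't imported yet
  zpool import

  # import by name; -f is needed if the pool wasn't cleanly exported from the dead box
  zpool import -f tank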

What Oracle have done to OpenSolaris is very annoying, but the latest dev build (134) seems very stable and I don't see any reason to run away from it just because it's the last of its kind.


Mmmmmm, rambling...
 