ZFS Newbie - Array Expansion?

I plan to use FreeNAS and plan to purchase a 16 bay server case.

My issue is of expansion. It is too much of an outlay in terms of cost to purchase 16 drives in one go. So, I wish to expand the ZFS array over time.

I understand that you cannot add single drives to an existing RAIDZ vdev; you would need to destroy the pool and then recreate it with the newly installed disks. This is very annoying and quite a limitation. I understand that the filesystem was not really intended for home use, but you'd have thought it would have been adapted to include this feature by now. :(

I've read that it is possible to expand the pool by adding the same number of disks again as a new RAID set, I believe? I'm having a really hard time tracking down information on exactly what to do and how to do it, which I find a bit crazy. Has anyone got any experience, or can you point me to a guide? I need to find out whether this is possible and plan it properly before I commit money to the hardware required.

Thanks for any help
 
Do you have to use FreeNAS? Is this going to be a production-type environment? If it's for some kind of madly over-specced home server, you could use XPEnology, as SHR allows you to stick new drives in and add them at any time.

I'm quite surprised zfs doesn't allow you to do similar if I'm honest.


Basically I want some huge storage with some kind of redundancy. I don't mind too much about which OS. I looked into SHR; it doesn't offer the same level of protection against data corruption as ZFS does.

I've been bouncing around all kinds of filesystems and software RAID solutions. FlexRAID is quite tempting as it runs on top of Windows and is very flexible, though writes are slow.

A zfs "device" is called a pool and a pool is made up of vdevs.


For example, you could start with 3 drives in a RAIDZ1 configuration.

We'll assume 2TB drives, so this gives 4TB usable with a single drive of parity.

Once this starts to fill up (ideally at around 75% or less, although that's not critical), you could then add another RAIDZ vdev to the existing pool.

e: my personal setup currently comprises a pool containing a 3x2TB RAIDZ1 vdev and a 2x1TB mirror. That gives me 5TB of usable space and tolerates one drive failure per vdev.
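On the command line, that grow-by-vdev approach looks roughly like this. This is a sketch, not something to paste blindly: the pool name `tank` and the `ada*` device names are placeholder assumptions, and the commands need a live FreeBSD/FreeNAS box with those disks attached.

```shell
# Create the initial pool from three 2TB drives as a single RAIDZ1 vdev
# (pool name "tank" and the ada* device names are placeholders).
zpool create tank raidz1 ada0 ada1 ada2

# Later, once the pool is filling up, grow it by adding a second vdev --
# here a two-drive mirror, matching the setup described above.
zpool add tank mirror ada3 ada4

# Check the layout: the pool now stripes new writes across both vdevs.
zpool status tank
```

Note that `zpool add` is effectively one-way: on current FreeNAS you can't remove a data vdev from the pool again, so it's worth double-checking the command before running it.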

I see, that's quite helpful, thanks.

So... could I start with 4x3TB drives and place them in a RAIDZ2, then at a later date add another 4x3TB drives as a second vdev, also set to RAIDZ2? And do this another two times, eventually filling the 16-bay case, for a total of four vdevs all configured as RAIDZ2?

How is the data treated and presented when you have multiple vdevs? Do they all get pooled together? For instance, with a 'Music' folder share, would the contents of that folder be spread across all the vdevs/drives?
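As a quick sanity check on the capacity maths for that layout (a back-of-envelope sketch that ignores filesystem overhead and the TB/TiB difference):

```shell
# Four RAIDZ2 vdevs, each built from 4 x 3TB drives.
# RAIDZ2 loses two drives' worth of space per vdev to parity.
drives_per_vdev=4
parity_drives=2
drive_tb=3
vdevs=4

usable=$(( vdevs * (drives_per_vdev - parity_drives) * drive_tb ))
raw=$(( vdevs * drives_per_vdev * drive_tb ))
echo "Usable space: ${usable}TB of ${raw}TB raw"
```

So fully populated, that's 24TB usable out of 48TB raw, with half the drives spent on parity. Wider RAIDZ2 vdevs (e.g. 8 drives each) would waste less on parity, but then you'd have to buy 8 drives per expansion step.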
 
This all sounds good. I don't mind buying four drives at a time if they can all be configured as RAIDZ2.

Are there any disadvantages to doing this? Having so many vdevs?
 
Just a quick one: you could use mirrors instead of RAIDZ2, which I think are slightly lighter on resources.


None whatsoever that I'm aware of.

However, there are some massive performance benefits. The more vdevs your data is spread across, the better the access times and read speeds.

The only final thing that hasn't been touched upon is RAM.

ZFS can be RAM-intensive, and the rule of thumb is 1GB of RAM per 1TB of disk.


There are also some benchmark figures for a 6-drive pool configured as 3 vdevs of 2 mirrored drives each:

http://forums.overclockers.co.uk/showthread.php?t=18625373&highlight=startername_shad

With mirroring, I thought that would just mirror the data onto the additional drives I add, and not give me the redundancy or drive-space efficiency of RAIDZ2? Maybe I'm confused here.

I didn't think I'd need that much RAM; I thought 16GB would kinda cover it all :D. Hmm... this adds a new spin on things, as I'm planning on eventually having 48TB of raw storage.
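Putting numbers on that rule of thumb (a rough sketch; the 1GB-per-TB figure is a guideline rather than a hard requirement, and matters far more if deduplication is enabled):

```shell
# Rule of thumb from above: ~1GB of RAM per 1TB of raw disk.
raw_tb=48            # 16 bays x 3TB drives, fully populated
gb_per_tb=1
suggested_ram_gb=$(( raw_tb * gb_per_tb ))
echo "Suggested RAM for ${raw_tb}TB raw: ${suggested_ram_gb}GB"
```

So by the letter of the guideline the full build points at ~48GB, triple the 16GB I'd budgeted, although since the pool will be built up gradually the RAM could be grown over time too.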
 