ZFS server

I'm trying to figure out the best way to configure the drives in a DIY ZFS server.

It's for large CAD & graphics files and for searching fairly large photos and PDFs, for up to 10 users.

The server:
Supermicro X8SIL-F, Xeon X3450, 32GB ECC RDIMM
The case can fit 9 standard 3.5" HDDs.

The RAID controller is an IBM M1015 flashed to LSI 9240-8i firmware in HBA mode. It has 8 ports, and I have passed the whole controller through from ESXi 5 to OI + napp-it; the ZFS RAID10 pool is presented back to ESXi as an NFS datastore.

I'm not sure if I can make drives on the mainboard SATA ports accessible to OI/ZFS for use as cache devices; I need to look into this.

So my intended setup is like this:
DATA: 6x 3TB WD Reds (RAID10 for performance and easier upgrades)
L2ARC: 1x 120GB Vertex 3 SSD
ZIL: 2x 60GB Vertex 3 SSDs in RAID1
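Very roughly, creating that would look something like this from OI (a sketch only - 'tank' and the c#t#d# device names are placeholders for whatever the system actually enumerates):

    # three striped mirror pairs - the 'RAID10' layout
    zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0
    # mirrored ZIL (log) plus a single L2ARC (cache) device
    zpool add tank log mirror c1t6d0 c1t7d0
    zpool add tank cache c1t8d0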

A. But this is 9 drives, and I only have 8 ports on my IBM M1015.
- Can I use the mainboard's SATA ports for the ZIL?
- or do I just lose the redundancy on the ZIL?
- or partition the L2ARC SSD and use half of it as the ZIL mirror?
- or maybe just get another IBM M1015?

Other drives:
ESXi 5: 1x USB stick
OSes (such as Windows SBS 2011 Standard & WHS): 2x 500GB WD Blue in RAID1
ISOs etc: 1x 250GB Seagate

B. Would I see a noticeable improvement in performance swapping the 2x 500GB for something else?
- Should I (and can I) use the ZFS datastore for the OSes?
- or maybe use 2x 240GB SSDs? That's probably another £400, so I would rather not if it's unnecessary.
 
- You can use your mainboard SATA ports for the ZIL, but ideally you want to get another cheap RAID card (SIL3114) for the datastore so that you can pass it through entirely.

You don't use 'RAID10' in ZFS, you use RAIDZx, i.e. RAIDZ1 gives you one drive of parity, RAIDZ2 gives you two drives of parity. For this much data I recommend RAIDZ2.
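For comparison, the RAIDZ2 version of a 6-drive pool is a single vdev (sketch, placeholder pool/device names again):

    zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

i.e. four drives' worth of usable space, and any two drives can fail.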

- You can't use the ZFS datastore for the OSes unless you present the storage back to ESXi using iSCSI or something - added complexity.
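(If you did go the iSCSI route, on OI that means a COMSTAR zvol - roughly this, with made-up names and the LU GUID taken from sbdadm's output:

    zfs create -V 500G tank/esxi
    sbdadm create-lu /dev/zvol/rdsk/tank/esxi
    stmfadm add-view <GUID-from-sbdadm>
    itadm create-target

- which is exactly the added complexity I mean.)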

You don't need SSDs for datastores.
 
Thanks dLockers,
Will try and get the SIL3114; a quick Google search shows others have successfully passed it through.

In another server (after following a couple of guides) I have presented the ZFS store back to ESXi over NFS using a virtual 10-gigabit NIC, and it appears as a single large datastore.
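For anyone following along, the share-back on the OI side is just a dataset property (sketch - 'tank/nfs' is a made-up dataset name):

    zfs create tank/nfs
    zfs set sharenfs=on tank/nfs

and then that export gets mounted in ESXi as an NFS datastore over the internal vSwitch.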

To create RAID10 in ZFS, I mirrored three pairs of 2TB drives and then striped the pairs to make a 6TB store. Apparently RAIDZ2 has slightly higher redundancy than two-way mirrors but lower performance, and RAIDZ is also very slow to resilver after a drive failure or a storage upgrade. Basically, as I understand it, I have traded drive space for performance.
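To put numbers on that trade for the planned 6x 3TB: striped mirrors give 3 x 3TB = 9TB usable and survive one failure per mirror pair, whereas RAIDZ2 would give (6 - 2) x 3TB = 12TB usable and survive any two failures.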

I could install OpenIndiana on the RAID1 HDDs and then put the other OSes on the NFS datastore?

What I was wondering is: will the server OSes perform noticeably better on SSDs, compared with HDDs or the ZFS datastore?
 
I wouldn't pass through the SIL3114 if you have a decent mobo; I'd pass through the onboard one.

To be honest, I doubt the performance of SSD datastores would be noticeable. The OS datastore is rarely used for read/write operations apart from copying.

Might be better to ask an expert on HardForum (the napp-it creator lives there).
 
Thinking about it, if I am to get another controller, I'd probably just get another IBM M1015 for convenience, as I know how to get it working.

For the ZIL, I have 2 options:
2x 60GB Vertex 3 MLC in RAID1, which requires the second IBM M1015
1x 24GB Intel 313 SLC NAND (the 20GB model isn't in stock anywhere), no additional controller required

I'll not get SSDs for the VMs then. I'll try to use the ZFS datastore for the VMs, except for the OpenIndiana VM, which will go on 2x Vertex 3 60GB SSDs in RAID1 (spares). I'll stick a spare 3.5" HDD in the final space for ISOs etc.
 
Definitely get an SLC drive for the ZIL.

You could try running the system without ZIL initially to see if you actually need one?
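That's easy to do, because a log device can be added to (or removed from) a live pool, so there's no rebuild if you bolt one on later (sketch, placeholder names):

    zpool add tank log mirror c2t0d0 c2t1d0
    # remove it again by the vdev name zpool status shows, e.g.:
    zpool remove tank mirror-1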

Having a separate ZIL massively speeds up performance in our experience.

Put in as much RAM as you can afford as well. One thing to bear in mind with the cache devices is that the system will use 1-2% of RAM to 'map' the L2ARC.
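Taking that 1-2% figure at face value: with 32GB of RAM that works out at roughly 320-640MB of ARC headers to index a full 120GB L2ARC - tolerable, but worth counting when carving RAM up between VMs.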
 
"For drives in ZIL you need RAID. Otherwise your array can easily break."
I was hoping a more reliable SLC drive with a UPS would be sufficient; I'm not sure what the damage would be if the writing of files didn't complete - during saving files, backing up, OS activity...

"You could try running the system without ZIL initially to see if you actually need one?"
It shouldn't affect the rest of the server build; I can just upgrade a week later, so it's worth trying without one first.

"Definitely get an SLC drive for the ZIL."
The price starts to creep up:
1x SLC is about £100
2x MLC with a controller is about £180, as I already have one 60GB
2x SLC with a controller is about £350
What's the advantage of SLC RAID1 vs MLC RAID1?

"Having a separate ZIL massively speeds up performance in our experience."
I read somewhere the capacity should be 10x the MB/sec, so I assume 24GB would be sufficient?
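By that rule of thumb, even a saturated gigabit link (~110MB/s of sync writes) only needs about 10 x 110MB ≈ 1.1GB of ZIL, since the ZIL only holds the last few seconds of writes before they're flushed to the pool - so 24GB should be plenty.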

"Put in as much RAM as you can afford as well. One thing to bear in mind with the cache devices is that the system will use 1-2% of RAM to 'map' the L2ARC."
I have maxed the motherboard out at 32GB, and I need to balance this against the other VMs. Will try 19GB for the storage VM with the 6x 3TB to start with.


Thanks for your help, guys!
 