VM storage performance? Perfmon query.

Double parity RAID is RAID-Z2, essentially RAID 6. I take it you're using the web interface; I'm not too familiar with it, but you should be able to remove the array and create new ones by selecting disks and then the type of array you'd like, obviously destroying the data in the process!
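For reference, the same thing from the command line would look something like this (pool name and disk IDs are just placeholders, and it destroys the data just the same):

Code:
# destroy the existing pool - irreversible, all data is lost
zpool destroy tank

# recreate it as a double-parity RAID-Z2 vdev across six example disks
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

# check the new layout
zpool status tank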
 
RAID-Z2 has a big write performance overhead, especially with lots of disks, whereas mirroring gets better the more disks you have (as far as I'm aware). More RAM is also good for Solaris/ZFS because of the ARC. :)
 
Yes - if you have 48 500GB drives in a RAID10-style setup (which is what striping across ZFS mirrors gives you), you'll have half the raw disk space available.
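For comparison, the RAID10-style layout in ZFS is a pool striped across two-way mirror vdevs, something like this (disk names are placeholders):

Code:
# stripe across mirrored pairs - usable space is half the raw capacity
zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0

# confirm the size
zpool list tank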
 
Boss won't like that.

He bought this because of the £/MB. The initial plan I had was to get another SAS FC array for the virtual machine clustering, but for the price of 15TB he managed to get 48TB of Sun Storage.

If it doesn't work properly then so be it...
 
Random aside, I'm using a Solaris host with ZFS as an NFS store for my 3-host VMware environment. The Solaris box currently has 4GB RAM and is hosting 4 450GB 15k SAS drives mirrored. People say that "more memory is better" - how much performance change am I likely to see if I double it to 8GB?

I know this question varies hugely depending on workload, but I'm trying to get a feel for whether it'd be "only a little bit" or "a huuuuuuuuuuuuuge amount". :)
 
People say that "more memory is better" - how much performance change am I likely to see if I double it to 8GB?

ZFS uses RAM as a cache for the file system. The more RAM you have, the more of your data can live in RAM. The data held in RAM isn't "dirty" (unlike the ZIL), so it can be flushed at any time without worrying about data integrity.

In my experience there are only very few circumstances where more RAM won't help ZFS. If you ever read any of the lead Solaris/ZFS developers' blogs, you'll see all their test systems have 32GB of RAM or more.
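If you want a quick look at what the ARC is doing without any extra scripts, the arcstats kstats are built into Solaris, roughly like this:

Code:
# current ARC size in bytes, plus the adaptive target and hard maximum
kstat -p zfs:0:arcstats:size zfs:0:arcstats:c zfs:0:arcstats:c_max

# hit and miss counters - compare the two for a rough hit ratio
kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses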
 
Oh, and I just noticed you're using NFS with ZFS - how are you finding that? You might find a very fast SSD or NVRAM device for your ZFS Intent Log (ZIL) improves performance significantly. http://blogs.sun.com/brendan/
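Adding a separate log device is a one-liner once the SSD is in the box, along these lines (pool and device names are placeholders):

Code:
# attach a dedicated ZIL (slog) device to an existing pool
zpool add tank log c3t0d0

# or mirror the log if you have two SSDs to spare
zpool add tank log mirror c3t0d0 c3t1d0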
 
I've not really looked into it, but there are some DTrace scripts you can use to show how much the ZIL, ARC and L2ARC are being hit, which may give you a clue as to any bottlenecks. Prepare for lots of reading though :p
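Even without DTrace, a couple of stock commands give a rough picture (pool name and interval are just examples):

Code:
# per-vdev bandwidth and IOPS, refreshed every 5 seconds
zpool iostat -v tank 5

# aggregate ZFS file-operation counts and throughput
fsstat zfs 5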
 
Oh, and I just noticed you're using NFS with ZFS - how are you finding that? You might find a very fast SSD or NVRAM device for your ZFS Intent Log (ZIL) improves performance significantly. http://blogs.sun.com/brendan/

Seems to work pretty well. My main bottleneck is actually memory on the vSphere hosts, but I'm sorting that. There's a little tweak you have to do to the NFS export to make it play nice with VMware, but that's fairly simple. Going to set up auto snapshots next.
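For anyone else setting this up, the tweak people usually mean is giving the ESX hosts root access on the NFS export (it may not be exactly the one meant above); hostnames and dataset below are made up:

Code:
# share the dataset read/write and let the listed ESX hosts mount as root
zfs set sharenfs='rw=esx1:esx2:esx3,root=esx1:esx2:esx3' tank/vmstore

# confirm the export options
zfs get sharenfs tank/vmstore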

SSDs would be great; I'll have to see if IBM make a part for a BladeCenter S and how much it'd cost.

I've not really looked into it, but there are some DTrace scripts you can use to show how much the ZIL, ARC and L2ARC are being hit, which may give you a clue as to any bottlenecks. Prepare for lots of reading though :p

One thing I'm really missing is a way of getting decent metrics out of ZFS. Any links? Or is it just a case of googling like mad?
 
Random aside, I'm using a Solaris host with ZFS as an NFS store for my 3-host VMware environment. The Solaris box currently has 4GB RAM and is hosting 4 450GB 15k SAS drives mirrored. People say that "more memory is better" - how much performance change am I likely to see if I double it to 8GB?

I know this question varies hugely depending on workload, but I'm trying to get a feel for whether it'd be "only a little bit" or "a huuuuuuuuuuuuuge amount". :)

Your setup is very similar to my home lab :) except I have 8GB RAM and 4x 500GB SATA drives.

I've found the arc_summary.pl script is handy for gauging the ZFS cache performance.

As it's a lab setup, I also run some VirtualBox VMs on the server, so I needed to limit the ARC size with an /etc/system setting for zfs_arc_max. Without the limit it would use around 6.5GB of memory and the cache hit ratio reported by the script above would be around 96% (that's for all I/O, not just vmdks). After capping it to 2.5GB to make space for the VirtualBox VMs, the hit rate fell to 92%.

Although the difference is not great, memory is so cheap it's worth trying.
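For anyone wanting to try the same cap, it's a single line in /etc/system followed by a reboot; the value is in bytes (hex), so the 2.5GB limit mentioned above looks like this:

Code:
* cap the ARC at 2.5GB (0xa0000000 bytes = 2560 MB)
set zfs:zfs_arc_max = 0xa0000000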

-Pete
 
BTW, here's the output from arc_summary.pl this morning.

Note, I'd just fired up a couple of VirtualBox VMs, so the cache hit ratio is a bit lower than its normal level when the box is acting as a file and mail server.

-Pete

Code:
# arc_summary.pl 
System Memory:
         Physical RAM:  8181 MB
         Free Memory :  3631 MB
         LotsFree:      126 MB

ZFS Tunables (/etc/system):
         set zfs:zfs_arc_max = 0xa0000000

ARC Size:
         Current Size:             856 MB (arcsize)
         Target Size (Adaptive):   1019 MB (c)
         Min Size (Hard Limit):    320 MB (zfs_arc_min)
         Max Size (Hard Limit):    2560 MB (zfs_arc_max)

ARC Size Breakdown:
         Most Recently Used Cache Size:          100%   1019 MB (p)
         Most Frequently Used Cache Size:         0%    0 MB (c-p)

ARC Efficency:
         Cache Access Total:             13700143
         Cache Hit Ratio:      90%       12398481       [Defined State for buffer]
         Cache Miss Ratio:      9%       1301662        [Undefined State for Buffer]
         REAL Hit Ratio:       79%       10849386       [MRU/MFU Hits Only]

         Data Demand   Efficiency:    92%
         Data Prefetch Efficiency:    60%

        CACHE HITS BY CACHE LIST:
          Anon:                        9%        1130894                [ New Customer, First Cache Hit ]
          Most Recently Used:         19%        2365999 (mru)          [ Return Customer ]
          Most Frequently Used:       68%        8483387 (mfu)          [ Frequent Customer ]
          Most Recently Used Ghost:    1%        140769 (mru_ghost)     [ Return Customer Evicted, Now Back ]
          Most Frequently Used Ghost:  2%        277432 (mfu_ghost)     [ Frequent Customer Evicted, Now Back ]
        CACHE HITS BY DATA TYPE:
          Demand Data:                38%        4726855 
          Prefetch Data:               7%        900967 
          Demand Metadata:            48%        5951340 
          Prefetch Metadata:           6%        819319 
        CACHE MISSES BY DATA TYPE:
          Demand Data:                30%        390600 
          Prefetch Data:              45%        597640 
          Demand Metadata:            22%        289782 
          Prefetch Metadata:           1%        23640 
---------------------------------------------
 
Looks like a useful script. Going to have to spend some time figuring out exactly what everything means though :)

Code:
bash-3.2# ./arc_summary.pl
System Memory:
         Physical RAM:  4087 MB
         Free Memory :  111 MB
         LotsFree:      63 MB

ZFS Tunables (/etc/system):

ARC Size:
         Current Size:             1142 MB (arcsize)
         Target Size (Adaptive):   1142 MB (c)
         Min Size (Hard Limit):    383 MB (zfs_arc_min)
         Max Size (Hard Limit):    3065 MB (zfs_arc_max)

ARC Size Breakdown:
         Most Recently Used Cache Size:          46%    528 MB (p)
         Most Frequently Used Cache Size:        53%    613 MB (c-p)

ARC Efficency:
         Cache Access Total:             538786342
         Cache Hit Ratio:      90%       487493479      [Defined State for buffer]
         Cache Miss Ratio:      9%       51292863       [Undefined State for Buffer]
         REAL Hit Ratio:       70%       377551359      [MRU/MFU Hits Only]

         Data Demand   Efficiency:    93%
         Data Prefetch Efficiency:    81%

        CACHE HITS BY CACHE LIST:
          Anon:                       20%        97662945               [ New Customer, First Cache Hit ]
          Most Recently Used:         13%        66099474 (mru)         [ Return Customer ]
          Most Frequently Used:       63%        311451885 (mfu)        [ Frequent Customer ]
          Most Recently Used Ghost:    0%        4700961 (mru_ghost)    [ Return Customer Evicted, Now Back ]
          Most Frequently Used Ghost:  1%        7578214 (mfu_ghost)    [ Frequent Customer Evicted, Now Back ]
        CACHE HITS BY DATA TYPE:
          Demand Data:                58%        285362436
          Prefetch Data:              27%        135126281
          Demand Metadata:            13%        66935439
          Prefetch Metadata:           0%        69323
        CACHE MISSES BY DATA TYPE:
          Demand Data:                37%        19445097
          Prefetch Data:              60%        30946874
          Demand Metadata:             1%        884392
          Prefetch Metadata:           0%        16500
---------------------------------------------
 
Just for what it's worth, here's an arc_summary from a test box with just 2GB of RAM. The box purely serves up iSCSI targets for ESX and SMB shares.

Code:
# ./arc_summary.pl
System Memory:
         Physical RAM:  1783 MB
         Free Memory :  208 MB
         LotsFree:      27 MB

ZFS Tunables (/etc/system):

ARC Size:
         Current Size:             897 MB (arcsize)
         Target Size (Adaptive):   897 MB (c)
         Min Size (Hard Limit):    167 MB (zfs_arc_min)
         Max Size (Hard Limit):    1337 MB (zfs_arc_max)

ARC Size Breakdown:
         Most Recently Used Cache Size:          70%    635 MB (p)
         Most Frequently Used Cache Size:        29%    262 MB (c-p)

ARC Efficency:
         Cache Access Total:             108669228
         Cache Hit Ratio:      98%       107071361      [Defined State for buffer]
         Cache Miss Ratio:      1%       1597867        [Undefined State for Buffer]
         REAL Hit Ratio:       97%       106199618      [MRU/MFU Hits Only]

         Data Demand   Efficiency:    93%
         Data Prefetch Efficiency:    50%

        CACHE HITS BY CACHE LIST:
          Anon:                        0%        409173                 [ New Customer, First Cache Hit ]
          Most Recently Used:          3%        3935568 (mru)          [ Return Customer ]
          Most Frequently Used:       95%        102264050 (mfu)        [ Frequent Customer ]
          Most Recently Used Ghost:    0%        165328 (mru_ghost)     [ Return Customer Evicted, Now Back ]
          Most Frequently Used Ghost:  0%        297242 (mfu_ghost)     [ Frequent Customer Evicted, Now Back ]
        CACHE HITS BY DATA TYPE:
          Demand Data:                10%        11552171
          Prefetch Data:               0%        739510
          Demand Metadata:            86%        92360119
          Prefetch Metadata:           2%        2419561
        CACHE MISSES BY DATA TYPE:
          Demand Data:                50%        803076
          Prefetch Data:              45%        727669
          Demand Metadata:             3%        51573
          Prefetch Metadata:           0%        15549
---------------------------------------------
 
Good to see so many people are using ZFS for SAN/NAS applications. I'm using 4x 500GB SATA drives in RAID-Z (raidz1) in my test environment, exporting:

* 4Gb FC target LUNs
* 1Gb iSCSI target LUNs
* CIFS network file storage
* Apple Time Machine backup storage

And it's all working perfectly; I've never had a noticeable issue with I/O performance either. The slowest seems to be CIFS, but that's perfectly understandable.
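In case anyone's curious how the block-level exports are done, each LUN is just a zvol; on the Solaris builds of that era it could be as simple as the following (pool, volume name and size are made up, and newer releases use COMSTAR rather than the legacy shareiscsi property):

Code:
# create a 100GB volume (zvol) to present as a LUN
zfs create -V 100g tank/esx-lun0

# legacy iSCSI target daemon: export the zvol as an iSCSI target
zfs set shareiscsi=on tank/esx-lun0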

I've never run across this script before, and given the amount of stuff going on here it's probably worth having a look at what's happening in terms of the ARC.

Thanks guys!
 