I found FreeNAS's ZFS implementation to be quite flaky - I got my pool all set up and everything ran fine for a few days of testing, then it completely lost the config and couldn't reimport the pool.
The downside of using FreeNAS or FreeBSD is that their ZFS is many, many versions behind the latest and greatest in OpenSolaris.
I am currently running OpenSolaris b134, which contains lots of features and bug fixes that I think make it worth going with the dev build. I've been running a 4TB pool on it for two months now and it's been perfectly reliable.
ZFS is an amazing filesystem - if it weren't for the messing about with licences and the shutting down of the OpenSolaris project it would be perfect. My current plan is to run OpenSolaris b134 until Solaris 11 is released - that should have a new enough version of ZFS to import my current pool.
Regarding using disks of different sizes in one pool - there are various ways of doing it, each with different trade-offs. The approach in the link above gives you 100% of the space, but at the cost of being a nightmare (even for massive geeks) to manage, upgrade or recover. The solution I've gone with is two raidz groups within one pool, as follows:
    NAME         STATE     READ WRITE CKSUM
    tank         ONLINE       0     0     0
      raidz1-0   ONLINE       0     0     0
        c4t9d0   ONLINE       0     0     0
        c4t5d0   ONLINE       0     0     0
        c4t8d0   ONLINE       0     0     0
        c4t7d0   ONLINE       0     0     0
      raidz1-1   ONLINE       0     0     0
        c4t12d0  ONLINE       0     0     0
        c4t4d0   ONLINE       0     0     0
        c4t11d0  ONLINE       0     0     0
        c4t10d0  ONLINE       0     0     0
The original setup was 4 * 300GB disks in raidz1-0 (the first single-parity group) and 4 * 1TB disks in raidz1-1 (the second). That gives ((4-1)*300) + ((4-1)*1000) = 900 + 3000 = 3900GB usable (roughly - you don't lose exactly one disk per group to parity, but it's close enough).
I've now started replacing the 300GB disks (mainly because they are old and loud) with 1TB disks - once all four are replaced, the first raidz group will be able to use the disks' full size and the pool will grow to around 6TB usable.
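If I remember rightly, recent builds also have an autoexpand pool property controlling whether that growth happens automatically once the last small disk is swapped out - check zpool(1M) on your build, but it's something like:

    # grow the pool automatically once every disk in a group is bigger
    zpool set autoexpand=on tank

    # or expand onto a single replaced disk by hand
    zpool online -e tank c4t9d0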
I've got two choices for future upgrades from here: either replace the four 1TB disks with 2TB+ disks, or add more raidz/mirror groups to the pool. The only thing you have to be aware of before you start is that you cannot, no matter what you do, remove a top-level group from the pool - if you accidentally add a single disk as another top-level group you are buggered: the only way out is to destroy the pool and restore from backup. This is the one real downside of ZFS at the moment, and it's unlikely to be fixed any time soon because enterprise customers don't care about the limitation - they tend to add dozens of disks at a time and never need to remove one disk to make room for another; they just add a bunch more.
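For that reason it's worth dry-running any zpool add before committing - the -n flag prints the layout that would result without changing anything. For example (hypothetical new device names):

    # DANGER: without 'raidz' this would add c4t13d0 as an unremovable
    # top-level stripe - the -n makes it a harmless preview
    zpool add -n tank c4t13d0

    # the safe upgrade: preview adding a whole new raidz group at once
    zpool add -n tank raidz c4t13d0 c4t14d0 c4t15d0 c4t16d0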
Compared to the hardware RAID I had before, I love ZFS. It's incredibly fast at both reading and writing (280MB/s average copying from one filesystem in the pool to another, and limited only by gigabit ethernet when reading/writing across the network), and replacing failed disks and rebuilding the array is trivially easy (like a Drobo, really). I haven't got hot-swap bays so I need to shut down to swap disks, but if you had them it would work 90% the same as a Drobo: pull out the smallest or failed disk, put the new one in, find out what device name the new drive has been given, then zpool replace <pool> <old device> <new device> and off it goes rebuilding the array. It's also much quicker at rebuilding than my hardware array ever was - it took about 2 hours to resilver 940GB when I replaced one of the 1TB drives.
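The whole swap really is just a couple of commands - roughly this, with made-up device names:

    # see which disk has failed (or pick the one being upgraded)
    zpool status tank

    # rebuild onto the new disk
    zpool replace tank c4t5d0 c4t13d0

    # run again to watch the resilver progress
    zpool status tank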
By comparison, a Drobo is expensive, slow and, worst of all, totally inscrutable. There have been numerous reports of Drobos just losing all their data one day with no apparent cause or reason - and if that happens you've got no way to recover at all. One of the handy things with ZFS is that all the config for the pool is stored on the disks themselves: if my machine dies, all I need to do is plug the drives into another system, boot a live CD with a new enough version of ZFS and run zpool import, and it should pick everything up. Even if some of the disks are missing, it's possible to import as long as enough parity data exists.
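For anyone who hasn't tried it, the recovery really is that short (assuming the pool is called tank, as above):

    # scan the attached disks and list any importable pools
    zpool import

    # import by name (or by the numeric pool ID it prints)
    zpool import tank

    # if the pool wasn't cleanly exported from the dead machine, force it
    zpool import -f tank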
What Oracle have done to OpenSolaris is very annoying, but the latest dev build (b134) seems very stable and I don't see any reason to run away from it just because it's the last of its kind.
Mmmmmm, rambling...