Good Cheap Server - HP Proliant Microserver 4 BAY - OWNERS THREAD

Is it worth spending the extra £ for the Reds, or will normal low-RPM/green disks survive?

My current server has been running 3x1.5TB Samsung drives non-stop for a few years now and I've only had one fail (DOA). Data content is almost exclusively media and Steam/non-critical backups.

I have been sticking with Reds, mainly due to the extra year of warranty, plus they're not much more money than WD Greens. Seagates are quite a bit cheaper, but then the RMA process is rubbish.
 
Got my cashback through the other day :) 6-7 weeks later.

One thing I am struggling with, though, is its ability to run anything else as well as the server.

So the current setup is standard WD Blue HDDs, ESXi running off USB, 8GB RAM. I've got Windows Server 2012 Essentials running in a VM at the moment, and regardless of how much RAM I allocate to it or the CPU core settings, whenever I try to create another VM (even just XP or Ubuntu) it grinds to a halt. It's as if the processor just isn't powerful enough to cope with both running at once.
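
I gather the way to check whether it really is CPU contention is esxtop from the ESXi shell (assuming SSH is enabled), something like:

Code:
    # From the ESXi shell (enable SSH or use the local console)
    esxtop
    # Press 'c' for the CPU view and watch the %RDY column per VM.
    # %RDY is the time a VM was ready to run but couldn't get a physical core;
    # sustained values above roughly 10% per vCPU usually mean CPU contention.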

I've tried everything I can think of and it's left me a bit disappointed :(

Any advice? Have any of you got it successfully running Windows Server and something else? If so, what configuration have you got?
 
..snip... Make sense? :cool:

Your OS VM data is not going to take much space on your array really; the bulk of what sits on the array will be your media. Backing up that data is the next big decision, whether via RAID options or copying off to another box/attached storage etc.

Does make perfect sense, cheers :D In terms of my media data, after some reading around I'm leaning towards having a NAS4Free VM, so I can use ZFS to introduce some redundancy. What I can't decide is exactly how this should be configured... I could put 2 drives in a simple mirror, and the remaining 2 (or 3 if I put 2 in the OD bay) striped, so I'd have a smaller area with redundancy to keep more important files, and a larger area for keeping things I'm less attached to... Something to think about I guess, as is the fact that I probably won't be able to afford 4 or 5 big drives all in one go, and as far as I know RAIDZ arrays can't be extended with additional discs (at least not while keeping the existing data intact) (correct me if I'm wrong).
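
Roughly what I have in mind, in zpool terms (device names below are made up; on FreeBSD you'd check yours with camcontrol devlist):

Code:
    # Pool 1: two-disk mirror for the important stuff
    zpool create safe mirror ada0 ada1
    # Pool 2: plain stripe across the remaining disks (no redundancy at all)
    zpool create scratch ada2 ada3
    # Sanity check both pools
    zpool status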

Reading through this thread I love the sound of a program I can't remember the name of ("something drivepool" I think), where you can treat your drives as one big pool but flag certain files or folders to always be stored in 2 physical locations... but I think this is a WHS-only option, and I couldn't find any reference to a setup that would work in that way under a Linux/BSD install (unless anyone can suggest one :rolleyes:)

If ESXi goes wrong, or the USB stick it's installed to fails, you just put in a new one and reinstall ESXi.
You will likely have to re-register your VMs (browse to the datastore, right-click the .vmx file and select 'Add to inventory'), but you won't lose anything (as long as you weren't mad enough to create a VM on the same USB stick).
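
If you're comfortable with SSH, the same re-registration can be done from the command line (the path below is just an example; adjust it to your datastore and VM):

Code:
    # Register an existing VM from the ESXi shell; prints the new VM ID
    vim-cmd solo/registervm /vmfs/volumes/datastore1/MyVM/MyVM.vmx
    # Confirm it's back in the inventory
    vim-cmd vmsvc/getallvms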

Sounds totally fine to me, thanks :)
 
The ONE thing I love about the Gen8 is the iLO, but I can live without it.

If it's just the basic 'HP iLO Standard' then it is somewhat limited for IRC, but useful if you want to power the box on/off. The Integrated Remote Console (IRC/virtual KVM, supporting both text and graphics) is pre-OS only. AFAIK for full IRC you'll need to purchase 'HP iLO Advanced' for £300ish.
 
£300ish -- eek!
 
So the £60ish Remote Access card for the G7 (don't know if it is an iLO card) is actually really good value?!
 
Picking up one of these and some WD Reds, as I have decided I need a more robust backup solution than an external drive for each computer, used whenever I remember. Thinking about an OS, FreeNAS seems the best fit for my requirements. Is there anything else I should think about based on the needs below:

1. Storage of media files for playback on my HTPC/laptops (Windows 7 and OS X)
2. Ability to schedule regular backups of folders from both Windows and OS X machines (mainly photos these days)
3. I would be interested in Time Machine support for two MacBook Airs, for sheer laziness in the event of laptop failure, but it's not a major requirement
4. Ability to grow storage over time - I think I'll start with 2x3TB drives, but the ability to add more would be useful.
 
If you do go for FreeNAS, be aware of the limitations of adding space and plan ahead.

Assuming you're going to use RAIDZ (RAID 5), say you start with 3x3TB to give you 6TB of space and the ability to lose one drive, great. But what you can't do is later add another drive into the mix, i.e. go to 4x3TB.

What I did was start with 2x1TB + 1x2TB to give myself 2TB usable space. As I could afford them, I replaced the 1TB drives with 2TB ones and expanded the array. I now have 4TB usable space with fault tolerance.

Of course if you don't need fault tolerance it doesn't matter; you just add drives and the space grows.
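
For anyone wondering, the grow-by-replacing process looks roughly like this (pool name 'tank' and device names made up):

Code:
    # Let the pool grow automatically once every disk in the vdev is bigger
    zpool set autoexpand=on tank
    # Swap the first 1TB disk (ada1) for a new 2TB one (ada4)
    zpool replace tank ada1 ada4
    # Wait for the resilver to finish before touching the next disk
    zpool status tank
    # Repeat for each remaining small disk; 'zpool list' shows the new size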
 
Hmm, this is how I thought it worked... Is there any way to "trick" the array into thinking that 2 partitions on one drive are actually 2 physical drives? I.e. get 2x3TB and 1x1TB, but make the array think the 1TB is 2x500GB, giving you initially 1TB usable space... then gradually replace the "2" drives with 3TB drives. Of course it's a bit of a dumb thing to do, as if the 1TB drive goes down the array is broken... hmm, yeah, think I answered my own rambling there!
 
Placed my order; I think it works out at about £100 after the cashback, not too shabby :) If I can sell off some of my existing stuff it might end up costing almost nothing overall... (Got a Fractal Design Array R2 with a crappy Intel Atom board in it... might get close to £100 for it if I'm lucky)
 
Thanks for that. I need to have a think about what to do regarding buying/configuring drives.

Just to check my understanding: you can expand the size one disk at a time, but don't get the benefit of larger drives until all the disks are replaced? But what you can't do is start with 4 disks and add a 5th?

Seems 4x1TB or 4x2TB would give me 3TB/6TB of space with some fault tolerance, and then I could expand by buying 4 new drives and replacing them one by one down the line if I need more space.
 
You're absolutely correct. You will also have to wait between drive changes for it to rebuild onto the drive you have just added. The number of drives you start with is the number of drives in the array; you cannot add a 5th if you start with 4, etc.

What you can do, however, is start with a 4x1TB array, then build a second 4x1TB array and run them in a pool, so it presents the two arrays as a single drive/array.
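
In ZFS terms that's just adding a second RAIDZ vdev to the existing pool, roughly (names made up again):

Code:
    # Pool 'tank' already has one 4-disk RAIDZ vdev
    # Add a second 4-disk RAIDZ vdev; the pool then stripes across both
    zpool add tank raidz ada4 ada5 ada6 ada7
    # Capacity jumps by the new vdev's usable space
    zpool list tank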
 
I'm fairly sure this is all correct... You can add extra drives to the ZFS "pool", but they can't become part of an existing RAIDZ array (unless, as you pointed out, they are replacing an existing smaller disk). So for instance if you had 4x1TB (~3TB available as storage) in a RAIDZ array, you could add a 5th 1TB disk to the ZFS pool and your available storage on the pool would go up to 4TB, but the extra 1TB you added would just be an individual drive, and a failure of that new drive would not be protected by the redundancy of the other array. (Correct me if I'm wrong guys, I've been trying to research this myself also!)
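
From what I've read, ZFS at least tries to stop you doing this by accident; something like:

Code:
    # Adding a lone disk to a pool whose only vdev is a RAIDZ
    zpool add tank ada4
    # zpool refuses, warning about a mismatched replication level;
    # forcing it creates exactly the unprotected single-disk stripe above
    zpool add -f tank ada4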

What you could do, for example, is have 3x1TB drives in a RAIDZ1 (RAID 5) and then at a later stage add another 3x1TB drives as a second RAIDZ1, which would give you a pool with 4TB storage overall and something similar to a RAIDZ2 (RAID 6) level of redundancy (though not quite, because you could only tolerate 2 simultaneous failures if they weren't in the same array).

While we're on the subject of ZFS, I've been reading around about running a ZFS-based system inside an ESXi VM, and in quite a few places I've seen people giving pretty serious warnings about doing so... Mostly along the lines of "don't do this unless you know exactly what you're doing", "many people have lost all their data by doing this", etc., plus a lot of discussion suggesting you have to use RDM to get a NAS VM working correctly (and whether this is a good idea or not). Can anyone help clarify things a little here?
 
You're absolutely correct on the first part.

My FreeNAS (ZFS) server runs as a VM in ESXi using RDMs and I've not had a single issue. Performance takes a little hit, but I still get 70MB/s reads over gigabit LAN. FWIW I had never used ESXi previous to this, so unless I got lucky I would say they are wrong.
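
Creating an RDM by hand on ESXi looks roughly like this (the disk identifier below is a made-up placeholder; list yours with ls /vmfs/devices/disks/):

Code:
    # Map a physical disk straight through to the FreeNAS VM as an RDM
    # -z = physical-mode RDM (use -r for virtual mode instead)
    vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK_ID \
        /vmfs/volumes/datastore1/FreeNAS/disk1-rdm.vmdk
    # Then add disk1-rdm.vmdk to the VM as an existing hard disk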
 
Well, that's the same position I'll be in, so quite reassuring... Did you do any sort of testing to make sure recovering the array works as intended? I was contemplating grabbing 3 or 4 old disks, setting up the VM and a RAIDZ with 3 disks say, sticking some test data on them and then unplugging one disk to see what happens, then replacing it with a different disk and checking that the array rebuilds correctly, etc., before I put in proper drives and start filling it with more important stuff. Can't decide if this is a bit too paranoid; I just figure that when something goes wrong in the future it would be far less stressful to at least have some experience of the recovery procedure.
 
I'm only using a pool in FreeNAS at the moment, so I've not done any physical testing.
I did exactly that with VirtualBox though. Created arrays, removed and replaced disks with larger drives to expand arrays etc., and didn't notice any data loss. I was only putting around 50GB on the array when testing, but it rebuilt correctly etc. from within VirtualBox.
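
For reference the drill itself is only a handful of commands, something like (device names made up):

Code:
    # Simulate a failure: take one member of the RAIDZ offline
    zpool offline tank ada2
    zpool status tank    # pool shows DEGRADED but data is still readable
    # "Replace" the dead disk with a spare and watch it resilver
    zpool replace tank ada2 ada4
    zpool status tank    # back to ONLINE once the resilver completes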
 