Good Cheap Server - HP Proliant Microserver 4 BAY - OWNERS THREAD

Having a slight issue running FreeNAS on my microserver. Basically, whenever I add an album to my music library, Foobar2k doesn't pick it up. To solve it I have to go onto the server and check 'Set permission recursively', then it's fine until I add another album.

Any ideas?
 
Okay, so with the end of the month approaching I've been trying to think about what route to go with the server... In particular, what to do about populating it with drives. I certainly can't afford to buy 4/5/6x WD Red 2/3Tb drives all in one go, but based on my understanding of ZFS I feel like there might be an optimal approach to this.

The problem is that since I can't change the number of disks in the array (for RAIDZ) or buy enough disks to make the array in one go, how can I have my array but keep it upgradeable? I've currently got the following drives:

1x 250Gb (from the microserver)
1x 320Gb (from an old Dell PC I threw out years ago)
1x 1Tb (the storage drive in my current server)
1x 500Gb (in my main PC as a 3rd drive, don't use it for an awful lot)

So let's say I go out and buy 1x WD Red 2Tb to start me off... If I were to swap the 500Gb and the 1Tb over - giving my main PC access to the current storage files for transferring in the process - and then create a 4-drive RAIDZ array out of the 2Tb, 500Gb, 320Gb and 250Gb, then I'd have about 750Gb of usable space (and a lot of wasted space)... But then over the next few months I could progressively replace the smallest drive with a WD Red 2Tb and let the array rebuild, so I'd go from ~750Gb usable to ~960Gb, then ~1.5Tb and finally to ~6.0Tb when the final drive gets replaced. Does that sound right? Am I overlooking any glaring flaws with this idea? Is it bad for the drives, etc.?
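The progression does check out as rough back-of-envelope arithmetic: RAIDZ1 gives you roughly (number of drives − 1) × smallest drive of usable space. A quick sketch (`usable` is just an illustrative helper; real ZFS metadata and padding overhead will shave a bit off these figures):

```shell
# Rough RAIDZ1 usable space: (drives - 1) * smallest drive in Gb.
# Ignores ZFS overhead -- ballpark figures only.
usable() { echo $(( ($1 - 1) * $2 )); }

usable 4 250    # 2Tb + 500Gb + 320Gb + 250Gb  -> 750
usable 4 320    # after the 250Gb is replaced  -> 960
usable 4 500    # after the 320Gb is replaced  -> 1500
usable 4 2000   # all four drives are 2Tb Reds -> 6000
```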

Not sure if it's what you're looking for, but I use DrivePool in my server. It allows you to add disks to the pool as and when you acquire them. For instance, I have 2x 3Tb Reds in my server at the moment set to mirror each other, but when they get full I can add another 2 drives and add those to the pool. It's basically a software RAID setup.

Cheers

SM
 
Not sure if it's what you're looking for, but I use DrivePool in my server. It allows you to add disks to the pool as and when you acquire them. For instance, I have 2x 3Tb Reds in my server at the moment set to mirror each other, but when they get full I can add another 2 drives and add those to the pool. It's basically a software RAID setup.

Thanks for the advice, but DrivePool is a Windows thing though, right? I'm looking for a Linux solution, as I intend to use ZFS (which also acts as software-based RAID) with an OS such as FreeNAS or NAS4Free.
 
Okay, so with the end of the month approaching I've been trying to think about what route to go with the server... In particular, what to do about populating it with drives. I certainly can't afford to buy 4/5/6x WD Red 2/3Tb drives all in one go, but based on my understanding of ZFS I feel like there might be an optimal approach to this.

The problem is that since I can't change the number of disks in the array (for RAIDZ) or buy enough disks to make the array in one go, how can I have my array but keep it upgradeable? I've currently got the following drives:

1x 250Gb (from the microserver)
1x 320Gb (from an old Dell PC I threw out years ago)
1x 1Tb (the storage drive in my current server)
1x 500Gb (in my main PC as a 3rd drive, don't use it for an awful lot)

So let's say I go out and buy 1x WD Red 2Tb to start me off... If I were to swap the 500Gb and the 1Tb over - giving my main PC access to the current storage files for transferring in the process - and then create a 4-drive RAIDZ array out of the 2Tb, 500Gb, 320Gb and 250Gb, then I'd have about 750Gb of usable space (and a lot of wasted space)... But then over the next few months I could progressively replace the smallest drive with a WD Red 2Tb and let the array rebuild, so I'd go from ~750Gb usable to ~960Gb, then ~1.5Tb and finally to ~6.0Tb when the final drive gets replaced. Does that sound right? Am I overlooking any glaring flaws with this idea? Is it bad for the drives, etc.?

That is the accepted method for resizing a RAID array. As far as I know it's not bad for drives, but you do have to be aware that should a drive die while you're rebuilding, any data on the array would be lost. IIRC rebuilding arrays can be fairly hard on drives and can sometimes cause premature failure if the drive is already on its way out.
 
That is the accepted method for resizing a RAID array. As far as I know it's not bad for drives, but you do have to be aware that should a drive die while you're rebuilding, any data on the array would be lost. IIRC rebuilding arrays can be fairly hard on drives and can sometimes cause premature failure if the drive is already on its way out.

Yeah, I guess that's my concern, especially since I assume the 250Gb drive that comes with the server could be any old piece of junk... The 320Gb has already lived in a machine for several years as its OS drive, and the 500Gb was originally taken out of an external caddy that had a fair amount of use too... But of course, assuming the WD Reds are good quality, each time I replace one and get through a rebuild the remainder of the array has one less old, possibly dodgy drive, so the risk goes down, hmmmmm

In terms of rebuilding though, I thought a ZFS rebuild (a resilver) was relatively conservative in that it doesn't have to rebuild any of the empty sectors (i.e. it only rebuilds the parts that contain data)?
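For reference, the swap-and-resilver step on a ZFS pool is only a couple of commands. A sketch, with a placeholder pool name (`tank`) and placeholder device names (adjust to your actual setup):

```shell
# Let the pool grow automatically once every drive in the vdev is bigger.
zpool set autoexpand=on tank

# Swap the old 250Gb (here ada3) for the new 2Tb Red (here ada4);
# ZFS starts resilvering onto the new drive immediately.
zpool replace tank ada3 ada4

# Watch resilver progress -- only allocated blocks are copied, not empty space.
zpool status tank
```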
 
I've got no idea to be honest; it seems advised in a lot of places as the best practice, so I can't see it being harsh on drives at all.

Providing there are no bad sectors on any of the old drives and they're healthy according to SMART, I'd go for it.
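That SMART check is quick to do from the FreeNAS shell with smartmontools. Something like this (the device name is a placeholder; yours will vary):

```shell
# Kick off a short self-test on the drive (takes a couple of minutes).
smartctl -t short /dev/ada1

# Then inspect the attributes that best predict failure:
# reallocated, pending and offline-uncorrectable sector counts
# should all ideally be zero on a healthy drive.
smartctl -A /dev/ada1 | egrep 'Reallocated_Sector|Current_Pending|Offline_Uncorrectable'
```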

I'm in the same position as you as far as drive costs are concerned, but as I have the capacity to have 14 drives connected, I'm just running 2x 1Tb in a pool for now and I'll gradually collect the 8 drives I need, throwing another into the pool should I run low on space.
 
I've got no idea to be honest; it seems advised in a lot of places as the best practice, so I can't see it being harsh on drives at all.

Providing there are no bad sectors on any of the old drives and they're healthy according to SMART, I'd go for it.

I'm in the same position as you as far as drive costs are concerned, but as I have the capacity to have 14 drives connected, I'm just running 2x 1Tb in a pool for now and I'll gradually collect the 8 drives I need, throwing another into the pool should I run low on space.

14 Drives... in a microserver :eek:? I'm assuming you must be talking about some other bit of server kit surely?

I think I probably will go for it - I still intend to have duplicate copies of my most important files on another device anyway, so in an absolute worst-case scenario I just lose some movies and episodes, which I could replace if I needed to.

I'm also still torn between getting ESXi set up and having a FreeNAS VM, versus just not bothering and installing FreeNAS directly... Mostly for the reason that - would I really use the capability to run other VMs? I don't know... In my head I thought trying out other Linux distros via VMs would be useful, but realistically, will the microserver be any good for that? If it's a graphical desktop it would rely on using some sort of VNC viewer - would the microserver handle that smoothly? (Besides which, for that purpose I could install VirtualBox or something on my main PC.)
 
14 Drives... in a microserver :eek:? I'm assuming you must be talking about some other bit of server kit surely?

I think I probably will go for it - I still intend to have duplicate copies of my most important files on another device anyway, so in an absolute worst-case scenario I just lose some movies and episodes, which I could replace if I needed to.

I'm also still torn between getting ESXi set up and having a FreeNAS VM, versus just not bothering and installing FreeNAS directly... Mostly for the reason that - would I really use the capability to run other VMs? I don't know... In my head I thought trying out other Linux distros via VMs would be useful, but realistically, will the microserver be any good for that? If it's a graphical desktop it would rely on using some sort of VNC viewer - would the microserver handle that smoothly? (Besides which, for that purpose I could install VirtualBox or something on my main PC.)

I forget I'm posting in the Microserver thread; mine's an AM3-based build. I won't actually use 14 drives - I'm aiming for an 8-drive RAIDZ2, and whatever is stored on my current pool will be moved across to the array when I have enough drives.

I've found running ESXi handy. I don't particularly like Transmission as a torrent client, and I had a lot of issues with CouchPotato running from within a FreeNAS jail, so I now have both of those running on Server 2012.

The main concern running VMs isn't CPU speed, as it's unlikely they will do anything particularly intensive - it will be having enough RAM. It's not like running ESXi will do any harm, and you will find a use for it if it's there. You do need to bear in mind you will need a 5th drive to use as your ESXi datastore as well; I have an old 120Gb 2.5" SATA drive serving that purpose at the moment.
 
The main concern running VMs isn't CPU speed, as it's unlikely they will do anything particularly intensive - it will be having enough RAM. It's not like running ESXi will do any harm, and you will find a use for it if it's there. You do need to bear in mind you will need a 5th drive to use as your ESXi datastore as well; I have an old 120Gb 2.5" SATA drive serving that purpose at the moment.

I figured I'd just buy a 64Gb SSD or something along those lines for that... It's still a very difficult decision though... Part of me thinks the N54L will be an amazingly solid NAS, so maybe I should stick to that and down the line put together a second machine that's more of a workhorse server that runs a bunch of VMs... But that's more expensive ARRRGGH lol :p

I wonder, could you take a 128Gb SSD, say, and partition it in such a way that it can act as the datastore for some VMs, the ZIL (write log) for ZFS and the read cache for ZFS (L2ARC) all in one drive - that would be pretty nifty... The problem with all this is you're buying hardware and parting with cash a bit at a time, working towards a goal you aren't entirely sure of - if I had all the components to play with right now, I'd just try installing several setups and decide which I like best :rolleyes:
 
You'd need three separate drives, and you can only pass an entire drive, rather than individual partitions, through to ESXi.

I was going to buy a Microserver but figured I'd outgrow it pretty quickly, so I went with a cheap AMD solution recycling an old case. It cost me around £130 for a Sempron 190 X2, 8Gb of RAM, a board and a 500W Corsair PSU. When I upgrade my main PC I can recycle its motherboard into the server, so I can stick another 8Gb of memory in and either pick up an Athlon X4 620e or similar off eBay, or just stick my Phenom II in the server should I need more CPU grunt.
 
Is the N54L worth the upgrade from the N40L?
I just got an N54L for £85 after cashback, and it's a solid upgrade over my N36L. Roughly double the CPU performance, doubled my RAM (I consolidated the RAM from both of them), and it's a lot quieter (that may be coincidence, or it may be that one of the old fans had got noisier over time). All that, and I now have an N36L to strip for parts if/when I ever need to. Or viewed another way, I've got another 3+ years of MicroServer goodness ahead of me.
 
I have both. The CPU is certainly faster, but that's about it. I've not noticed a decrease in noise though, but then both servers are now on together in an ESX cluster.
 
Using mine as an HTPC, and it has started playing up recently. If I leave it idle for a few hours there is no sound output. In the sound properties section of Control Panel you can see the green bar showing activity, but nothing comes out of the speakers. I need to restart to solve it. I'm using HDMI output to a TV from a 6450. Any ideas?
 
The SSD should fit underneath the optical bay, so you should be fine.

Bear in mind though, I believe there are only a limited number of onboard SATA sockets, so you may be limited to 5 drives in total anyway.
 
Thanks - it was the SATA ports I was worried about. A guide seemed to suggest the SSD used the SATA port for the optical drive. Aren't 4 of the 5 SATA ports at the back of the drive enclosures? Can you gain access to the non-hot-plug SATA ports conventionally? Ideally I'd like an SSD, a Blu-ray drive, the hard drive that came with it (to be replaced by a 2/3/4Tb drive), my current 1Tb hard drive (also to be replaced at a later date) and the possibility to add one or two more if required.
 
The 4 ports that go to the back of the drive enclosures go to a single internal mini-SAS connector on the motherboard.
You would have to run extension cables from the connector in one of the enclosures to your SSD/BD-ROM. You could potentially unscrew the connector from the back of the enclosure, but I don't think there is enough slack in the cable to run it up to the ODD bay, so you'd need extensions anyway.
 
Hmmm, I might leave the optical drive for now. The SSD is more important anyway. Every time I look into which SSD to get, I get nowhere. I've looked at SanDisk's Pulse/Ultra/v300 range, which seem to do well, but then I hear stories of their reliability not being great. Then some friends said if I can get one in the same price range with something other than a SandForce controller, go with that. Decisions, decisions.
 