Good Cheap Server - HP Proliant Microserver 4 BAY - OWNERS THREAD

Looks like I have missed the boat on these old models. The new ones should arrive in the next few weeks.

Has anyone used an Adaptec RAID card with these servers? Is it a straight connection to the backplane?

Thanks
 
OK, write speeds are a bit low on the RAID array. Can you enable write caching and retest? It should be under Device Manager -> Disk Drives -> Policies.

If it's still low, break the array and test the F4's individually.


Hi Zarf,

I've double checked and write caching is enabled already on the array.
Out of curiosity I turned it off and re-tested. This resulted in a 1MB/s improvement on yesterday's results, but that could just be smoke and mirrors, so I'll turn it back on.

I'll break the array and test the drives separately later tonight.

Thanks!

Andy.
 
Zarf,

I broke the array and re-tested the disks separately; the results were the same!
So I checked the BIOS and the IDE settings were set to RAID and 3Gb/s, so I changed it to RAID and AUTO. Retesting the individual drives, the write speeds were over 110MB/s, so all looked good.
I remade the array in the RAID controller, restarted, and I'm now back down to 24MB/s.

Now I'm really confused as I thought it was fixed!

Cheers for any help,

Andy.
 

Err, this isn't a hardware RAID solution but a mobo-based software solution, hence the CPU comes into it. That may be why it's so pants at RAID.

Wanna sell it yet ;) ??
 
Just got it up and running; for the price, the build quality of these is incredible! Now to get 2008 R2 on there.

> PSU change, to 150W from 200W or vice versa. No change in CPU.

I'm not even sure the PSU will change; a lot of HP's marketing refers to the current model as having 200W, so it may just be a continuation of that mistake.
 
> Err, this isn't a hardware RAID solution but a mobo-based software solution, hence the CPU comes into it. That may be why it's so pants at RAID.
>
> Wanna sell it yet ;) ??

If it was RAID5 then maybe, but there's so little overhead with RAID1 that it shouldn't have any issues. It is FakeRAID, but it should still give better results than that.

And it fits the bill perfectly. If I can't get RAID1 to work, I'll just back up the drive daily to the other, so not an issue.

Can't beat it, been looking for a solution for 12 months and this is the first thing to fit the bill, so I'll be keeping it ta :p
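The daily-copy fallback instead of RAID1 is easy to script. Here's a minimal Python sketch (paths and the `mirror` helper name are my own; on Windows, `robocopy /MIR` on a scheduled task does the same job natively, and unlike this sketch it also deletes files removed from the source):

```python
import filecmp
import shutil
from pathlib import Path

def mirror(src: str, dst: str) -> int:
    """Copy new or changed files from src to dst. Returns the number copied.
    Unchanged files (same size/mtime) are skipped, so a nightly run is cheap."""
    copied = 0
    src_p, dst_p = Path(src), Path(dst)
    for f in src_p.rglob("*"):
        if f.is_file():
            target = dst_p / f.relative_to(src_p)
            target.parent.mkdir(parents=True, exist_ok=True)
            # shallow cmp compares the stat signature (size + mtime) only
            if not target.exists() or not filecmp.cmp(f, target, shallow=True):
                shutil.copy2(f, target)  # copy2 preserves the mtime
                copied += 1
    return copied
```

Schedule it nightly with Task Scheduler (or cron) and you get most of the protection RAID1 was meant to give, minus the instant failover.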
 
Would these things be any good for running VMs if I added more RAM? Is the RAM fairly cheap? I'd like to have a box at home to run maybe 4 VMs simultaneously (DC, SQL Server, SharePoint WFE, Windows 7 Dev box).

Would this be an ideal box for the job?
 
Hi,

My tests showed that the on-board fakeRAID performed no better (and maybe a little worse) than the software RAID built into Windows Server 2008 R2 64bit. This was true for RAID 0 and RAID 1, and surprisingly also for software RAID5 (when compared to on-board RAID0).

Config:
1x 160GB system drive in ODD bay connected to on-board SATA port.
4x 1TB WD Green WD10EARS in the drive trays configured as AHCI in BIOS and software RAID5 (i.e. striped with distributed parity) in Windows.

BTW, CrystalDiskMark 3.0.1 reports 232 MB/sec for seq reads and 4.5 MB/sec(!) for seq writes with 1GB files on the RAID5 array.

I know that I get sustained write throughput of ~50MB/sec when copying large files across my GigE network to this array. So there's clearly something wrong with this benchmark (a big clue was the sub 10% CPU utilization during the write tests).
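When a benchmark and a real network copy disagree like that, the write cache is the usual suspect. A quick sanity check is to time a write that is forced to disk with fsync; this is a rough Python sketch (file path, block size and total size are arbitrary choices of mine, and it's nowhere near as thorough as CrystalDiskMark):

```python
import os
import time

def seq_write_mb_s(path: str, total_mb: int = 256, block_kb: int = 1024) -> float:
    """Rough sequential-write throughput in MB/s. The fsync forces the data
    out of the OS write cache, so the number reflects the disks, not RAM."""
    block = os.urandom(block_kb * 1024)
    start = time.perf_counter()
    with open(path, "wb", buffering=0) as f:  # unbuffered raw writes
        for _ in range((total_mb * 1024) // block_kb):
            f.write(block)
        os.fsync(f.fileno())  # wait until the array has really committed it
    elapsed = time.perf_counter() - start
    os.remove(path)
    return total_mb / elapsed
```

If this number lands near the ~50MB/sec you see over the network, that points at the benchmark rather than the array.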

However, what I really want to do is use my Highpoint RocketRAID 2300 card in this box. I can't because of the mini-SAS connector (AKA SFF-8087) on the backplane cable (I know this can be removed, but it's too messy for my liking). I've looked into other RAID5 capable cards with SFF-8087 connectors, the cheapest is £120 i.e. as much as I paid for the MicroServer.
 
> Would these things be any good for running VMs if I added more RAM? Is the RAM fairly cheap? I'd like to have a box at home to run maybe 4 VMs simultaneously (DC, SQL Server, SharePoint WFE, Windows 7 Dev box).
>
> Would this be an ideal box for the job?

Maybe not ideal, but cheap and cheerful. There are some in-depth reviews of running both ESXi and XenServer on the net...

http://www.google.co.uk/search?hl=e...server+lab+server&aq=f&aqi=&aql=&oq=&gs_rfai=

ServersPlus offered a bundle for just this purpose - it included the ODD, Extra RAM and VMware ESXi on a USB stick. I'm sure they'll re-instate the bundle when the new SKU comes through.
 
I'll mount the 160GB at the top. Does that mean I boot using AHCI and ignore RAID completely? I'll delete the current config and set up the stripe in Windows.

The on-board SATA and eSATA ports actually hang off an integrated IDE controller, so Windows will use an IDE driver (rather than AHCI). Performance is more than adequate - I get over 100MB/sec seq. reads and writes to the supplied 160GB disk.
 
My WHS install went ****-up. Wouldn't have minded, but I hadn't even really used it, so I dumped it and have now gone back to my original W7 Pro x64 install. Even with only 1GB RAM the thing is quite responsive; currently copying all the data off my 1TB NAS onto the drives in the Microserver.
 
Quick question: I'm running Server 2008 R2 on gigabit LAN, and when copying a file to the server from my Mac I get around 37MB/sec, fluctuating up to 46MB/sec max. Thought it would be much higher!
 
> Quick question: I'm running Server 2008 R2 on gigabit LAN, and when copying a file to the server from my Mac I get around 37MB/sec, fluctuating up to 46MB/sec max. Thought it would be much higher!

What's the disk configuration on the server?

From my WHS to the Microserver I get:
~57MB/sec copying to C: (160GB drive on internal SATA)
~50MB/sec to D: (4x 1TB drives in Windows RAID5)

From Microserver to WHS I get:
~75MB/sec (4x 750GB, effectively JBOD)

From my WHS to my Windows 7 workstation I get:
~62MB/sec copying to C: (60GB Vertex 2 SSD)

From this I conclude that:

1. CPU is limiting factor for inbound transfers to RAID5 array.
2. CIFS/SMB performance on Windows Server 2008 is not as good as Windows Server 2003.

There are lots of complaints on the Vail (AKA 'WHS 2.0') beta forum about point 2. It's strange because Windows Server 2008 (and Vista) introduced some auto tuning features that were supposed to improve file copy performance.
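On point 1: software RAID5 computes XOR parity on the CPU for every stripe it writes, which is why writes (not reads) are the side that suffers. A toy Python illustration of the arithmetic only (function names are mine, and this is not how the Windows driver is actually implemented):

```python
from functools import reduce

def parity(chunks: list) -> bytes:
    """XOR parity across equal-sized data chunks -- the per-stripe work
    a software RAID5 driver does on every full-stripe write."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*chunks))

def rebuild(surviving: list, parity_chunk: bytes) -> bytes:
    """Recover a lost chunk: XOR the parity with all surviving chunks."""
    return parity(surviving + [parity_chunk])
```

Scale that XOR up to tens of megabytes per second across four spindles and the low-power CPU in these boxes plausibly becomes the bottleneck on inbound transfers.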
 