Good Cheap Server - HP Proliant Microserver 4 BAY - OWNERS THREAD

Soldato | Joined: 18 Oct 2002 | Posts: 6,673 | Location: Leicestershire
I'd not bother with RAID. I set my drives up as single drives, then used DrivePool...
Use Server Essentials, then add your machines using the Connector; it's loads easier and it will back them up for you.
Also add Hyper-V to play with.
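If you want to try Hyper-V, the role can be added from PowerShell in one line; a minimal sketch, assuming Server 2012 R2 or similar (it will reboot the box):

    # Add the Hyper-V role plus the management tools, then reboot
    Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart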
 
Soldato | Joined: 4 Mar 2008 | Posts: 2,608
So I've got all my drives changed over now from 4 x 320GB RAID 10 to 4 x 500GB RAID 10. Rebuild times were great, around 2 hours per swap. I had to do each swap with the server switched off though, as you have to press F1 at startup to rebuild and I couldn't find a rebuild option in the storage administrator.

My next problem is I can't find an option to expand the array in the storage administrator, only to create a new logical drive using the new free space. Any ideas?
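For reference, the HP SSA CLI will show exactly what the controller reports for the array and any unused space; a sketch, assuming the hpssacli package is installed (older releases call it hpacucli, same syntax):

    # Show every controller, array and logical drive, including unassigned space
    hpssacli ctrl all show config detail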
 
Soldato | Joined: 4 Mar 2008 | Posts: 2,608
Looks like I can't expand an array on the B120i controller.

That gives me three options:

1. Create a new logical drive from the free space and move a VM onto it.
2. Create a new logical drive from the free space and extend onto it using Disk Management as a striped volume (eek!).
3. Nuke it, create a new array, and restore from backup.

What would you do?
 
Soldato | Joined: 5 Jan 2009 | Posts: 4,759
Hey all. Messing around with my server and having fun. I've decided to run it as a DC and set up Essentials. In the meantime, I'm wondering what I should do about a second DC. For now I don't want to go Hyper-V/ESXi, as I don't want to grind the Celeron to a halt. However, I'm thinking of perhaps creating a second DC in a VM on my client PC. Is this a stupid idea, or could it work for a basic home domain? The PC is my sig rig (2500K, 16GB RAM, SSD), and obviously it would mean the second DC is not running 24/7. Would I run into many replication issues?
 
Soldato | Joined: 5 Oct 2004 | Posts: 7,395 | Location: Notts
Hey all. Messing around with my server and having fun. I've decided to run it as a DC and set up Essentials. In the meantime, I'm wondering what I should do about a second DC. For now I don't want to go Hyper-V/ESXi, as I don't want to grind the Celeron to a halt. However, I'm thinking of perhaps creating a second DC in a VM on my client PC. Is this a stupid idea, or could it work for a basic home domain? The PC is my sig rig (2500K, 16GB RAM, SSD), and obviously it would mean the second DC is not running 24/7. Would I run into many replication issues?

You'll be fine doing that. Just make sure you check replication when you fire up the second DC :)
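A quick way to do that from an elevated prompt on either DC (repadmin and dcdiag ship with the AD DS role):

    # Summary of replication health across all DCs
    repadmin /replsummary

    # Per-partner replication status for this DC
    repadmin /showrepl

    # Run just the replication tests in dcdiag
    dcdiag /test:replications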
 
Soldato | Joined: 5 Jan 2009 | Posts: 4,759
You'll be fine doing that. Just make sure you check replication when you fire up the second DC :)

Cool, I'll give it a go. I've done it all before in a virtual environment, but this is my first attempt at setting up a live environment from scratch. I realise doing it this way is a bit messy, but I'm not going to have more than one or two domain users, and only one device connected. Hyper-V for two virtual DCs would be overkill, I think.
 
Soldato | Joined: 5 Jan 2009 | Posts: 4,759
I'm trying to pin down why the server randomly makes lots of HDD noise. It sounds like masses of data is being written, but when I check Performance Monitor there is nothing being written to or read from the drives other than the SSD.

If I right-click on the disks and look at the deduplication properties, the drives suddenly go silent, but they often pipe up again.

I've only got about 50GB of data saved to the 8TB array so far, and I have one of the shares mapped on my client PC, but I'm not directly accessing it when the noise occurs. Is there any other way I can check why the drives are being so noisy? The server is under my desk next to my main PC and I have nowhere else to put it really, so the noise will drive me mad if it's like that all the time when idling.
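For the record, the same checks from PowerShell would be something like this (D: is just an example volume, and the Get-Dedup* cmdlets only exist if the Deduplication feature is installed):

    # Sample per-disk throughput every 2 seconds to see which spindles are active
    Get-Counter -Counter "\PhysicalDisk(*)\Disk Bytes/sec" -SampleInterval 2 -MaxSamples 15

    # Check whether a dedup background job (optimisation/scrubbing) is running
    Get-DedupJob

    # Current dedup state for the data volume (D: is an example)
    Get-DedupStatus -Volume "D:"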
 
Associate | Joined: 10 Jun 2014 | Posts: 227
I'm trying to pin down why the server randomly makes lots of HDD noise.

As you have the P410, it will probably have disk scrubbing enabled by default. When idle, there is a low-priority scan that checks the disks/RAID for errors. It's normal, and a good idea to leave it on.

If you have very large, slower drives then it may start the next scrub as soon as the previous one finishes, although usually it will finish and become idle again until the next cycle is due.
 
Soldato | Joined: 5 Jan 2009 | Posts: 4,759
As you have the P410, it will probably have disk scrubbing enabled by default. When idle, there is a low-priority scan that checks the disks/RAID for errors. It's normal, and a good idea to leave it on.

If you have very large, slower drives then it may start the next scrub as soon as the previous one finishes, although usually it will finish and become idle again until the next cycle is due.

OK, makes sense, and like you said, it's something to leave on. However, is there a way of making it run a little less often? The drives are 4TB SAS drives, but there's very little data on them yet.
 
Soldato | Joined: 5 Jan 2009 | Posts: 4,759
OK, makes sense, and like you said, it's something to leave on. However, is there a way of making it run a little less often? The drives are 4TB SAS drives, but there's very little data on them yet.

OK, so I believe I've found the setting you're referring to in the SSA: 'Surface scan analysis priority', I think? It only has three options: disabled, high, and idle (with a 0-30s timer slider). I've temporarily disabled it to see if it is in fact the cause of the drive noise. As soon as I access the drives via the server or a mapped drive, the noise stops; as soon as I disconnect, it starts up again. If the server stays quiet I'll know this is the cause, but I'm still at a loss, as I really don't want to leave it disabled.
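If the surface scan does turn out to be the culprit, it looks like the idle delay can be tweaked from the SSA CLI rather than disabling it outright; just a sketch, as the parameter names seem to vary between hpacucli/hpssacli versions, so I'd check the built-in help first:

    # Check the current surface scan settings (slot=0 is an example; find it with 'ctrl all show')
    hpssacli ctrl slot=0 show detail

    # Lengthen the idle delay before the scan kicks in (value in seconds)
    hpssacli ctrl slot=0 modify surfacescandelay=30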

Regarding write cache on the B120i, should it be enabled for SSDs?
 
Soldato | Joined: 5 Jan 2009 | Posts: 4,759
Also, what should I set as the read/write cache ratio on my 1GB BBWC? It's currently at the default 25/75.

Given that it has 1GB of memory, and it's going to be reading files quite often (being primarily a NAS), should I change it to 50/50?
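For reference, it looks like the ratio can also be set from the SSA CLI rather than the GUI; a sketch, assuming hpssacli and controller slot 0 (check the slot number with 'ctrl all show'):

    # Split the 1GB cache evenly between reads and writes
    hpssacli ctrl slot=0 modify cacheratio=50/50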
 
Associate | Joined: 10 Jun 2014 | Posts: 227
50/50 should be fine. It seems the scan, when enabled, is continuous. It would be less noticeable on SATA drives; SAS drives don't usually have any design consideration for noise reduction, only vibration.
 
Soldato | Joined: 5 Jan 2009 | Posts: 4,759
50/50 should be fine. It seems the scan, when enabled, is continuous. It would be less noticeable on SATA drives; SAS drives don't usually have any design consideration for noise reduction, only vibration.

That's very true, I'd never thought about that. Hmm, I wonder if I should leave it off and turn it on once a week or so. It's a shame I can't enable it from Windows; I'd have it as a scheduled task to run nightly instead.
 
Associate | Joined: 4 Feb 2009 | Posts: 1,368
Guys, can I get a sanity check? I have an N54L and an N36L, with six drives between them. One is currently running very lightly used file services, while the other runs a heavy (terabytes) rsync overnight. The two functions can't be combined on the current setup.

The sanity check is: is it worth replacing the hardware? Between them they must be burning 60 to 100 watts, while my new Skylake build uses less power at idle. Can anyone suggest a sane replacement (or replacements)?

I'm debating a bigger machine running VMs, versus new Microservers that aren't based on 5+ year old tech, or... I don't know. Ideas, people?
 
Associate | Joined: 3 Oct 2007 | Posts: 795
The Microservers are pretty low-power by themselves. I would venture that if you're using traditional hard disks, those are going to be the biggest power users.

You could of course replace everything with SSDs, but it depends how much you want to spend...
 
Soldato | Joined: 5 Jan 2009 | Posts: 4,759
Any other ideas on what I can check to improve my data throughput? I'm copying a single 10GB file from the local HDD and/or SSD to my server via an unmanaged gigabit HP switch. If I write to the server directly I get over 100MB/s no bother, and similarly on the local machine.

Both NICs are confirmed running at gigabit, so why, when I transfer a single large file over the network, am I only getting about 65-70MB/s?

The packet size is the default 1500 and I've disabled flow control on both NICs as a test. I realise a gigabit network doesn't mean I can transfer files at 1000Mbps, but surely if the source can read at, say, 150MB/s and the destination can write at 150MB/s, I should get a little more throughput than I'm currently seeing? Duplex is set to auto-negotiate at both ends, but as said, the switch and the OSes are reporting gigabit speeds.

I am only using one of the two NICs on the server - would it be worth enabling the second and using the NIC teaming features?

Thanks.
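For reference, teaming on Server 2012 R2 is built in and can be set up from PowerShell, though a single SMB copy is one TCP stream, so a team won't push one transfer past a single gigabit link. A sketch, assuming the adapters are named 'Ethernet' and 'Ethernet 2':

    # List the adapters to confirm their names and link speed
    Get-NetAdapter

    # Create a switch-independent team from the two onboard NICs
    New-NetLbfoTeam -Name "Team1" -TeamMembers "Ethernet", "Ethernet 2" -TeamingMode SwitchIndependent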
 
Man of Honour | Joined: 13 Nov 2009 | Posts: 11,596 | Location: Northampton
It's worth checking how good the network actually is with iperf.

On Windows 7, with a particular driver on my workstation, I would only see 750Mbit; with a different driver it peaks at 970Mbit.

I get sequential transfer speeds of 105-110MB/s over the network.
 

Deleted member 138126

Definitely benchmark the network with iperf. This rules out any inconsistencies with individual drives, or file fragmentation.
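Something like this, run on the server and the client respectively (iperf3 shown; the 192.168.1.10 address is just an example):

    # On the server
    iperf3 -s

    # On the client: a 30-second test against the server's IP
    iperf3 -c 192.168.1.10 -t 30

    # Repeat in the reverse direction to test the other path
    iperf3 -c 192.168.1.10 -t 30 -R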
 