Good Cheap Server - HP Proliant Microserver 4 BAY - OWNERS THREAD

I didn't break any clip seals on the Seagate unit either. Just used two really small flathead screwdrivers.

Popped the base cover off the Seagate unit easily; there are no seals etc. that break, and as long as you're careful it doesn't cause any damage or marks, so when I put the cover back on you cannot tell it's been removed.
 
Let's also look at the best-practice route: RAID is no replacement for backup... I'm sure you know that though, right? ;)

It's the exact reason my Gen8 is set up the way it is.

Windows Server 2012 R2, Essentials role

256GB SSD for OS
256GB for VM store
3x 4TB WD Red (data)
1x 6TB WD Red (data)
1x 500GB SSHD (dump)

The data drives are all running from the onboard B120i as single drives, and I then use DrivePool and Scanner. I can take any drive out of the machine, put it into another machine and the data is still there, as each drive holds complete copies of files on plain NTFS. DrivePool lets me control duplication, and can be set to keep two copies of files, or a copy on every drive that is part of the pool.

Yes, there is the added layer of having a Windows OS on the box, but you can also add Hyper-V to give you a VM running DSM if you need that for other machines/TVs/media players etc.
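For a rough idea of what that duplication policy costs in space (plain arithmetic on the drive list above, nothing DrivePool-specific): keeping two copies of every file simply halves the pool's usable capacity.

```python
# Rough arithmetic only - not a DrivePool API. Shows what 2x whole-file
# duplication costs in usable space for the pool listed above
# (3x 4TB + 1x 6TB WD Red; the 500GB SSHD dump drive sits outside the pool).

drives_tb = [4, 4, 4, 6]

def usable_tb(drives, copies):
    """Usable space when every file is stored `copies` times."""
    return sum(drives) / copies

print(f"No duplication: {usable_tb(drives_tb, 1):.0f} TB usable")  # 18 TB
print(f"2x duplication: {usable_tb(drives_tb, 2):.0f} TB usable")  # 9 TB
```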
 
Oh, OcUK have the bare drives available too for £180.00:

https://www.overclockers.co.uk/seag...-hard-drisk-drive-st8000as0002-hd-330-se.html

OcUK are missing the newer "NAS" drives though.

Seagate 8TB NAS HDD SATA 6Gb/s 256MB Cache Internal Hard Drive
Mfr#: ST8000VN0002

It'd be interesting to find out what the real differences are between the ST8000VN0002 and the ST8000AS0002; the price difference is huge, £180 vs £320.

I have to wonder how they test the MTBF to be 91 years (800k hours) of sustained use.

Would be interesting to challenge that in 90 years :D
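For what it's worth, an MTBF figure like that is normally derived from a large fleet test rather than anyone running a drive for 91 years: total drive-hours divided by observed failures. A sketch with made-up fleet numbers that happen to land on 800k hours:

```python
import math

HOURS_PER_YEAR = 8766  # 365.25 days

def mtbf_hours(drives, hours_each, failures):
    """MTBF estimated as total drive-hours divided by observed failures."""
    return drives * hours_each / failures

# Hypothetical fleet test: 1,000 drives run for 2,400 hours (~100 days)
# with 3 failures observed.
mtbf = mtbf_hours(1000, 2400, 3)            # 800,000 hours
years = mtbf / HOURS_PER_YEAR               # ~91 years, the figure quoted
afr = 1 - math.exp(-HOURS_PER_YEAR / mtbf)  # ~1.1% annualised failure rate

print(f"MTBF {mtbf:,.0f} h (~{years:.0f} years), AFR ~{afr:.1%}")
```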




One difference I can see straight away is that it has twice the cache. I don't know what else.
 
Both my N54L and T20 are running ESXi; neither of them has had a monitor attached apart from the initial setup, and everything is then managed remotely from my laptop or main PC.

It's quite manageable without, but I spend a lot of time away from home, so being able to remotely resolve more fundamental issues is quite important to me. I could live without iLO, but I would rather not. My main reason for choosing the Gen8, though, was that I already had a Xeon 1230 v2 that needed a new home. The next box will possibly be the Dell.
 


My server isn't really used for backup; it's mostly used for media storage, MKVs and MP3s. Photos too, but they're a backup of the photos on my PC, so I still have them all safe (and I've copied them to my external 4TB drive for now, until I get my NAS back up properly once I've decided what to do with the drives etc.).
 
Don't even think about running 8TB disks in RAID 5 - when one goes, the rebuild will probably take over a week. Factor in the odds of a URE during that time and boom goes the array.
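A back-of-the-envelope version of that URE argument, using the commonly published spec figures of one unrecoverable error per 10^14 bits read for consumer/archive drives and per 10^15 for enterprise drives (real-world rates are often better than spec):

```python
import math

def p_ure_during_rebuild(surviving_drives, drive_tb, ure_per_bit):
    """Chance of at least one URE while reading every surviving drive in full."""
    bits_read = surviving_drives * drive_tb * 1e12 * 8
    return 1 - math.exp(-bits_read * ure_per_bit)

# 4-bay RAID 5 of 8TB drives: three surviving drives must be read in full to rebuild.
print(f"1 in 1e14 spec: {p_ure_during_rebuild(3, 8, 1e-14):.0%}")  # ~85%
print(f"1 in 1e15 spec: {p_ure_during_rebuild(3, 8, 1e-15):.0%}")  # ~18%
```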
 
Not all 8TB drives are created equal...

RAID 5 on the enterprise-grade drives... go for it!

The archive drives are a nightmare... and slow like you say.

An 8TB enterprise drive will rebuild in under 48 hours. Possibly under 24.
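Rough arithmetic behind those figures, assuming the rebuild amounts to a full sequential pass over the drive and the controller sustains the rates shown (both assumptions; the replies below suggest a B120i won't get near the higher rate):

```python
def rebuild_hours(capacity_tb, rate_mb_s):
    """Hours for a full sequential pass over the drive at a sustained rate."""
    return capacity_tb * 1e12 / (rate_mb_s * 1e6) / 3600

print(f"8TB at 100 MB/s: ~{rebuild_hours(8, 100):.0f} h")  # ~22 h - 'possibly under 24'
print(f"8TB at  50 MB/s: ~{rebuild_hours(8, 50):.0f} h")   # ~44 h - 'under 48'
```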
 

On an enterprise controller with decent cache, no load etc. - possibly. On a B120i, not a hope in hell.

4TB drives in a DX4000 rebuilt in about 3 days.
 
It does depend on the controller - but my migration from full 3TB drives to 4TB drives in a Gen8 with the stock controller, running XPEnology, has been taking 12-18 hours per rebuild.

On the flip side, the first build and expansion of a six-drive 3TB array on a HighPoint 2720 is taking nearly 48 hours a pop.

The pro SANs I've worked with have been prioritised for speed rather than data density, with 10k or 15k drives and large caches.
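For comparison, running the same sum backwards over the times quoted in this thread gives a feel for the effective rebuild rates they imply:

```python
def effective_rate_mb_s(capacity_tb, hours):
    """Implied sustained rebuild rate from a drive size and rebuild time."""
    return capacity_tb * 1e12 / (hours * 3600) / 1e6

print(f"4TB in 15 h  -> ~{effective_rate_mb_s(4, 15):.0f} MB/s")   # Gen8 stock controller, XPEnology
print(f"4TB in 72 h  -> ~{effective_rate_mb_s(4, 72):.0f} MB/s")   # DX4000, ~3 days
print(f"8TB in 168 h -> ~{effective_rate_mb_s(8, 168):.0f} MB/s")  # the 'over a week' worst case
```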
 
Does anybody know if the

Anker® Uspeed PCI-E to USB 3.0 2-Port Express Card

is compatible with XPEnology?

And can a PCIe card be installed without having to take the motherboard out?
 

Er, that would be tricky.

But you can just unplug everything, slide the motherboard tray forward, put the card in, slide it back and plug everything back in.
Quite easy, although some wires are a little fiddly to get out (the PSU cable).
 

OK, thanks for that. I wasn't sure if there was enough clearance to push the motherboard back in with a PCIe card plugged in - i.e. whether other parts needed removing to the left of the hard drive cage. But I'm assuming not, from what you've said?
 
I've had a SATA card in there without issue; half-height is required though.

I'm not seeing a half-height plate for the card you suggested - it may be worth looking for one that's half-height, or get a Dremel and an old PCI card you don't need :D
 

Good point! One of the reviews mentioned low profile, but there's no mention of it in the specs anywhere. I've now ordered a Dynamode instead. Hopefully it's compatible with XPEnology.
 
How would you suggest I do this, guys, with the least downtime possible...

HP Gen8 with 4x 320GB disks in RAID 10, which I have 3 VMs running on.

I'm running out of space, so I want to replace the disks with 4x 500GB, again in RAID 10, as I have a load of these on the shelf.

Is there an easy way to migrate the RAID, or am I looking at backing up and restoring? The VMs are all in active use, so I don't want to keep them out of action for too long.
 
Quickest (and better practice) - backup, test backup, delete array, replace disks, create array, restore backup.

No downtime - 4 rebuilds while replacing the disks one by one, then expand the array.

Then expand the volume to make use of the additional array capacity.
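Rough numbers for the no-downtime route, assuming each RAID 10 member rebuild copies roughly its 320GB mirror partner and the controller sustains about 60 MB/s under VM load (both figures are assumptions for illustration):

```python
def hours_to_copy(gb, rate_mb_s):
    """Hours to copy a given amount of data at a sustained rate."""
    return gb * 1e3 / rate_mb_s / 3600

per_swap = hours_to_copy(320, 60)  # ~1.5 h per disk at an assumed 60 MB/s under VM load
print(f"~{per_swap:.1f} h per swap, ~{4 * per_swap:.0f} h of degraded running for all four swaps,")
print("plus the array expansion and volume extension afterwards.")
```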
 