Home lab rack, please help

Hi,

I have the following as my home lab:

4 x HP DL320e v2 32GB RAM each
1 x LSI 620J 24 x 2.5" SAS enclosure (my SAN) (three tiers of storage: 15k SAS, 10k SAS, SSD)
3 x HP 1810-24G switches
1 x Astaro Gateway
2 x APC Switchable PDUs with web interface
1 x Cisco 2800 series router

Now I need advice on two things:

1) I'm thinking about selling the DL320e v2 servers and going for the new Gen9 DL20s, as they support double the RAM. I'm looking to do a mini private cloud setup with the vRealize Automation suite. Has anyone got an idea whether I'll need more resources, and whether the 64GB servers are the better choice?

2) I've needed a suitable rack for AGES, but I have specific requirements and can't seem to find anything that meets them, apart from a company called ZPAS, which I'm not familiar with.

What I need is a 24U rack, black, with built-in QUIET fans at the top that are either triggered automatically by temperature or just on all the time without sounding like a tornado. A glass front door, maybe with a rubber seal, and the cabinet sealed with a filter or something at the bottom. I don't want a massive 1000mm-deep cabinet; 800x800 would be PERFECT.

Anyone got any suggestions?

Is my lab also lame with 32GB DL320e hosts? Should I get DL20s instead?

I want to create a self-service portal for myself (SCCM/vRealize Automation etc.) with scripts and templates to provision resources like VMs and virtual networks. Are 3x 32GB hosts enough? This could become 4x 32GB hosts if I bought another low-spec DL320e (Pentium dual-core or something) and used it just for the SAN storage via Storage Server 2012...
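To get a feel for the RAM question, here's a minimal sketch of the kind of template-driven script a self-service portal could front: it validates a VM request against the cluster's remaining RAM before handing the parameters to whatever actually deploys (PowerCLI, pyVmomi, a vRA blueprint). The template sizes, overhead figure, and function names are all invented for illustration, not taken from vRealize or SCCM.

```python
# Hypothetical self-service request check. Template names, sizes, and the
# per-host overhead figure are made-up illustration values.

TEMPLATES = {
    "small":  {"vcpus": 1, "ram_gb": 2,  "disk_gb": 20},
    "medium": {"vcpus": 2, "ram_gb": 4,  "disk_gb": 40},
    "large":  {"vcpus": 4, "ram_gb": 8,  "disk_gb": 80},
}

HOST_RAM_GB = 32   # per-host RAM in the current lab
N_HOSTS = 3        # compute hosts left if one box becomes the SAN
OVERHEAD_GB = 4    # rough per-host hypervisor/management reserve (assumed)

def capacity_gb():
    """Total RAM available to VMs across the cluster (rough figure)."""
    return N_HOSTS * (HOST_RAM_GB - OVERHEAD_GB)

def provision_request(name, size, used_gb):
    """Return the spec to deploy, or raise if the cluster is out of RAM."""
    spec = dict(TEMPLATES[size], name=name)
    if used_gb + spec["ram_gb"] > capacity_gb():
        raise RuntimeError("cluster RAM exhausted")
    return spec

# Under these assumptions, 3 x 32GB hosts leave about 84GB for VMs,
# before the vRealize appliances themselves take their share.
print(capacity_gb())
print(provision_request("web01", "medium", used_gb=80))
```

The point of running the numbers: the vRA appliances are themselves multi-GB VMs, so on 3x 32GB the suite eats a noticeable chunk of that budget before any self-service workloads land.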

Inputs welcome!

I'm aware this is a lot of cash, but this is really such high-level stuff that the experience would pay for itself pretty quickly.

A good, quiet, nice-looking rack with fans, dust protection, and a see-through door would be a good start! I want to be able to see all the lights flashing, so one of those APC racks with wood finishes is out of the question. Also, maybe the DL20 Gen9s are quieter than the DL320e v2s?
 
800 is OK.

This is why I go with smaller-footprint servers like the DL320e and DL20.

The dimensions are tiny, so I don't need a really deep rack. The disk shelf is only 486mm deep, and that's the deepest kit I'd ever want at home.

So you think 3x (32GB, 4-core Xeon) is enough to run all of vRealize?

I can buy another box with a cheap dual-core CPU for the SAN; then it would be 4x (32GB, 4-core Xeon).

With the DL20... there is a huge cost, but 64GB per node.

The alternative is a single ZBook 15 G3 with 64GB ECC and big SSD storage, and doing it all from bed, but for some reason the G3 only comes with a 1080p screen right now, which is no use.

PS My SAN uses MPIO and LACP teaming on the HP switch; it works and load-balances via round-robin from ESXi, and I get a confirmed 2.4GB/s to the ESXi hosts for fast VM access.
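For anyone unfamiliar with round-robin MPIO: ESXi's round-robin path policy simply rotates I/Os across the active paths, which is why the per-path bandwidth roughly adds up as long as the storage can keep up. A toy sketch of that distribution (the path names follow the ESXi runtime-name format but are made up):

```python
from itertools import cycle
from collections import Counter

# Toy model of round-robin path selection: each I/O goes to the next
# active path in turn, so load spreads evenly across paths.
paths = ["vmhba33:C0:T0:L0", "vmhba33:C1:T0:L0", "vmhba34:C0:T0:L0"]
next_path = cycle(paths)

# Issue 3000 I/Os and count how many each path served.
counts = Counter(next(next_path) for _ in range(3000))
print(counts)  # each of the three paths serves exactly 1000 I/Os
```

Real round-robin implementations switch paths after a configurable number of I/Os or bytes rather than every single request, but the even spread is the same idea.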

All this only happened because of a price mistake, where I got a new SAS disk shelf for 280 EUR and 10k SAS Seagate disks for 30 dollars each.
 
A DL320e is 75cm deep.

Edit: Oh right, there's a short version as well. I still don't see what you're gaining over building some kit yourself, especially if you want something quiet.

I have lots of free iLO 4 licenses... I love HP iLO; for remote management it's so good. If I build my own box, I don't know whether I can get something as good. What do you recommend, a Supermicro server or an Intel server chassis?

What about iLO? iLO 4 rocks, even on the cheap servers.
 
What are you planning on using your lab for where you're in and out of iLO all day?

Supermicro boards have IPMI and/or vPro for management. If budget is an issue (doesn't sound like it is) then there's a bunch of those ASRock Rack boards that may or may not do the job.

I just think buying actual ProLiant servers for a home rack is a huge waste of money, unless they've fallen off a truck and you're getting them cheap. But maybe not even then if you need to spend a fortune on a cabinet to make them bearable to live around.

Power saving, mainly: starting and stopping hosts, and using DPM on a cluster to keep consumption low.
 