My ESXi homelab

Man of Honour
OP
Joined
20 Sep 2006
Posts
33,993
Unfortunately the official shelves for the servers are about £100 each, but I wanted it all neat so I bit the bullet anyway.

I'm not sure if you can tell, but the QNAP NAS sits on a tray next to the little 8-port switch. The two trays will go into the two spare U slots; however, the top of the QNAP protrudes a few mm into the bottommost U. I'm sure I can make it work; I'll just have to look into it when they arrive, which may be some time as I suspect they're coming from the US via the UK distributor I use for SuperMicro kit.

The racking kit arrived yesterday, so I got to work putting the servers in. I was expecting some moulded plastic shelves; however, they're high-quality metal and were simple to put together. The only gripe is that there are only just enough screws to mount the servers, and one of them broke, so a few spares would have been nice.

The pictures aren't the best as it was getting dark and the light isn't brilliant in the room.

The rack kit pre-server install:

[Image: giLwgki.jpg]

There are two small brackets which screw onto each server, and these brackets then screw onto the rack kit. Then there's a little bracket to hold in the PSU. Quite neat! The first two in.

View from the rear with the first one installed.

[Image: hKaXlyd.jpg]

Then the second one in and job complete.

[Image: 9qKVeUF.jpg]

[Image: 09ehWIr.jpg]

As I thought, the QNAP was encroaching into the U that the bottom server's rack kit was to take up, so I've had to remove the tray and just leave the QNAP on the floor.

[Image: FHCFLuw.jpg]

Very, very happy. It just needs a bit of cable tidying at the back now, which I'll do another day.

I now have a smart electricity meter, and I'm happy to say that with a few VMs running the servers add about 1p an hour to my electricity bill. I can deal with that!
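
For anyone wanting to sanity-check that figure, here's a back-of-the-envelope version; both numbers below are assumptions for illustration, not readings from my meter or bill:

```python
# Rough sanity check on the ~1p/hour figure. Both values are assumed,
# not measured: pick your own draw and unit rate.
extra_draw_w = 70          # assumed extra draw with a few VMs running
tariff_gbp_per_kwh = 0.14  # assumed UK unit rate at the time

cost_per_hour_gbp = (extra_draw_w / 1000) * tariff_gbp_per_kwh
print(f"~{cost_per_hour_gbp * 100:.1f}p per hour")  # prints "~1.0p per hour"
```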
 
Associate
Joined
25 Jul 2019
Posts
85
@ChrisD.

Just bumping this as I am investigating setting up a home lab again, so I'm interested in how you're getting on with it all. Back in the day (when I used to do VMware architecture and consultancy) I had a full-blown lab with more enterprise-type kit: an HP C3000, an F5 load balancer, a NetApp 3140 with a couple of trays, etc. My requirements have now changed as I've moved away from VMware towards more cloud-orientated stuff (AWS, GCP, containerisation, etc.).

I'm trying to weigh up whether it's worth separating out the storage and compute (something lightweight like a NUC, with a QNAP presented over NFS) or going all-in-one (ESXi presenting local storage with something like FreeNAS or equivalent). I'll still run ESXi as the core either way. One of my current projects at work is OpenStack, so I'd layer that over the top of CentOS VMs, for example.
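
If I do go the QNAP-over-NFS route, presenting the export to ESXi looks pleasingly simple. A minimal sketch, driving esxcli from the host's bundled Python interpreter; the NAS address, export path and datastore name are all placeholders:

```python
import subprocess

# Sketch: mount a QNAP NFS export as an ESXi datastore, run on the
# host itself. All three values below are hypothetical placeholders.
NAS_IP = "192.168.1.50"     # QNAP address
EXPORT = "/share/vmware"    # NFS export on the NAS
DATASTORE = "qnap-nfs"      # datastore name as it will appear in ESXi

subprocess.run(
    ["esxcli", "storage", "nfs", "add",
     "--host", NAS_IP, "--share", EXPORT, "--volume-name", DATASTORE],
    check=True,
)
subprocess.run(["esxcli", "storage", "nfs", "list"], check=True)  # verify
```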

Loving your setup by the way; I don't think I need 10GbE on the backend, however (yes, I am jealous :cool:)
 
Associate
Joined
28 Feb 2008
Posts
472
Location
Northamptonshire

It all boils down to your budget.

Until very recently I was running solely off a whitebox build with an i7-3820, 64GB and a P410 RAID controller, with everything nested.

I've now added a SuperServer 5028D-TN4T with 128GB plus a 4-bay NAS as my main work testing area (still nesting), with the old box still running my media servers and home infrastructure.

The NUC will be limited to 32GB officially, but I've seen reports over at virtuallyGhetto of people running 64GB in them.
 
Associate
Joined
25 Jul 2019
Posts
85

I don't have a budget so to speak; I'm quite open in that respect (obviously I don't want to spend a ridiculous amount). I'm leaning towards a 'big box' approach as opposed to separation, as my backend bandwidth will be limited if I go for external storage. Luckily I have a spare bedroom, so I'm not too fussed about keeping it micro or keeping the noise down; it can hum away in a corner. That 5028D looks very sweet, I must say. I'm a huge fan of Supermicro (I used to spec up custom Supermicro NVMe servers via a reseller over here called Broadberry).

The debate now is whether I pick up something like a PowerEdge (or equivalent, as long as it can run ESXi 6.7; it doesn't necessarily have to be on the HCL), although I am now seeing a lot of builds based on Ryzen, etc. A must for me is local storage: a couple of SSDs for 'speedy' VMs, plus large SATA drives on a decent RAID controller. I'd still like to use VMFS. Ideally I don't want to rely on a separate VM for presenting the storage (FreeNAS running in a VM, for example); I'm worried about raw throughput, and having a dependency on one VM to present the main storage for the other VMs is a concern. Like you, I'd want 128GB in the server.
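
For my own reference, carving a local disk into a VMFS datastore from the ESXi shell goes roughly like this. A sketch only: the device path and datastore name are invented, and the device needs double-checking with `esxcli storage core device list` before running anything destructive:

```python
import subprocess

def sh(*args: str) -> str:
    """Run a host-side command and return its stdout."""
    return subprocess.run(args, check=True, capture_output=True, text=True).stdout

DEVICE = "/vmfs/devices/disks/t10.ATA_____Samsung_SSD"  # hypothetical device
NAME = "local-ssd-01"                                   # hypothetical name

sh("partedUtil", "mklabel", DEVICE, "gpt")
# getUsableSectors returns "<first> <last>"; the last sector sizes the partition.
start, end = sh("partedUtil", "getUsableSectors", DEVICE).split()
# AA31E02A... is the standard VMFS partition type GUID.
sh("partedUtil", "setptbl", DEVICE, "gpt",
   f"1 2048 {end} AA31E02A400F11DB9590000C2911D1B8 0")
sh("vmkfstools", "-C", "vmfs6", "-S", NAME, f"{DEVICE}:1")
```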

I think I may have answered my own question :o :D The world of homelab has certainly changed from back when I was doing it!

*edit

I am now looking at a Threadripper setup! :rolleyes:
 
Associate
Joined
28 Feb 2008
Posts
472
Location
Northamptonshire

The 12th-generation Dell servers are now hitting EOS/EOL, so you should start seeing a lot of them in the usual places.

Try https://uk.labgopher.com/ if you're heading down that route.
 
Man of Honour
OP
Joined
20 Sep 2006
Posts
33,993
This lab is changing somewhat, I will post full details once the migration has completed and I've written a blog on it.

However, I've upgraded my gaming PC to a 3960X along with 256GB of RAM (still waiting on the RAM). I've created four ESXi nodes within VMware Workstation and deployed a vCenter Server directly onto Workstation.

The Supermicros gave me a total GHz capacity (give or take) of 35; the four nested hosts have 4x vCPU each and are showing 60 GHz. Since they're vCPUs and not real cores it isn't really as simple as that (the 60 GHz is just 16 vCPUs multiplied by the 3960X's ~3.8 GHz clock), but so far with the few VMs I've tried it seems to work really, really quickly. There's also the added benefit that I can just close Workstation and suspend the VMs, whereas with the servers it was a ballache to shut them all down, hence the increased electricity bills.
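
For anyone wanting to replicate the nesting, it only takes a couple of .vmx settings per host (the first is the same thing the Workstation UI calls "Virtualize Intel VT-x/EPT or AMD-V/RVI"). A sketch that just prints the lines to add; the guestOS value assumes an ESXi 7.x guest:

```python
# Flags a Workstation .vmx typically needs to boot ESXi as a guest.
# Values are assumptions for an ESXi 7.x guest; adjust to taste.
NESTED_FLAGS = {
    "vhv.enable": "TRUE",              # expose VT-x/EPT to the guest
    "guestOS": "vmkernel7",            # identify the guest as ESXi 7.x
    "ethernet0.virtualDev": "vmxnet3", # NIC the ESXi installer recognises
}

for key, value in NESTED_FLAGS.items():
    print(f'{key} = "{value}"')
```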
 
Soldato
Joined
10 Oct 2005
Posts
8,706
Location
Nottingham

I picked up a cheap R620 the other day as another vSphere host: 1x 6c/12t CPU and 48GB of RAM for ~£120. Throw in a bit of storage and it will be fine for what I want; it isn't noisy at all (unless it's booting) and doesn't draw very much power.
 
Man of Honour
OP
Joined
20 Sep 2006
Posts
33,993
Just a little bit of RAM.

[Image: 7ENX5sc.jpg]

All installed. I know, the loop is poor and I need to clean the top rad, but it's functional.

[Image: edgQFvz.jpg]

Yes, I do need the RAM; there are plenty more VMs to deploy. Here is the nested VMware lab idling.

[Image: roV5cWO.png]

For those interested, it's 4x virtual ESXi hosts, 8 cores each and 60GB of RAM. Storage-wise it's running vSAN on an NVMe drive I have, plus an iSCSI LUN I've presented from my QNAP NAS. Also in Workstation is the vCenter appliance. Within the nested environment I have a Veeam Windows VM as well as vRA 8 (3x VMs).
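
The iSCSI side is only a few esxcli calls from the host's shell. A sketch with an assumed software-iSCSI adapter name and NAS portal; check the real adapter with `esxcli iscsi adapter list`:

```python
import subprocess

# Sketch: enable software iSCSI, point it at the QNAP, and rescan.
NAS_PORTAL = "192.168.1.50:3260"  # hypothetical QNAP target portal
ADAPTER = "vmhba64"               # assumed software-iSCSI adapter name

for cmd in (
    ["esxcli", "iscsi", "software", "set", "--enabled=true"],
    ["esxcli", "iscsi", "adapter", "discovery", "sendtarget", "add",
     "--adapter", ADAPTER, "--address", NAS_PORTAL],
    ["esxcli", "storage", "core", "adapter", "rescan", "--adapter", ADAPTER],
):
    subprocess.run(cmd, check=True)
```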

I use it for professional development (I work for VMware), and I like the fact that I can just suspend it all and fire up a game. Sadly, if I don't suspend it (and I've only tried Apex Legends) I get huge FPS drops, so it's not really sustainable.
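
The suspend step scripts nicely too, via vmrun (Workstation's bundled CLI). A sketch, with the install path being the usual Windows default rather than anything verified:

```python
import subprocess

# Suspend every running Workstation VM before firing up a game.
VMRUN = r"C:\Program Files (x86)\VMware\VMware Workstation\vmrun.exe"

out = subprocess.run([VMRUN, "list"], check=True,
                     capture_output=True, text=True).stdout
for line in out.splitlines():
    # First line is "Total running VMs: n"; the rest are .vmx paths.
    if line.lower().endswith(".vmx"):
        subprocess.run([VMRUN, "suspend", line, "soft"], check=True)
```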
 