My ESXi homelab

Hi all,

After gaining my VCP6.5-DCV I decided to pursue the VCP6-NV. To do so, I decided not to bother with a nested environment and go for a proper homelab capable of 10Gbit. The VMUG subscription gives me access to vSphere Enterprise Plus, vSAN, NSX, vRO licences, etc., so I thought I'd build a little vSAN cluster.

I know a lot of people run Intel NUCs, but they lack 10Gbps NICs and are limited to 32GB of RAM per node. A vSAN-enabled host consumes around 10GB of RAM initially, and the 10GB requirement for the vCenter Server doesn't leave much headroom for other VMs. While they're great little boxes, I wanted something capable of more. The hosts I am using are Supermicro SYS-E200-8D micro servers. They are small (think thin-client size), have a 6-core/12-thread 1.9 GHz Xeon processor, take up to 128GB of ECC RAM (or 64GB non-ECC), and feature two 10Gb NICs as well as another two 1Gb NICs, plus IPMI, which allows remote console access. They're not cheap sadly, coming in at around £800 a node, which doesn't include RAM or any drives. :(
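To put some rough numbers on the headroom argument, here's a back-of-envelope sketch using the figures quoted above (~10GB vSAN overhead per host, ~10GB for the vCenter appliance). These are the post's rough numbers, not proper vSAN sizing maths, and the function name is just for illustration.

```python
# Back-of-envelope guest-VM headroom: 32GB NUC-class node vs 64GB E200-8D.
# The 10GB vSAN overhead and 10GB vCenter appliance figures are the rough
# numbers from the post above, not exact vSAN sizing calculations.

def guest_headroom_gb(node_ram_gb, vsan_overhead_gb=10, vcenter_gb=10,
                      hosts_vcenter=True):
    """RAM left for guest VMs on the node that also runs vCenter."""
    return node_ram_gb - vsan_overhead_gb - (vcenter_gb if hosts_vcenter else 0)

print(guest_headroom_gb(32))  # 12 -> a 32GB NUC leaves ~12GB for workloads
print(guest_headroom_gb(64))  # 44 -> a 64GB E200-8D leaves ~44GB
```

Even ignoring ESXi's own footprint, the 32GB node is left with barely enough for a handful of lab VMs, which is the point being made above.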

Luckily I managed to get hold of some cheap RAM, which gave me 64GB in total for each host. I then bought three Crucial 1TB SSDs to act as the vSAN capacity tier and three 250GB Samsung NVMe SSDs for the caching tier.

Finally I needed a switch capable of 10Gbps. Since I was already invested in the UniFi range, I went for the 16-XG. This gives me three 10Gbit RJ45 interfaces (since the servers are copper only) plus a fourth as a downlink to the rest of my network. Already having a QNAP NAS, I bought a dual 10Gb SFP+ PCIe card and some DACs, which occupy two of the remaining SFP+ ports. The NAS acts as a Plex server as well as handling some automation for home media. The reason I wanted it at 10Gb is to quickly back up the VMs using Veeam, plus also use it as a datastore if I've managed to break the vSAN storage. Throughput-wise on the NAS I see between 250-350 MB/s, which isn't too shabby for four spinning-rust 6TB WD Reds.
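A quick unit conversion shows why gigabit wouldn't have cut it for the NAS: the observed 250-350 MB/s array throughput is well above what a 1Gb link can carry, but comfortably under the 10GbE line rate. This ignores protocol overhead, so treat the ceilings as theoretical maxima.

```python
# Link-speed sanity check for the NAS: four WD Reds doing 250-350 MB/s
# would saturate gigabit (~125 MB/s line rate) but sit well under 10GbE.
# Pure unit conversion; real-world throughput is lower due to overheads.

def link_ceiling_mb_s(gbit_per_s):
    """Theoretical line rate in MB/s (1 Gbit/s = 125 MB/s)."""
    return gbit_per_s * 1000 / 8

nas_throughput = 300          # midpoint of the observed 250-350 MB/s range
print(link_ceiling_mb_s(1))   # 125.0 -> gigabit would bottleneck the array
print(link_ceiling_mb_s(10))  # 1250.0 -> plenty of headroom at 10Gb
print(nas_throughput > link_ceiling_mb_s(1))  # True
```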

Unfortunately the Supermicro servers are far from quiet; I would not recommend them if you're having them in the same room, unless you like 40mm fan noise. You can fit Noctua fans, but since the CPU's heatsink is passive, they don't provide enough static pressure to keep the CPU cool under medium and higher load, so stick with the stock ones! Supermicro do offer an HSF assembly which does fit, however I haven't tried it yet.

I did eventually pass my VCP6-NV; next on my list is the VCAP Deploy in both the DCV and NV tracks, although I'm going to wait until March for the new 2019 exams to come out.

This is how it all looks at the moment after moving it all upstairs to a spare bedroom.

[Image: bHPl0hO.jpg]


Once the door is closed I can't hear any fan noise, so I'm happy it's moved up there from being in the corner of the living room in a tiny closet, getting somewhat toasty!

This is what the vCenter environment looks like. It's pretty barren at the moment, as I've been building NSX while blogging it, which adds a fair amount of time to the whole process! As you can see, I'm already consuming ~82GB of RAM across the cluster, which would not leave much room if I were limited to 32GB of RAM per host. To be completely fair, though, it is possible to reduce the RAM that vCenter requires, as well as remove the NSX VM memory limits.

[Image: zPr9u7m.png]


I have started a blog if anyone wants to have a look. It's early days yet and the articles I've written so far still need a little work, but they have generally received positive feedback on Reddit. When vExpert 2020 opens I hope to apply and be awarded it, which would be extremely cool.

https://virtualinsanity.org/

The next step is to get a small 19" cabinet and tidy it all up, maybe something on casters so it can be wheeled around, as I may at some point do some demonstrations with it.

If anyone has any VMware questions please let me know, I've been working with it for over 10 years now so I've learnt a fair amount!

Thanks for reading.
 
Nice - I was only saying yesterday in another thread that I'd got rid of my old bulky homelab and would like something smaller, and these may well fit the bill. We're from a very similar background too: I'm studying for the VCAP Deploy and haven't done the VCP-NV yet, but I did redeploy NSX this weekend in production, so I'm seriously considering going for it.

I also like the idea of a smaller vSAN cluster at home - I'm currently POC'ing a large production vSAN cluster, so it would be good to get one on the go as it's been a few years since I've deployed one, and that would tie in nicely with the VCAP.

Good luck with vExpert. I keep meaning to do something like this in VMware/Microsoft/Cisco but can never find the time for blogging. Hoping to take my first VCAP exam at VMworld.
 
Thank you for that - I'd spotted the servers in your pic in another thread and I wondered what make/model they were.
 
As above, I bought a three-year VMUG subscription. Veeam will also give you an NFR licence, but annoyingly it only allows two sockets, so currently I can only back up VMs on two of the hosts.
 
> Nice - I was only saying yesterday in another thread that I'd got rid of my old bulky homelab and would like something smaller, and these may well fit the bill. We're from a very similar background too: I'm studying for the VCAP Deploy and haven't done the VCP-NV yet, but I did redeploy NSX this weekend in production, so I'm seriously considering going for it.
>
> I also like the idea of a smaller vSAN cluster at home - I'm currently POC'ing a large production vSAN cluster, so it would be good to get one on the go as it's been a few years since I've deployed one, and that would tie in nicely with the VCAP.
>
> Good luck with vExpert. I keep meaning to do something like this in VMware/Microsoft/Cisco but can never find the time for blogging. Hoping to take my first VCAP exam at VMworld.
Cheers! Oddly enough there was a fair amount of 'normal' switching in the VCP6-NV exam, which I wasn't expecting. Probably to catch people off guard, I guess!

If you know networking at CCNA level or above, there's just the security to get your head around, plus some example diagrams and faults to work through. I remember when I sat down for the exam, I got to question 1 and didn't have a clue. I thought, fine, gather your thoughts, 2 will be fine. Again, no clue! Same again for 3; by that point I was thinking, what on earth?!? But luckily I sailed through the rest and got 399/500.
 
> Cheers! Oddly enough there was a fair amount of 'normal' switching in the VCP6-NV exam, which I wasn't expecting. Probably to catch people off guard, I guess!
>
> If you know networking at CCNA level or above, there's just the security to get your head around, plus some example diagrams and faults to work through. I remember when I sat down for the exam, I got to question 1 and didn't have a clue. I thought, fine, gather your thoughts, 2 will be fine. Again, no clue! Same again for 3; by that point I was thinking, what on earth?!? But luckily I sailed through the rest and got 399/500.

Yeah, they recommend CCNA-level knowledge. I'm heavily into Cisco at work, so it shouldn't be too difficult.
 

So £200 a year?

OK, thanks. I'm personally moving my few personal servers from ESXi to Proxmox, and when I get round to buying the AMD Ryzen hardware for my home virtual server, that will go to Proxmox as well. I started to get annoyed with the lock-out of certain features in the free ESXi and the very small supported hardware list, plus I think £200 a year is excessive for what I use it for, as it's non-commercial use.
 
This is really interesting. My homelab has grown a fair bit over the years, as I need to keep up to date with the various products my day job entails, and I've learnt VMware just by frigging around and reading blog posts similar to your own. I need a lot of VMs and am now up to 4 hosts and 38 VMs, with a spare server waiting to become either FreeNAS or another unRAID box. Took the plunge on the VMUG annual licence, and running everything on ESXi 6.7 U1 compared to 5.5 is really smooth. Must admit I've hit a wall with the newer 6.7 functionality, and I don't really have the time to invest in learning more about VMware as I'll never use it in my regular work.

I've added 4-port NICs to each host and picked up an HP ProCurve 1810G-24 from eBay, as well as the all-important differently coloured Cat6 cables. All the hosts are using local datastores, but I may dip my toe into shared storage on the spare box - I think vSAN is probably overkill right now, as is HA. I don't have a router, so I'm debating reading up on pfSense. Also, every VM is on my home network, so I need to get my head around VLANs (which the switch is capable of). I've got three of the nodes in a cluster but am basically just clicking buttons now when it comes to DRS and Distributed Switches and the like.

I haven't managed to royally screw everything up yet, but I would like vMotion traffic between the hosts to use particular NIC ports and switch ports. Also, I want some of the VMs to run on their own subnets, but I'm banging up against a wall there.
 
If you want to dive into the networking side, I would definitely look at utilising NSX, which you should have with your VMUG licence, and you can do SDN instead of traditional networking. In addition, definitely get one of your interfaces doing dedicated vMotion. We're scoping out getting a 10Gb backplane because vMotion is so slow in our environment due to bandwidth limitations.
 
> So £200 a year?
>
> OK, thanks. I'm personally moving my few personal servers from ESXi to Proxmox, and when I get round to buying the AMD Ryzen hardware for my home virtual server, that will go to Proxmox as well. I started to get annoyed with the lock-out of certain features in the free ESXi and the very small supported hardware list, plus I think £200 a year is excessive for what I use it for, as it's non-commercial use.
ESXi isn't aimed at home users; it's aimed at the enterprise, offers class-leading features, and so doesn't come cheap. I used Proxmox once, and while it did the job, I found the UI pretty poor in comparison.
 
Getting the vMotion traffic isolated is top of my list, as I doubt I'll be touching NSX, if I'm honest. Each of the motherboard NICs on the hosts works just fine with ESXi 6.7, surprisingly, so I'm left with four 1Gb NICs to play with. Tagging port #4 in vCenter as vMotion traffic is what I have been doing so far, but then I started looking at clustering the hosts and using distributed switches and got a bit lost. Part of my problem is not knowing the best approaches with the kit I've got, really.
 
> Getting the vMotion traffic isolated is top of my list, as I doubt I'll be touching NSX, if I'm honest. Each of the motherboard NICs on the hosts works just fine with ESXi 6.7, surprisingly, so I'm left with four 1Gb NICs to play with. Tagging port #4 in vCenter as vMotion traffic is what I have been doing so far, but then I started looking at clustering the hosts and using distributed switches and got a bit lost. Part of my problem is not knowing the best approaches with the kit I've got, really.

What kit do you have exactly (including networking), and what do you want to achieve? For home use, 1Gbit is fine for vMotion; just be a little patient. :)

I run everything on the same vmnic - management, vSAN and vMotion. It's not ideal, but it's a homelab, so it doesn't need to be 'perfect'. My servers are copper only, and the RJ45 SFP+ modules for my switch are about £150 or so, each!
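To show what "be a little patient" means in practice, here's a rough floor estimate for moving a VM's memory over 1Gb versus 10Gb. The 80% link-efficiency figure is my assumption, and this ignores pre-copy memory dirtying and protocol overhead, so real vMotion times will be longer.

```python
# Rough vMotion timing at 1Gb vs 10Gb: time to copy a VM's active memory
# once. Ignores pre-copy dirtying, compression and protocol overhead, and
# the 80% efficiency figure is an assumption, so treat these as floors.

def vmotion_seconds(vm_ram_gb, link_gbit, efficiency=0.8):
    """Seconds to move vm_ram_gb of memory over a link at given efficiency."""
    usable_mb_s = link_gbit * 125 * efficiency  # 1 Gbit/s = 125 MB/s
    return vm_ram_gb * 1024 / usable_mb_s

for ram in (8, 32):
    print(f"{ram}GB VM: {vmotion_seconds(ram, 1):.0f}s at 1Gb, "
          f"{vmotion_seconds(ram, 10):.0f}s at 10Gb")
```

Even a modest 8GB VM takes over a minute at 1Gb, which is fine for a homelab but explains why busy production clusters want 10Gb for vMotion.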

On another note, I've just ordered a StarTech 19" rack, a PDU, a shelf for the QNAP, and I'll be ordering the Supermicro shelves for the servers; should neaten it all up somewhat!
 
Lovely little setup.

How do you find the cert exams? If you've been through any others, how do they compare in your opinion?
 
Here's what I've got. My day job is systems management and monitoring, so I've been trying to build a mini datacenter over the years. The VMs are a mixture of Windows and Linux flavours, and the others monitor those instances as well as the hypervisors. Some of those guys on Reddit actually have a home datacenter, which is pretty amazing.
  • ThinkServer TS140 - Intel Xeon E3-1225 v3 @ 3.20GHz, 4 logical processors, 32GB RAM, onboard NIC + four 1Gb NICs
  • ThinkServer TS140 - Intel Xeon E3-1226 v3 @ 3.30GHz, 4 logical processors, 32GB RAM, onboard NIC + four 1Gb NICs
  • HP Z620 Workstation - Intel Xeon E5-2650 @ 2.00GHz, 32 logical processors, 96GB RAM, onboard NIC + four 1Gb NICs
  • HP Z620 Workstation - Intel Xeon E5-4620 v2 @ 2.60GHz, 32 logical processors, 96GB RAM, onboard NIC + four 1Gb NICs
  • (Unused) ThinkServer TS140 - Intel Xeon E3-1225 v3 @ 3.20GHz, 4 logical processors, 32GB RAM, onboard NIC + four 1Gb NICs
  • HP ProCurve 1810G-24. This switch is connected to my home network using a homeplug, as the servers are upstairs and there's no way to run a cable.

What I'm stuck on:

  • Each host shares a connection to the 24-port switch. I would like vMotion traffic not to interfere with the home network, as I do move VMs around a lot.
  • For the other spare NICs going into the switch, can I team them up for better throughput between themselves?
 
Can I also ask which QNAP you've got? I had to sacrifice a TS140 when the last Z620 came, as the VMUG licence is restricted to a (quite reasonable) 6 sockets. I quite like the idea of shoving 4 disks in and that's it. I do need some shared storage, but reading the FreeNAS forums does not fill me with confidence.
 
> Lovely little setup.
>
> How do you find the cert exams? If you've been through any others, how do they compare in your opinion?

Tough but fair. The questions are all based on VMware material you can find online. If you have a bit of experience, watch some training videos and get the official cert book, and you should be fine. I found the Microsoft ones annoying, and the Cisco ones again tough but fair.

> What I'm stuck on:
>
> Each host shares a connection to the 24-port switch. I would like vMotion traffic not to interfere with the home network, as I do move VMs around a lot.
> For the other spare NICs going into the switch, can I team them up for better throughput between themselves?

Create a dedicated VLAN for vMotion traffic. If you mean LACP for teaming, then it's largely pointless, IMO.
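The reason LACP rarely helps here is that hash-based teaming pins each flow to a single uplink, so one vMotion or NFS stream never goes faster than one NIC. This is a simplified sketch of an IP-hash-style policy, not the actual vSwitch algorithm, and `pick_uplink` is a made-up name for illustration.

```python
# Why LACP doesn't speed up a single stream: hash-based teaming maps each
# flow (here, a source/destination IP pair) to exactly one uplink, so a
# single vMotion transfer is capped at one NIC's speed regardless of team
# size. Simplified illustration; real vSwitch hashing policies differ.

def pick_uplink(src_ip, dst_ip, uplinks):
    """Choose an uplink by hashing the source/destination pair."""
    return uplinks[hash((src_ip, dst_ip)) % len(uplinks)]

team = ["vmnic0", "vmnic1", "vmnic2", "vmnic3"]

# The same host-to-host pair always lands on the same uplink:
a = pick_uplink("10.0.0.1", "10.0.0.2", team)
b = pick_uplink("10.0.0.1", "10.0.0.2", team)
print(a == b)  # True: one flow, one 1Gb link, however many NICs you team
```

Teaming still helps aggregate throughput across *many* flows and gives failover, which is why it's worth doing anyway; it just won't make a single vMotion faster.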

> Can I also ask which QNAP you've got? I had to sacrifice a TS140 when the last Z620 came, as the VMUG licence is restricted to a (quite reasonable) 6 sockets. I quite like the idea of shoving 4 disks in and that's it. I do need some shared storage, but reading the FreeNAS forums does not fill me with confidence.
It's a TVS-671, although I am now looking at rackmount QNAPs.
 
> On another note, I've just ordered a StarTech 19" rack, a PDU, a shelf for the QNAP, and I'll be ordering the Supermicro shelves for the servers; should neaten it all up somewhat!

The rack arrived, and I built it up quickly thanks to my trusty Bosch electric screwdriver (honestly, it's one of the best things I've ever bought!). The cabinet is high quality and has carry handles plus wheels; very impressed with it. Also got a patch panel and some other bits.

Here's the rack, built last weekend. The fact it fits under the desk is a complete fluke!

[Image: hyGI7nX.jpg]


Tonight some more parts arrived, so I decided to get started with it.

[Image: JoLyH0z.jpg]


More bits in.

[Image: WZmYiL6.jpg]


Nearly there.

[Image: psoV0Zl.jpg]


All cabled in and powered up.

[Image: 4UTDtgt.jpg]


It has a 6-way PDU at the back, so the only two cables coming out of the rack are a power cable and the downlink to another UniFi switch, which lives downstairs with my USG Pro 4. There will be another, as I plan on using the little 8-port for PoE to the AP-Pro, which I intend to put on the upstairs ceiling for better coverage (it's currently mounted on a wall downstairs, which isn't ideal).

Much, much neater than it was, and it's highly portable if I ever do any demos or whatnot. Very, very happy. I'll do a BOM once I write a blog post about it, but it's come to around £300-350 for everything except the server rack shelves, which considering the quality of the cabinet I'm very happy with. Unfortunately the official shelves for the servers are about £100 each, but I wanted it all neat so I bit the bullet anyway.

I'm not sure if you can tell, but the QNAP NAS sits on a tray next to the little 8-port switch. The two trays will go into the two spare U slots; however, the top of the QNAP protrudes a few mm over the bottommost U. I'm sure I can make it work; I'll just have to look into it when they arrive, which may be some time as I suspect they're coming from the US via the UK distributor I use for Supermicro kit.
 