So after moving house and getting all of my lab kit from the loft into a rack in the spare room, I discovered (the somewhat expensive way) that servers generate lots of heat (!). I'd got used to the power consumption, but putting a fully loaded Dell 2950, two other servers, and lots of infrastructure into a small bedroom is a recipe for ambient temperatures of 45 centigrade plus. Definitely not healthy, and probably not very safe either.
So after a lot of thought and research I came up with a plan:
Ditch the 2950 and build a new SAN server, and refresh the two ESXi hosts with newer-generation hardware. The 2950 was running 32GB of memory and dual Xeon E5345 CPUs at a cost of about 300W. That's eight-year-old hardware, and while the performance seemed pretty decent, the efficiency is so poor I'm now quite shocked: a single one of these Xeons scores just 2991 points on PassMark.
The two ESXi servers used much more recent hardware: Gigabyte GA-990XA-UD3 boards, 32GB of memory and AMD FX-6300 CPUs. It's still decent kit, but it generates quite a bit of heat and the AMD chips aren't exactly frugal with electricity. Each host typically drew between 150 and 200 watts.
I decided that the vSphere side of the lab was grossly over-spec'd. Most of the time I'd be using 5% of the CPU capacity, and it would never go above 25%. It was handy having plenty of CPU cores to throw at virtual machines, but what's the point if it's not needed? On that basis I went shopping for the lowest-power multi-core Haswell CPU I felt I could get away with, and motherboards to match.
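If you're doing the same sizing exercise, it's worth capturing some real CPU usage numbers first rather than going on gut feel. A minimal sketch, assuming you're happy dropping to the ESXi shell - esxtop's batch mode will dump stats to a CSV you can graph later (the interval and sample count below are just examples):

```
# Sample every 5 seconds, 720 times (about an hour), and save to CSV
esxtop -b -d 5 -n 720 > cpu-baseline.csv
```

The vCenter performance charts will tell you much the same thing if you'd rather stay in the GUI.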
Motherboards were actually the hardest part to sort out. I needed enough PCIe slots to run an x8 10GbE card, an x4 Fibre Channel HBA and an x4 SATA/SAS HBA. CPU support for x8/x4/x4 isn't a problem, but there aren't many suitable boards for an enthusiast on a budget. There are plenty of nice Supermicro options, but just try getting one in a hurry in the UK. There's also the issue of finding onboard NICs on the VMware HCL, or having enough spare PCIe slots/lanes to add cards. Then of course I needed a board for the SAN server, which needed x8 and x4 slots for a SATA/SAS HBA and an FC HBA, plus either good onboard GbE NICs that would work under Illumos or enough slots to add some. Oh, and ECC memory support. Outside of Supermicro the options are very, very limited.
Anyway, here's what I ended up with:
- 3x Intel i3-4160 Haswell 'refresh' CPUs
- 2x MSI Z87-G45 motherboards
- 1x ASRock H87WS-DL motherboard
- 1x 32GB ECC memory kit
- 3x Silverstone 500W 80+ Gold PSUs
- 1x LSI SAS 9211-4i HBA + breakout cable
- 1x 4U chassis with hot-swap bays
The SAN server makes use of the ASRock board and ECC memory, and the ESXi hosts use the MSI boards and the existing DDR3. I then ran into a few issues:
The ASRock board supports the latest Haswell 'refresh' CPUs - but only with the latest BIOS. The board I received shipped with the original BIOS, which meant an early Haswell CPU had to be fitted in order to update it - and I didn't have one. I considered buying a CPU on a temporary basis, but instead found a local company in Basingstoke whose name is a bit like those spiny round things on strings, and they very kindly updated the BIOS for me. Lovely.
I had to ditch NexentaStor and go back to OmniOS and Napp-it due to some instability with the former, but after getting it built and migrating a VM back over, the performance was every bit as good as it was on the old Dell - but now with a fraction of the power consumption and almost no heat output:
That's upwards of 40k IOPS from commodity hardware and a weedy little i3. That's awesome.
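For anyone putting together a similar OmniOS box, the storage side boils down to a handful of commands. This is only a rough sketch - the pool layout, disk and volume names below are placeholders, and napp-it wraps most of it in its web UI anyway:

```
# Create a mirrored pool (disk names are examples - check 'format' for yours)
zpool create tank mirror c1t0d0 c1t1d0

# Carve out a zvol to present to the ESXi hosts as a LUN
zfs create -V 500G tank/vmfs01

# Enable COMSTAR and expose the zvol
svcadm enable stmf
sbdadm create-lu /dev/zvol/rdsk/tank/vmfs01
stmfadm add-view <LU GUID printed by the previous command>

# (For FC presentation the HBA also needs switching into target mode,
#  e.g. qlc -> qlt for QLogic cards.)
```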

Next up came the ESXi hosts, which were equally problematic:
The first of the two MSI boards I tried just wouldn't POST at all. The fans would spin up, but not even the power LED would light. I tried all sorts of different hardware combinations, but no dice. I swapped everything over to the other MSI board and all was well again, so the first board was replaced the following day and the second server built - although the replacement board had a couple of bent pins in the CPU socket that needed attention. Not very impressed with MSI build quality, I have to say.
So that's about it so far. I had a nightmare getting vMotion to work again (caused by forgetting to match the MTU/frame size on the 10GbE adapters - not that you'd have guessed that from the cryptic error in vCenter), but everything is now running as it was. I also took the opportunity to rearrange the rack around the items I knew were heat sensitive and/or generated significant heat. To that end the FC switch is now bottom front, the power switches are bottom rear, and there's 1U of clear space between each server.
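For reference, the actual fix for the vMotion problem was just making the MTU consistent end to end. On a standard vSwitch that's a couple of esxcli commands per host plus a vmkping to prove jumbo frames really pass - the vSwitch and vmkernel names below are examples, and any distributed switch or physical switch in the path needs matching too:

```
# Set jumbo frames on the vSwitch carrying the 10GbE uplinks (example name)
esxcli network vswitch standard set -v vSwitch1 -m 9000

# Match it on the vMotion vmkernel interface (example name)
esxcli network ip interface set -i vmk1 -m 9000

# Verify: 8972 = 9000 minus IP/ICMP headers, -d sets the don't-fragment bit
vmkping -d -s 8972 <vMotion IP of the other host>
```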
So has it been worth it? Yes. Absolutely yes. My previous power consumption was 850W+ and the heat was so severe I couldn't run the PE2950 for more than a few hours at a time. Here's the new power consumption at the UPS and the wall:
440-460W. That's a comfortable 400W saving, or nearly a 50% reduction. The temperature is much better as well, with ambient not going much above 30 centigrade so far. SAN performance remains stellar, and after tuning the resource allocation for each VM, guest performance is just as good.
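To put a rough number on that saving (purely illustrative - assuming a 15p/kWh tariff and the lab running 24/7): 0.4 kW × 24 × 365 ≈ 3,500 kWh a year, or somewhere in the region of £500 off the electricity bill.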
Although it's cost a bit of money, I can now have everything running normally again. I'm absolutely bowled over by how well a baby i3 handles everything I throw at it. Sure, when it comes to virtualization memory is king and CPU cores are a close second, so when there's an i5 at a similar price with a similar TDP I'll probably upgrade, but for now two cores with HT and prudent resource allocation are enough to keep my 18 VMs running nicely.
Thanks for reading - hopefully this helps others looking to run a (relatively) inexpensive home lab. Looking forward to the 'zomg why do you need so much stuffz' comments.

A few more pics here: http://imgur.com/a/Mmxbv