Refreshed my lab, AMD to Intel

So after moving house and getting all of my lab kit from the loft into a rack in the spare room, I discovered (the somewhat expensive way) that servers generate lots of heat (!). The power consumption I've been used to, but putting a fully loaded Dell 2950, two other servers, and lots of infrastructure into a small bedroom is a recipe for 45 centigrade plus ambient temperatures. Definitely not healthy, and probably not very safe either.

So after a lot of thought and research I came up with a plan:

F7aUUD8.jpg


Ditch the 2950 and build a new SAN server, and refresh the two ESXi hosts with new-generation hardware. The 2950 was running 32GB of memory and dual Xeon E5345 CPUs at a cost of about 300W. That's 8-year-old hardware, and while the performance seemed pretty decent, the efficiency is so ridiculously poor that I'm now quite shocked. A single one of these Xeons scores just 2991 points on PassMark.

The two ESXi servers used much more recent hardware, namely Gigabyte GA-990XA-UD3 boards, 32GB of memory and AMD FX-6300 CPUs. It's still decent hardware, but it generates quite a bit of heat and the AMD chips aren't that frugal with electricity. Each host typically drew between 150 and 200 watts.

I decided that the vSphere side of the lab was grossly over-spec'd. Most of the time I'd be using 5% of the CPU capacity, and it would never go above 25%. It was handy having plenty of CPU and cores to throw at virtual machines, but what's the point if it's not needed? On that basis I went shopping for the lowest-power multi-core Haswell CPU I felt I could get away with, and motherboards to match.

Motherboards were actually the hardest part to sort out. I needed enough PCIe slots to run an x8 10GbE card, an x4 Fibre Channel HBA and an x4 SATA/SAS HBA. CPU support for x8/x4/x4 isn't a problem, but there aren't many boards for an enthusiast on a budget. Plenty of nice Supermicro options, but just try getting one in a hurry in the UK. There's also the issue of finding onboard NICs on the VMware HCL, or having enough spare PCIe slots/lanes to run cards. Then of course I needed a board for the SAN server, which needed x8 and x4 for a SATA/SAS HBA and an FC HBA, with either good onboard GbE NICs that would work in Illumos or enough slots to add some. Oh, and ECC memory support. Outside of Supermicro the options are very, very limited.
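
To make the lane maths concrete, here's the back-of-envelope check behind that (lane counts as above; the x8/x4/x4 split is simply what the desktop Haswell CPUs can do, so treat this as a sketch rather than anything clever):

```python
# Quick sanity check of the PCIe lane budget: three cards vs the 16 CPU
# lanes available as x8/x4/x4 on a Haswell desktop CPU.
cards = {"10GbE NIC": 8, "FC HBA": 4, "SAS/SATA HBA": 4}   # lanes each card wants
cpu_slots = [8, 4, 4]                                      # x8/x4/x4 bifurcation

assert sum(cards.values()) <= sum(cpu_slots), "not enough CPU lanes"

# Greedy fit: biggest card into the biggest remaining slot
for name, lanes in sorted(cards.items(), key=lambda kv: -kv[1]):
    slot = next(s for s in sorted(cpu_slots, reverse=True) if s >= lanes)
    cpu_slots.remove(slot)
    print(f"{name}: x{lanes} card -> x{slot} slot")
```

It all fits with nothing to spare from the CPU, which is exactly why onboard NICs on the HCL (or spare chipset-fed slots) mattered so much.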

Anyway, here's what I ended up with:

HPXhvCll.jpg


  • 3x Intel i3-4160 Haswell 'refresh' CPUs
  • 2x MSI Z87-G45 motherboards
  • 1x ASRock H87WS-DL motherboard
  • 1x 32GB ECC memory kit
  • 3x Silverstone 500W 80+ Gold PSUs
  • 1x LSI SAS 9211-4i HBA + breakout cable
  • 1x 4u chassis with hot swap bays

The SAN server makes use of the ASRock board and ECC memory, and the ESXi hosts use the MSI boards and the existing DDR3. I then ran into a few issues:

YIGVwCzl.jpg


The ASRock board supports the latest Haswell 'refresh' CPUs - but only with the latest BIOS. The board I received came with the original BIOS, which meant an early Haswell CPU needed to be fitted just to update it - and I didn't have one. I considered buying a CPU on a temporary basis, but instead found a local company in Basingstoke whose name is a bit like those spiny round things on strings, and they very kindly updated the BIOS for me. Lovely.

aCMqUwyl.jpg


I had to ditch NexentaStor and go back to OmniOS and Napp-it due to some instability with the former. After getting it built and migrating a VM back over, the performance was every bit as good as it was on the old Dell, but now with a fraction of the power consumption and almost no heat output:

wtJQ6J3l.png.jpg


That's upwards of 40k IOPS from commodity hardware and a weedy little i3. That's awesome :D

Next up came the ESXi hosts which were equally problematic:

H9ccYPBl.jpg


The first of the two MSI boards I tried just wouldn't POST at all. The fans would spin up but not even the power LED would light. I tried all sorts of different hardware combinations but no dice. I swapped everything over to the other MSI board and all was well again, so the first board was replaced the following day and the second server built, although the replacement board had a couple of bent pins in the CPU socket that needed attention - not very impressed with MSI build quality, I have to say.

So that's about it so far. I had a nightmare getting vMotion to work again (caused by forgetting to match the frame size on the 10GbE adapters, not that you'd have guessed this from the cryptic error in vCenter - more on the fix below the rack photo), but everything is now running as it was. I also took the opportunity to rearrange the rack based on which items I knew were heat-sensitive and/or generated significant heat. To that end the FC switch is now bottom front and the power switches are bottom rear, and there's 1U of clear space between each server.

6NDYNstl.jpg
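
On the vMotion/frame-size issue: the fix was simply making the MTU agree end to end on the 10GbE path. For anyone hitting the same cryptic error, here's a rough sketch of the commands involved - the host names, vSwitch and vmk numbers are placeholders for whatever your own lab uses, not what I'm actually running:

```python
# Minimal sketch: print the esxcli commands that set the vSwitch and the
# vMotion vmkernel port to jumbo frames (MTU 9000) on each host, plus the
# usual end-to-end check. Host names, vSwitch and vmk are assumptions.
HOSTS = ["esxi01.lab.local", "esxi02.lab.local"]
VSWITCH, VMK, MTU = "vSwitch1", "vmk1", 9000

for host in HOSTS:
    print(f"# run on {host} (SSH, or via esxcli's remote connection options)")
    print(f"esxcli network vswitch standard set -v {VSWITCH} -m {MTU}")
    print(f"esxcli network ip interface set -i {VMK} -m {MTU}")
    # 8972 = 9000 minus IP/ICMP headers; -d means don't fragment
    print("vmkping -d -s 8972 <the other host's vMotion IP>")
    print()
```

The key is that the physical switch ports, the vSwitch and the vmkernel port all agree - one odd one out is enough to break it, with nothing more helpful than the cryptic error I saw.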


So has it been worth it? Yes. Absolutely yes. My previous power consumption was 850W+ and the heat was so severe I couldn't run the PE2950 for more than a few hours at a time. Here's the new power consumption at the UPS and the wall:

rmpnboEl.png.jpg

2bdnDnAl.jpg


440-460W. That's a comfortable 400W saving, or a cut of nearly 50%. The temperature is much better as well, with ambient not going much above 30 centigrade so far. SAN performance remains stellar, and after tuning the resource allocation to each VM the guests are performing just as well.

Although it's cost a bit of money, I can now have everything running normally again. I'm absolutely bowled over by how well a baby i3 CPU handles everything I'm throwing at it. Sure, when it comes to virtualization memory is king and CPU cores are a close second, so when there are i5 chips at a similar price and TDP I'll probably upgrade, but for now two cores with HT and prudent resource allocation are enough to keep my 18 VMs running nicely.

Thanks for reading, hopefully this might help others looking to run a (relatively) inexpensive home lab. Looking forward to the 'zomg why do you need so much stuffz' comments ;)

A few more pics here: http://imgur.com/a/Mmxbv
 
Great post Shad. I found it interesting even though I don't really know much about servers and virtualization - it's on my to-do list.
Looks like a great result for increasing efficiency and making your house less sauna-like.
Oh yeah, why do you need so much stuff? ;)
 
Great post Shad!

400W @ 9.09p/kWh = £318.73 per year in electricity! That's a lot of money to save year over year...
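
For anyone who wants to plug in their own tariff, that sum is just this (figures as quoted above, assuming the saving applies 24/7):

```python
# Reproduces the figure above: ~400 W saved, running 24/7, at 9.09p/kWh.
saved_kw = 0.4
price_per_kwh = 0.0909            # GBP
hours_per_year = 24 * 365.25
annual_saving = saved_kw * hours_per_year * price_per_kwh
print(f"~£{annual_saving:.2f} per year")   # ~£318.73
```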
 
Great post Shad and very informative, thanks for taking the time :)

40k IOPS is very impressive from kit like that. What disks have you got in the SAN server, and how many spindles? (Looks like 10 from the pics.)
 
No worries, a bit of info on the SAN then...

Largely owing to how the original PE2950 was configured, the SAN server is based on 6x Seagate 2TB 7200rpm SATA drives. They're basic consumer drives and they do fail, but so far I've only lost 2 in about 3 years - I can live with that. They're configured as 3 mirrors for a RAID10-like setup, but using ZFS instead of hardware RAID of course. They're attached to a Dell SAS6 HBA.

Then there's a Samsung 840 120GB SSD for the L2ARC. I was previously using a SATA2 PCIe adapter with a 120GB Crucial mSATA SSD, which wasn't all that fast but was better than nothing (expansion in a PE2950 was tricky). The Samsung is much better, partly because it's a great SSD, but also because I now have SATA3 ports on the motherboard to use. The plan is to add an SLC SSD as a write cache, but good drives are expensive, and with the UPS I can take the risk of running without one for a while.
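
For reference, the layout described above boils down to something like the following - only a sketch, and the pool name and device IDs are placeholders rather than what's actually in the box (use format on OmniOS to find the real c*t*d* names):

```python
# Sketch of the pool layout: three 2-way mirrors (the RAID10-like part)
# plus the Samsung 840 as an L2ARC cache device. Pool name and device IDs
# are placeholders, not the real ones.
import subprocess

data_disks = ["c1t0d0", "c1t1d0", "c1t2d0", "c1t3d0", "c1t4d0", "c1t5d0"]
cache_ssd = "c2t0d0"   # the 120GB Samsung 840

cmd = ["zpool", "create", "tank"]
for a, b in zip(data_disks[0::2], data_disks[1::2]):
    cmd += ["mirror", a, b]        # striped mirrors
cmd += ["cache", cache_ssd]        # L2ARC read cache

print(" ".join(cmd))               # review before running anything
# subprocess.run(cmd, check=True)  # uncomment once you're happy with it
```

When the SLC drive eventually arrives it would just be a zpool add tank log <device> on top of that.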

Finally there's the 32GB of ECC memory for ARC. I'm using OmniOS with Napp-it, something I've gotten quite familiar with over the years. The performance is great and perfect for virtualization. I'm currently using 1.6TB with 3.93TB available, so plenty of room to grow into.

The remaining 2 drive slots in the front are used by a pair of 80GB 2.5" drives for the system mirror.
 
I try to use it for as much as possible really. It started out as a way to have high-speed storage for video editing (hence the 10GbE links to my workstation), but it's grown from there. Here's a quick rundown of the VMs currently running, in no particular order:

  • VCSA 5.5U2 (VMware vCenter appliance)
  • Two domain controllers, Server 2012 R2. Also running DNS of course, and DHCP with failover.
  • Ubiquiti Unifi appliance for wifi APs
  • PBX in a Flash for VoIP (I don't have a telephone line but do have a geographic number via this method instead - handy for outbound 0800 calls that would otherwise be charged to my mobile)
  • pfSense with IDS and OpenVPN - the WAN link from the Virgin modem is in its own VLAN so the VM can move around without dropping the connection. Great for HA.
  • APC PowerChute Network Shutdown appliance to control the shutdown of VMs and hosts when the battery is low
  • TFS 2012 server for my development projects, running on Server 2012
  • 'Remote access server' which is just a Server 2012 R2 VM with SSH access for a few friends to use.
  • Database server with SQL 2012 R2 and MongoDB, used for development and live sites (I know, should be separate...)
  • Windows 8.1 VM for handling *cough* downloads
  • Primary file server with all data stored on the SAN, running Server 2012 and DFS namespace
  • Secondary file server with all data from the primary mirrored onto a pair of 3TB drives in the host. DFS takes care of all of this for me. Server 2012 again.
  • Media server with many RDMs, although fairly modest in total (about 13TB). Server 2012 R2 on that one. Provides content for a Pi running XBMC and makes my music collection available remotely.
  • Pair of Zen Load Balancer VMs for handling web traffic, both from outside and inside the network
  • Pair of web servers running Server 2012 R2. I host internal utilities (e.g. for controlling the power outlets in the rack) and my own website from here and a couple for friends too. Bandwidth is strictly controlled via pfSense using traffic shaping. Connections are only allowed from a pair of HAProxy VMs running on Azure, thus masking my private IP and making it as secure as I can without a DMZ in place (which would be a pain now since the website files are hosted on a DFS share, part of the internal network).

That's it I think. It's still a lab, but it does see semi-production use as well, albeit only for my noddy stuff. One reason for the two-host/two-file-server arrangement is so I can put most of my workstation's data (My Documents, etc.) on DFS paths, freeing up the lone SSD I have for more Steam games without risking a dead workstation when a server fails.

I'm a developer by trade, but this has been great for learning (especially VMware) and is helping me change tack with my career :)
 
Nice. :D

I'm intending to do something like this as a learning (and tinkering for the sake of it...) exercise, albeit on a smaller scale. I just have the one server at the moment and I don't use it as much as I should.
 
Nice setup!! I was considering building my own rackmount boxes to upgrade my nested lab, but I don't have anywhere to keep it that the wife won't hear, so I needed something silent.

I can recommend the new super-mini machines for a lab.

I'm using 3 Gigabyte Brix with a Synology for iSCSI storage. You also need a managed switch, or at least a switch that can cope with multiple VLANs.

Only having 1 NIC is the downside, but for a lab and testing it works great.

2014-09-23 20.22.13.jpg


craiglab1.jpg
 
Nice one! I bet the power consumption on that is ridiculously low too. If I were starting from scratch now, I'd look at the Intel NUC as well.
 
I hope you're not using resource pools as some sort of VM organisation.

Can I ask for your thoughts on this?

I use a combination of resource pools and vApps to govern resources for specific roles (e.g. file servers, web servers, etc.) and to control start-up sequence. Is there a better way to do this? I was under the impression that resource pools were better than VM-specific allocations and limits.
 
I use resource pools for my critical and non-critical boxes. I do have some set to normal.

In that screenshot I have a firewall appliance and my vCenter appliance in a pool set to high, with a reservation for vCenter. My client VMs are all set to low, as are my web servers.

I also tested SRM on this lab. You have to have some sort of naming/organisation structure when replicating; I use resource pool names rather than folders for this.

I use vApps for anything that is multi-tiered, to control startup and shutdown - usually an app that has a separate VM for its DB and/or worker process.

The only vApp I have in my lab at the moment is for vCOps.
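
If anyone would rather script that sort of thing than click through the Web Client, here's a rough pyVmomi sketch of a 'high shares plus a small reservation' pool spec - the connection and parent pool lookup are omitted, and the names and numbers are placeholders rather than what's running in either of our labs:

```python
# Rough pyVmomi sketch: a resource pool spec with high CPU/memory shares and
# a memory reservation (e.g. for a vCenter appliance). Values are placeholders.
from pyVmomi import vim

def critical_pool_spec(mem_reservation_mb=4096):
    cpu = vim.ResourceAllocationInfo(
        shares=vim.SharesInfo(level=vim.SharesInfo.Level.high, shares=0),
        reservation=0, limit=-1, expandableReservation=True)
    mem = vim.ResourceAllocationInfo(
        shares=vim.SharesInfo(level=vim.SharesInfo.Level.high, shares=0),
        reservation=mem_reservation_mb, limit=-1, expandableReservation=True)
    return vim.ResourceConfigSpec(cpuAllocation=cpu, memoryAllocation=mem)

# Once connected (pyVim.connect.SmartConnect) and with the cluster's root
# pool in hand, creating the pool is a one-liner:
# cluster.resourcePool.CreateResourcePool(name="Critical", spec=critical_pool_spec())
```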
 
The Brix are 1.9GHz i3s with 16GB RAM and a 64GB mSATA SSD. I have a NUC as well, but I decided to go with the Brix for the lab as a guy I work with used them.

I don't license any of my machines. It's a lab, so hardly anything stays alive for more than the trial time you get with most licences.

Quite happy with my makeshift rack using an IKEA unit!

2014-09-27 13.15.31.jpg
 