My finished home lab web-stack for hosting

I know I probably lost a few of you at "home", but bear with me; there are reasons for this. Basically, I've been paying just over £70/mo for a few years now for a dedicated server to host my websites, which has been great, but given I've recently started playing around more with servers and virtualisation at home, it makes a certain amount of sense to actually make use of some of that infrastructure.

I'm a software developer by trade but I've always had a hobbyist interest in all things server and network. Playing with these kinds of technologies at home is allowing me to take on more at work and diversify a little. I've long believed that to write good software you have to have a good understanding of the whole platform it's running on, and this is all helping.

So now that I've finished putting together my little 'web-stack' for hosting and moved my sites across, I thought I'd put together a little Visio diagram and share it with the good folks here :)

[Visio diagram of the web-stack]


I'm a recent convert to VMware and virtualisation, having long dismissed it as inferior to real hardware. Some of that opinion is still valid of course, but coming from a software background I completely love the layer of abstraction it provides and the flexibility therein. My arrangement for my home lab is focused on redundancy rather than increased load capacity, which is why I haven't gone for a single standalone SAN/NAS and instead have two VMs acting as file servers using the disks and RAID arrays on each host. The hosts are running much more than you see on this diagram, including a TFS server, vCenter Server and two domain controllers.

This little project has also taught me a lot about traffic shaping using pfSense and the advantages of using VLANs for certain elements. My Internet connection is only 5Mbps up which isn't a huge problem at the moment (and less so with traffic shaping), but the next thing I'm going to look into is the feasibility of caching requests on the HAProxy server.

Thanks for reading and thanks for helping with my questions in the past. Any tips and suggestions welcome as usual :)
 
First time using Visio in years, doubt I'll be rushing back to it either :p

A little more info about my random assortment of hardware then.

  • 2x WTI network power switches
  • 1x 10/100/1000 switch
  • 1x 10/100 switch
  • 1x Virgin Media cable modem
  • 1x TP-Link wireless AP (with VLAN for guests)
  • 1x 1u Dell PowerEdge R200 server
  • 1x 2u Dell PowerEdge 2950 server
  • 1x 4u home brew AMD Phenom II server
The servers are not particularly sophisticated and are basically built from cheap/free hardware I've been able to get hold of or recycle. Would you use this in a production environment? No. Is it still useful for home? Yes, if you don't mind the higher running costs compared to more modern/appropriate kit.

The R200 (Cetus) I bought specifically to run pfSense initially, but I soon realised that it was a) more beneficial to virtualise pfSense and run ESXi on the bare metal, and b) overkill to have three servers! It sits powered off until I need it but can be powered up remotely thanks to the network power switches and DRAC. It's running a single dual-core Xeon 3065 with 8GB of RAM and a single 250GB SATA drive. Two NIC ports, one for VM traffic and one for VMkernel.

The 2950 (Hydra) I got hold of through the Lotus 7 Club and is where I keep my 'fast' storage. It's running a pair of dual-core Xeon 5148 CPUs and 16GB of RAM with a PERC 6/i. Storage is configured as a pair of 1TB drives in RAID1 for the VMs and 4x 2TB in RAID5 for general use. The RAID5 array is split into 2x 3TB virtual disks. Networking uses the onboard NIC ports in a port channel for VM traffic, a separate NIC for VMkernel and a third Mellanox 10GbE dual-port NIC for faster access to the storage.
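If anyone's wondering how 4x 2TB ends up as 2x 3TB virtual disks, it's just the usual RAID arithmetic. A quick sketch (plain decimal TB, ignoring filesystem/VMFS overhead and the TB-vs-TiB difference):

```
# Rough usable capacity of Hydra's arrays (decimal TB)
def raid1_usable(disk_tb):
    return disk_tb                      # mirrored pair: one disk's worth

def raid5_usable(disks, disk_tb):
    return (disks - 1) * disk_tb        # one disk's worth lost to parity

print(raid1_usable(1))      # 1TB -> VM datastore
print(raid5_usable(4, 2))   # 6TB -> split into 2x 3TB virtual disks
```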

Finally the home brew AMD server (Orion). This started out as a straight recycling of my old workstation, but I've since swapped out the motherboard for a Gigabyte GA-990XA-UD3 because of its excellent VMware support. It's running a single quad-core Phenom II 965, 32GB of RAM and a SAS 6/iR controller, attached to which I have a pair of 1TB drives in RAID1 for the VMs. I've also attached a number of single drives, mainly 2TB each now, purely for media storage. Further to that, the onboard SATA controller has a pair of 3TB drives attached which directly mirror the two 3TB RAID5 virtual disks from Hydra. Networking is taken care of by a pair of NIC ports in a port channel for VM traffic, a single NIC port for VMkernel and another Mellanox 10GbE adapter.

Here's what all of this actually looks like:

[Photos of the kit racked up in the loft]


Yes, the loft is not the perfect location for this lot. In the winter it's fine; the sensors on the boards don't drop below 10°C. In the summer things do get interesting, and I'm working on a Netduino solution to ask the network power switches to power on a desk fan when it gets particularly hot. It's not exactly A/C but it helps. You'll also spot a few strategically placed blocks of wood - this ain't NASA.
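The Netduino bit will end up as C# talking to the WTI switch, but the logic I have in mind is just a thermostat with a bit of hysteresis so the fan doesn't flap on and off. Here's a rough sketch of the idea in Python (read_temperature() and set_fan_outlet() are hypothetical stand-ins, not the actual sensor or WTI commands):

```
import time

HIGH_C = 30.0   # switch the desk fan's outlet on above this
LOW_C = 26.0    # and back off again below this (hysteresis)

def read_temperature():
    # Hypothetical stand-in for reading the loft temperature sensor
    return 24.0

def set_fan_outlet(on):
    # Hypothetical stand-in for toggling the fan's outlet on the power switch
    print("fan outlet", "ON" if on else "OFF")

fan_on = False
while True:
    temp = read_temperature()
    if not fan_on and temp >= HIGH_C:
        set_fan_outlet(True)
        fan_on = True
    elif fan_on and temp <= LOW_C:
        set_fan_outlet(False)
        fan_on = False
    time.sleep(60)  # check once a minute
```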

[Photo: Orion]


My home brew AMD box, Orion. This is currently running:

  • Crux - TFS 2012
  • Lupus - databases (SQL Server and MongoDB)
  • Lynx - vCenter Server
  • Mensa - web server
  • Sputnik - file server
  • Venus - domain controller
  • Voyager - Zen Load Balancer
It will also run the pfSense VM if I need to power down Hydra.

[Photo: Cetus and Hydra]


On top is the R200 (Cetus) which is spare, and underneath is the 2950 (Hydra) which is currently running:

  • Draco - web server
  • Neptune - domain controller
  • Nova - pfSense
  • Pavo - Zen Load Balancer
  • Pictor - file server
The cabling behind is not so pretty!

[Photo: the cabling behind]


That's basically it in terms of hardware. If you're thinking "OMG how much leccy does that lot use?!", well, I can tell you exactly how much leccy it all uses:

[Photo: the power meter readout]


That works out to slightly less than what I was paying for a single, basic, dedicated server - and I had a good price on hosting. I'd already be paying for an Internet connection and I'd already have at least one server running for media. Then I start learning VMware and playing more... basically it makes a lot of sense to do something useful with the kit while I have it. Sure I could 'invest' in more efficient stuff but low up-front costs are quite important for the hobbyist who just wants to try stuff out :)

I'm very much still the hobbyist, but as I said in the first post, learning all this stuff is really helping at work which is great because working with software all the time gets quite boring. It's good to have a bit of variety!
 
Including the other sites it was running, it was doing around 150GB/month. Being a Windows dev the hosting is always more expensive thanks to licensing costs. I'd used a few different, cheaper, VPS solutions before moving to a dedicated server after getting fed up with poor performance.
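Back-of-the-envelope, and assuming the traffic is spread evenly over the month (it obviously isn't), that works out to a pretty modest average rate against the 5Mbps upstream - it's the peaks that matter:

```
# ~150GB/month of outbound traffic expressed as an average bitrate
gb_per_month = 150
megabits = gb_per_month * 8 * 1000            # 1,200,000 Mb
seconds_in_month = 30 * 24 * 3600             # 2,592,000 s
print(round(megabits / seconds_in_month, 2))  # ~0.46 Mbps average
```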

No argument on the OTT ;)
 
Yes.

My Internet connection is only 5Mbps up which isn't a huge problem at the moment (and less so with traffic shaping), but the next thing I'm going to look into is the feasibility of caching requests on the HAProxy server.
And to make matters worse I'm told the upgrade to 10 or 12Mbps is going to be towards the end of this year :rolleyes:
 
I am hoping that was a typo and you meant RAID1, because RAID0 gives you absolutely no redundancy for any of the data stored on the array.

RAID0 = Striping = Fast but no redundancy
RAID1 = Mirroring = Redundancy but data is written slightly slower

Otherwise very good build. How have you found migrating a VM from a host that uses an Intel CPU to a host that uses an AMD CPU? I did have issues with that in the past.

Yes, that was indeed a typo! Each server's pair of 1TB drives is mirrored in RAID1 :)

Well, the migration thing isn't too much of a problem, but because the hosts aren't even nearly the same it means vMotion is out of the question (in fact I'm not sure I could do that anyway without a shared datastore?). I can create a hot clone to the other host OK, or I can migrate the VM if it's not running. Eventually I'd like to replace the 2950 with a newer AMD box to gain a few more features. Not sure if you can do FT without a shared datastore either, but that would be nice to have, especially for pfSense.

As a side note, it annoys me that I can't have a pfSense active-passive failover arrangement without multiple static WAN IP addresses (CARP wants a real WAN IP on each node plus the shared virtual IP). All I want is for the two systems to keep their config and data in sync and for the passive node to bring up its important interfaces when the active is down. Zen LB seems to manage this very well, so why not pfSense? :(
 
How do you find the Dell PowerEdge 2950 server? I was thinking of picking one up for my own home lab as they're quite reasonable on eBay and my current N40L isn't cutting it any more.

It has pros and cons. On the upside it's designed for this purpose - redundant PSUs, HDD caddies and backplane, redundant cooling system, takes full height expansion cards, lots of RAM slots, dual CPUs, etc. But the price you pay is noise and vibration, limited upgrade options (CPUs in particular) and high power consumption.

Take all of these factors into consideration carefully, as it may work out better for you to build your own system that runs quieter and more efficiently, at a slightly higher initial cost.

Shad said: My loft space gets very hot in the summer, your kit may start to roast on a hot day!
Oh believe me, it got pretty hot in there at the end of last year during 25-degree outside temperatures! No problems so far; the 2950 ran up towards 35 degrees. Again, not ideal and I have to keep an eye on it. I use the desk fan to blow air across the power switches and network switches as they don't have any other active cooling.
 
Are you happy that it's costing you about £45 per month just in electricity? I suspect you could improve that if you set some of the equipment to power down overnight, for example.
That is assuming the 539 watts is constant and that you pay 12 pence per kWh, of course.
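For reference, the sums under those assumptions come out at roughly:

```
# Monthly cost assuming a constant 539W draw at 12p/kWh
watts = 539
pence_per_kwh = 12
kwh_per_month = watts / 1000 * 24 * 30               # ~388 kWh
print(round(kwh_per_month * pence_per_kwh / 100, 2)) # ~£46.57/month
```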

As for the location. Not ideal. Lofts are cold in the winter and very hot in the summer - at the risk of stating the obvious. :)

Winter is not so bad. It would be rare for a loft to get too cold and cause condensation issues, since the equipment would also provide some heat. The problem with some lofts - like mine - is you get condensation build-up on the inside lining of the roof, which can drip down during extreme cold. This is often in lofts with little or poor ventilation, and your loft shouldn't do this. Just thought I would point it out to check.

Summer. On a warmish day in the summer I cannot be in my loft for more than a few minutes as it becomes unbearable - well over 40 degrees C. Your equipment will simply get too hot up there in summer, even with noisy server fans, I suspect. You could look at venting from the eaves up to your servers and then out the other side of the loft. You could build some kind of mini cardboard tunnel/vent going over your servers.

Have you got a garage you could run Cat5 out to? Garages are much better places for servers, although they can be more dusty.

I'm not happy about the cost, but I'm not particularly unhappy either. It's just another hobby that I budget for, much like running my cars, cycling, entertainment... like anything really.

The heat is an issue I have to keep a close eye on though. Winter so far has been fine, no condensation issues. Summer last year was interesting. We had external temperatures of over 25 degrees and in the loft it was very hot indeed - unbearable, as you say. The temps in the servers didn't go above around 35 degrees, which, although very hot, didn't seem to be a problem. A little bit of air movement as well as keeping the loft hatch open during the day seemed to help quite a bit.

So yes, it arguably will get too hot, but so far it hasn't been a problem. The garage isn't an option currently but I'll be moving soon so I can plan all of this a bit better :)
 
30-40kg? Mine weighs maybe 20kg at the most. As for the loft taking the weight... surely with boards down they can take a human's weight, so...?
I need about 1400VA to make sure everything can run under load for enough time to survive a brief outage as well as shut down safely, so I'm looking at a 3U APC unit which is about 31kg. It would probably be fine as you say, quite a bit less than a human. But it still makes me nervous :p
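Rough sizing sums, assuming a power factor of around 0.67 for that class of APC unit (that figure is my assumption, not from a spec sheet):

```
# Headroom check: 1400VA UPS vs the ~539W measured draw
va_rating = 1400
power_factor = 0.67                        # assumed typical value
watt_capacity = va_rating * power_factor   # ~938W
load_watts = 539
print(round(watt_capacity), f"{load_watts / watt_capacity:.0%}")  # ~938W, ~57% loaded
```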

Any reason why you're going for HAProxy -> pfSense -> Zen Load Balancer?

I would have pfSense first and then your load balancer; it seems pointless to have load balancer, firewall, load balancer?
The HAProxy server is external to my network and is mainly there so that I can serve a useful message should my network be unreachable or unable to serve pages (there have been situations in the recent past where Virgin Media have left me without a connection for anything from a few minutes to several hours, or a couple of days - it's rare but it does happen). It also has the happy side effect of hiding my IP address, as well as giving me somewhere to host static content :)
 
Thanks - I think rather than upgrading to an R710 I'll probably duplicate my AMD Phenom-based server and convert my 2950 to a fibre channel target. Should be able to play with FT and vMotion then.

Good shout on the monitoring though, I already have that set up in vSphere (although I don't seem to receive emails from it), and have OpenManage on one of the VMs to get a bit more detail for things like the storage arrays.
 