Personal VM Server Build

Hi Overclockers UK members,

I'd like your advice on a few aspects of a system I am planning to put together, if you wouldn't mind offering your support :D
Note: I have asked this same question on "Tom's Hardware", but I wanted to reach out to another set of very knowledgeable users on Overclockers UK - I hope this doesn't come across as cheap :(

Approximate Purchase Date (dd/mm/yyyy): 01/04/2013 - 01/10/2013

Budget Range: £1000 (This is for the initial foundation, I will continually add to it over time)

System Usage: Running multiple VMs (initially around 5-10, eventually 30-40 - all types of OSes), Production Server (programming & various services: FTP, Samba, SSH, etc.), Kernel Compilation and possibly some intensive CPU tasks like Video/Music Conversion.

Parts NOT required: Mouse, Keyboard, Monitor, Speakers, OS

Preferred Website(s) for parts: Big names preferably (Crucial, ******, Overclockers, Amazon); however, I have seen some parts going cheap on less popular sites, so as long as I am protected through PayPal or some other service I am happy taking the risk.

Country: UK

Overclocking: Nope (I'm hoping my system will be good enough!)

SLI or Crossfire: Nope (the motherboards I'm looking at don't do either). However, I may need to consider a graphics card for VGA passthrough eventually!

Parts Preference: I have done a few weeks of research, looking around for a personal home server that is solid for my needs but doesn't require the funding of a small company to purchase! If you know of comparable parts that would be better for my build and are around the same price point, please provide a link so I can tweak my build idea.
Below are my initial searches over the few weeks I have had time to look into this:

- Motherboard (required to be at minimum dual socket): C32 Socket - KCMA-D8

- Processor (for the C32 motherboard type - start with just the one, then eventually buy another down the line): Opteron 8-Core 4386

- RAM (The max the motherboard above can take is 128GB of RDIMM, so I require 16GB sticks to fill it over time - I have a few questions about RAM; see further down this post): 16GB Crucial DDR3 PC3-10600

- Hard Disk(s) (Again, starting with just the one and building over time - I realise lots of VMs on a single HDD will make the system crawl, but I have a few small HDDs lying around the house I can use in the meantime): WD Red - 3TB

- Case (Need something pretty big - I'd like to fit at minimum 10 HDDs and possibly a Blu-ray drive; this was only a quick look): Lian Li PC-A76X Full Tower Case

- PSU: I did an online PSU calculator check and, when I maxed out the parts (10 HDDs, Blu-ray drive, loads of fans) and added a single high-performance graphics card (GTX 680), it recommended around 800W. Would I need to go for a 1000W PSU in such a case to ensure it stays stable even as it ages?
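For a rough sanity check on that figure (all TDPs approximate): 2 x 95W for the Opteron 4386s, plus roughly 10W per spinning HDD (100W for ten), 195W for a GTX 680 and perhaps 80W for the board, RAM, fans and optical drive comes to around 565W at peak - so even a quality 750W-850W unit would leave decent headroom, though I gather drives briefly draw more at spin-up.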


I have a few more questions as well regarding this build; I'll start with the RAM:

General RAM Q's:

- What is the difference between quad-rank and dual-rank memory? I will have 4 slots of DDR3 RAM for each CPU, allowing a total of 64GB per CPU - which type will be the most beneficial in this scenario?

- Is it best to go for lower CAS latencies (CL) or higher clock speeds (MHz) for the sort of workload I expect the system to do?


Looking at the motherboard, it offers an ASUS-specific PIKE slot for hardware RAID - this is something I am very interested in obtaining (again, once I have a large number of HDDs). I am thinking of going with RAID 6 using the maximum number of HDDs the card can take, which in this case is 8 (the PIKE 2108).
I am slightly confused by this part, however. All the other standard enthusiast RAID cards have 2 SAS ports, each capable of connecting to 4 SATA HDDs, but the diagram of the PIKE 2108 does NOT show where you would plug such a cable in.

Looking at the KCMA-D8 motherboard, it has a set of 8 SATA ports on the left-hand side, so I am not sure if you would plug the HDDs into those, with the card then interfacing through the motherboard to those SATA ports and ultimately to the 8 HDDs.
I've never dealt with hardware RAID, but I have played around with Linux software RAID (mdadm) and I know it's really easy to add an HDD to an existing RAID array - see the sketch below. Would it be as easy to do the same through a standard RAID card - do you normally do it through the BIOS?
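For example, with mdadm the whole thing is only a few commands (a minimal sketch - the device names below are just placeholders):

```
# Create the sort of array I'm eventually aiming for: RAID 6 across 8 disks
mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[b-i]

# Later on, add a new disk and grow the array onto it
mdadm --add /dev/md0 /dev/sdj
mdadm --grow /dev/md0 --raid-devices=9 --backup-file=/root/md0-grow.backup

# The reshape runs in the background; progress appears here
cat /proc/mdstat
```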
 
First of all, what are you using to virtualise - VMware? Hyper-V? ESXi, for example, has a 32GB RAM limit with the free version.
 
I'd be tempted to look at a Dell PowerEdge R710 server or similar. I've got one - 24GB RAM, 2x quad-core Xeons, SAS drives - and I've had 16 VMs running on it no probs via ESXi.

What are you actually going to be using it for? If it's production, I'd be tempted to look at multiple ESXi hosts.
 
Thanks for the replies Uhtred and [Darkend]Viper.

Uhtred: I am planning on using either Xen or KVM for my virtualisation platform, so there shouldn't be any limits with regard to CPU or RAM.
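(If I go the KVM route I'd drive it with libvirt; creating a guest is roughly this - just a sketch, with made-up names and paths:)

```
# Create and boot a new KVM guest from an installer ISO
# (the name, disk path and ISO path are all placeholders)
virt-install \
  --name testvm1 \
  --ram 2048 \
  --vcpus 2 \
  --disk path=/var/lib/libvirt/images/testvm1.qcow2,size=20 \
  --cdrom /srv/isos/ubuntu-12.04-server-amd64.iso
```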

[Darkend]Viper: I had a quick Google search for the R710 server but found it was well over £1000 unfortunately, so I won't be able to afford that I'm afraid. My aim is just to have something to learn from whilst making mini environments - really it's for my own interest and understanding.
Out of curiosity what OS's have you been running in your VM's?
 
Well, I got my R710 from eBay - it cost £476 - so they can be had below £1000, but it might take some looking, and it depends whether you'd be OK with second-hand equipment.

I've run all the MS OSes on mine, plus Ubuntu, Debian, SUSE and Red Hat. I use mine mainly as a lab machine to create things and learn for the MCITP. I also use it for proof of concept - we use ESXi 5 at work and I need to know that any changes I make don't break things too badly!! I've also got a couple of webservers running on it, and my media machine serves things from it - those are on 24/7.
 
A few thoughts:

- Processor is almost never the bottleneck in a virtual environment, so making dual-CPU a requirement makes your setup much more expensive than it needs to be.

- Maxing out a server on memory is INCREDIBLY expensive. Going for the highest-capacity DIMMs usually costs 3-4 times more than buying the equivalent in the next size down, e.g. for a DL380 G7 (couldn't find prices for G8 RAM) the 32 GB DIMM is £1040, whereas the 16 GB DIMM is £185, so buying the 32 GB DIMM instead of 2 x 16 GB DIMMs ends up costing almost three times more.

- After RAM, disk is the biggest bottleneck. Since this is for home use, I recommend using SSDs and making sure all your VMs are thin-provisioned, meaning they only consume the space they actually need, with no wasted blank space - see the quick illustration just below.
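As a rough illustration with qcow2 (KVM's thin-by-default disk format, since that's one of the platforms you're considering; the paths here are made up):

```
# Create a 100 GB disk that starts tiny and only grows as the guest writes
qemu-img create -f qcow2 /var/lib/libvirt/images/vm1.qcow2 100G

# Compare the advertised virtual size with the actual space used on the host
qemu-img info /var/lib/libvirt/images/vm1.qcow2
du -h /var/lib/libvirt/images/vm1.qcow2
```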

The above points are another way of saying that rather than focusing on a single maxed out box, you should design the system so you can add additional servers as you need them.

I have a ProLiant ML110 G7 with a 256 GB Samsung 830 SSD, the whole thing idles at 30 W, has a really powerful Xeon E3-1220, and is nice and quiet (sitting under my desk). After cashback, it cost me £230 for the server, £145 for the SSD, and I already had 8 GB of RAM which is all it has for now. £375 total.

And then I just built a tiny, completely silent system in an Akasa Euler case: Intel DQ77KB motherboard, Xeon E3-1265L V2, 16 GB RAM (the max the motherboard will take) and a 160 GB Intel SSD (that I already had). The parts cost me £538. And to be honest, I got carried away with the Xeon; I could've saved £100 and gone for an i5, which, while less powerful, would've been more than adequate for the job.

So I've spent £913, have two extremely powerful systems that have a combined total of 416 GB of SSD storage, are quiet (one is completely silent), and consume very little power: the ML110 30W, and the silent one 11W.

Admittedly, I used a couple of parts I already had, and paying £230 for the ML110 was an outrageously good deal (that's about how much the processor costs alone). So if you have to buy something RIGHT NOW, then you may not be in the same position.
 
Thanks for the response rotor,

Processor is almost never the bottleneck in a virtual environment

This is understandable; most VMs will act as clients in the environment I plan to play around with, and when I'm not using them directly they are very likely to be idle. Please bear in mind, though, that I may want to do some computationally expensive tasks, as stated in my (very long, I admit...) original post - again, I know this sounds odd, but it's just me.
I realise the dual-socket mobo is expensive, but I feel I will future-proof myself by investing now.

Maxing out a server on memory is INCREDIBLY expensive. Going for the highest-capacity DIMMs usually costs 3-4 times more than buying the equivalent in the next size down

It certainly is. The price of RAM falls over time (albeit more slowly for these larger module sizes), and I don't want to be in the situation months down the line where I have filled all the memory banks with 8GB sticks and am now suffering from not having enough RAM (I know 64GB is more than enough, but I want to MAX out this puppy!) for the ton of VMs I want to mess with.
Again, linking back to my original post, I will initially buy just the one 16GB DDR3 stick, which is around £165 (painfully more than twice as much as 2 sticks of 8GB - I get this), and then over time buy another stick, and so on.


After RAM, disk is the biggest bottleneck

Of course. I plan to use a similar tactic to the RAM, buying drives over time rather than in one go (unless I want to take a mortgage out for it...). Once I hit 4 hard disks I plan to obtain a RAID controller, be it the proprietary ASUS PIKE card or another brand.
I need to ask for recommendations for a RAID 6 capable hardware RAID controller in another thread, methinks (again, I know these will cost in the vicinity of £300).
Understandably, at the beginning, with just the 1 HDD, I'm going to be grinding slowly if I try to load a fair few VMs; fortunately I have a few small HDDs lying around, so I can move VMs onto those until I can purchase more of my chosen HDD.
As great as SSDs are, I still prefer a larger amount of storage, as I am considering making this a hub for my family and friends to back up their photos and videos. (I know, another random thing for my server - sorry!)

I would consider 2 servers, but I feel I would hit a lower maximum in terms of CPU, RAM and HDD space, even with both combined (like your setup, for example). It's something I want to continue adding to over time - like a little project really :D
 
Thanks for the response rotor

It sounds like you have your heart set on doing it your way. However, I would still like to point out that:

## RAM

RAM does *not* get cheaper over time. As the standards change (DDR4 replaces DDR3, and so on), the manufacturers stop making the older type DIMMs, and so prices go up. Don't believe me? Compare prices between DDR2 and DDR3 for server memory (Registered, ECC):

- DDR2: £316 for 8 GB - http://www.crucial.com/uk/store/mpartspecs.aspx?mtbpoid=426C9F52A5CA7304
- DDR3: £69 for 8 GB - http://www.crucial.com/uk/store/mpartspecs.aspx?mtbpoid=C412F732A5CA7304

There will ALWAYS be a sweet spot for everything (processor, memory, storage), and that sweet spot is NEVER at the top end; it is always somewhere in the middle. So if filling up a server now with 8 GB DDR3 modules means that 18 months from now you need another box, which you will then fill with 8 GB DDR4 modules (or maybe 16 GB modules will be the sweet spot by then), then that's what you do. It will save you a ton of money, not only because you didn't pay through the roof up front for the most "future proof" box you could afford, but also because in the future, hardware will be faster, cheaper and consume less power, so buy it then, if and when you need it.

Paying £167 for a single 16 GB module right now is crazy. But also your prerogative. =)

## Disk

If you want large amounts of disk space, use SSD for the system disks of the VMs, and present the spinning disks as RDM (Raw Device Mapping) devices (or whatever KVM calls its equivalent, assuming it has one).
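(From a quick look, KVM's rough equivalent seems to be handing the raw block device to the guest via libvirt - a sketch, with placeholder names:)

```
# Attach an entire spinning disk to a guest as its own block device
# (vm1, /dev/sdb and the guest-side name vdb are placeholders)
virsh attach-disk vm1 /dev/sdb vdb --persistent
```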

## Moar power

In terms of one dual-processor machine being more powerful than two single-processor machines... how do you figure?
 
Hey rotor, I hope I haven't come across too aggressively in my stance on the build, as that is by no means my intention.
This build is still very much up in the air, and I am considering using a combination of SSDs and HDDs after rethinking the size necessary for my VMs, whilst still having the storage for things like photos and videos of family and friends.

RAM does *not* get cheaper over time. As the standards change (DDR4 replaces DDR3, and so on), the manufacturers stop making the older type DIMMs

Very true; however, I don't think I was very clear on my initial point regarding the price of RAM. I believe the drop in price will be very marginal (still worth a few beers though!), as I feel the price of DDR3 RAM has already fallen to a near "sweet spot" before the mainstream introduction of DDR4.
A quick search of "DDR3 DIMM" on http://uk.camelcamelcamel.com/ shows how DDR3 DIMM RAM has (on the whole) dropped in price over the years.

but also because in the future, hardware will be faster, cheaper and consume less power, so buy it then, if and when you need it.

I'm 100% with you that a 1 x 16GB DDR4 DIMM will eventually be cheaper; the question is how much cheaper it will be, and how long it will take for DDR4 to become the widely adopted standard. Once it hits that point, wouldn't it be worth waiting for DDR5? The cycle could just keep going, always waiting on the next revision of hardware.


If you want large amounts of disk space, use SSD for the system disks of the VMs, and present the spinning disks as RDM (Raw Device Mapping) devices (or whatever KVM calls its equivalent, assuming it has one).

My plan is to have 8 x 3TB HDDs for storage (with RAID 6 that's (8 - 2) x 3TB = 18TB usable), plus some other disks for my OS; initially I thought the VMs would live there too. Having 3 or 4 256GB SSDs would be more than sufficient for my VMs in terms of performance and storage (I'd be able to run more VMs as a result, vs my initial idea of standard HDDs), so it's definitely something to consider.

In terms of one dual-processor machine being more powerful than two single-processor machines... how do you figure?

I may have made too sweeping a statement, which has been interpreted in a way I didn't intend - sorry!
My comparison was with your system setup in particular. What I don't want to do is cobble together servers that each have some brunt but where one has more power than the other (not that there is anything wrong with that - each to their own), leaving me shuffling VMs between the two to get optimal performance. Dealing with two physical boxes is also just more hassle for me.
I want to make clear (after re-reading my post, I made it awfully difficult to interpret) that I realise you can make a setup where two single-processor machines house the same combined hardware as the dual-processor machine and thus give the same total performance. However, a separate case and PSU for each one can quickly mount up and cost the same as, if not more than, the dual-processor setup. Also, remember that if I want to utilise both processors simultaneously for a single job, i.e. kernel compilation, I would not be able to do so with two single-processor machines; the same goes for the RAM if I ever needed over 64GB in one place (e.g. lots of VMs running with only 4GB remaining on each box, but I need 8GB for one VM - I know it's picky, but unless the price is considerably less I don't feel the need to move).
Ignore where I said HDD space is less when combined - 2 servers would give me far more room for expanding storage; I clearly didn't think the scenario through well enough! Although hardware RAID for both boxes could be quite an expensive solution vs the single dual-processor setup.
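(Kernel compilation is the concrete case I have in mind: on one dual-socket box the build can simply use every core, whereas splitting a single build across two machines means setting up something like distcc.)

```
# On a single dual-socket box, a kernel build sees all 16 cores at once
make -j"$(nproc)"
```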
 