New Build - Hyper-V Server

Associate
Joined
13 Jun 2008
Posts
105
Hi All,

First post in a while, and I'm looking for advice from those in a similar situation. I used to build a lot of my own PCs but I'm out of practice, so all input would be greatly appreciated.

Summary

I'm looking to build a new PC which will run Windows Server 2008 R2 SP1 as a Hyper-V host, running lots of VMs. I'm just moving into an IT consultancy role and my current PC doesn't have the juice to handle 5+ VMs simultaneously plus everything else I'll be using. Please note - I'm looking to build a PC to run this lab, and I'm not convinced that buying a server is the right way to go. Happy to be talked around though!

My plan is to get a capably specced PC with plenty of RAM that can handle most of what I throw at it. Disk separation will be a smallish SSD for the OS and a couple of striped (RAID 0) disks for VHD storage, all on SATA 3 for increased speed. Quad-core CPU, and possibly a cheap and cheerful graphics card so I don't have to rely on the on-chip graphics.

Budget

Around £1,000. Any less is great, but this is the top limit.


Current thinking is as follows.... (In advance - any advice given is greatly appreciated!!) :)



CPU

Intel Core i5-2500K 3.30GHz (Sandybridge) Socket LGA1155 Processor - OEM

Link

Comments

I was thinking about the i7-2600K, but can't really justify the difference in cost for the extra threads. Overclocking potential on this chip is fantastic as well...



RAM

2 x Corsair Vengeance 16GB (2x8GB) DDR3 PC3-12800C10 1600MHz Dual Channel Kit

Link

Comments

Might start off with one of these kits and then purchase an additional 16GB further down the road. Those VMs are hungry for RAM!



Motherboard

Asrock Z68 Extreme4 Gen3 Intel Z68 (Socket 1155) DDR3 Motherboard

Link

Comments

This was a bit of a difficult decision. Ideally I wanted a board which would support 32GB of decent-speed RAM with SATA 3 functionality. Apparently only two of the SATA 6Gb/s ports on this board are any good (the Intel ones). USB 3.0 and Gen3 are nice.



OS Hard Drive

OCZ Agility 3 60GB 2.5" SATA-3 Solid State Hard Drive

Link

Comments

Might bump this up to a striped pair of 60GB drives, or a single 120GB. Only the OS will be installed on this, plus perhaps some smaller, more frequently used VMs.



Data Hard Drive

2 x Seagate Barracuda 7200RPM 1TB SATA 6Gb/s 64MB Cache - OEM

Link

Comments

Striped and connected via SATA 3 for increased speed. Not bothered about mirroring as I'll be making backups frequently.



CPU Cooler

Corsair A50 High-Performance CPU Cooler

Link

Comments

Good, cheap, efficient cooler.



Graphics Card

ATI Radeon HD 5450 SILENT 512MB GDDR3 PCI-Express Graphics Card

Link

Comments

Silent, does the job. I have no graphical requirement bar RemoteFX so this should suit fine.



Case

Antec 300 Three Hundred Ultimate Gaming Case - Black

Link

Comments

No idea on case. I recently purchased one of these for a colleague and thought it was very impressive. Open to suggestions!



PSU

OCZ ZS Series 550W '80 Plus Bronze' Power Supply

Link

Comments

No preference for PSU - cheap and cheerful. Not running a lot hardware-wise, so a lower-wattage unit is fine.



Rough cost for the above = £900



All input gratefully received!

Thanks in advance,

N31L777
 
Soldato
Joined
6 Sep 2008
Posts
3,974
Location
By the sea, West Sussex
I recently upgraded my test Hyper-V box to an i7 960 with 24GB RAM, and used the existing pair of 30GB SSDs (RAID 1) for the OS and a RAID 0 stripe of 4x 80GB VelociRaptors for the VMs' VHDs.

Currently running a FreeBSD-based router (pfSense), 3x 2003 Server, 5x 2008 R2 Core servers, 2x 2008 Full servers and a test Windows 7 client.

FreeBSD and the Core machines have 1GB of RAM each, the Full and 2003 servers have 2GB each, and the client has 4GB. Granted, none of them are taxed very hard, but this machine barely breaks a sweat.
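For what it's worth, here's the rough RAM tally (a quick Python back-of-the-envelope using just the figures above - the leftover is what the parent partition gets):

```python
# Rough RAM budget for the guests listed above (all figures in GB).
guests = {
    "pfSense router":    1 * 1,
    "2003 Server x3":    3 * 2,
    "2008 R2 Core x5":   5 * 1,
    "2008 Full x2":      2 * 2,
    "Windows 7 client":  1 * 4,
}

host_ram = 24
allocated = sum(guests.values())
print(f"Allocated to VMs: {allocated} GB")                           # 20 GB
print(f"Left for the parent partition: {host_ram - allocated} GB")   # 4 GB
```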
 
Associate
OP
Joined
13 Jun 2008
Posts
105
Would an 8-core AMD Bulldozer chip be a better match for a system such as this?

Perhaps. I have looked at both Intel and AMD offerings. The 4 extra cores would be nice for mapping to the underlying virtual machines, à la a 1:1 ratio, but the cores themselves are slower.

Hyper-Threading isn't really a deal-breaker as Hyper-V doesn't particularly utilize it, so from that perspective I'm open to discussion.
 
Associate
OP
Joined
13 Jun 2008
Posts
105
I recently upgraded my test Hyper-V box to an i7 960 with 24GB RAM, and used the existing pair of 30GB SSDs (RAID 1) for the OS and a RAID 0 stripe of 4x 80GB VelociRaptors for the VMs' VHDs.

Currently running a FreeBSD-based router (pfSense), 3x 2003 Server, 5x 2008 R2 Core servers, 2x 2008 Full servers and a test Windows 7 client.

FreeBSD and the Core machines have 1GB of RAM each, the Full and 2003 servers have 2GB each, and the client has 4GB. Granted, none of them are taxed very hard, but this machine barely breaks a sweat.

Thanks for the info, Pete. Would you mind giving me the full specs of your Hyper-V box?
 
Soldato
Joined
9 Oct 2008
Posts
2,993
Location
London, England
The specification you listed doesn't have any flaws as far as I'm concerned; it seems like a reasonable and logical choice of components given your budget. From my (admittedly somewhat limited) experience, disk speed has the largest effect on VM performance. I noticed almost no improvement in the performance of my virtual machines when I upgraded (from a QX6700 with 16GB of DDR2 RAM to an i7-3960X with 32GB of DDR3 RAM) because my disk subsystem was crap. I had just a single 7200RPM drive with all of my VMs on it, and it was a massive bottleneck. Going to an SSD (specifically a Crucial M4) had a dramatic effect on the performance of my VMs.

Of course, if you've got 600GB worth of VMs to run then I expect SSDs would be cost-prohibitive. If you have less than 250GB then an SSD may be feasible.
 
Associate
OP
Joined
13 Jun 2008
Posts
105
The specification you listed doesn't have any flaws as far as I'm concerned; it seems like a reasonable and logical choice of components given your budget. From my (admittedly somewhat limited) experience, disk speed has the largest effect on VM performance. I noticed almost no improvement in the performance of my virtual machines when I upgraded (from a QX6700 with 16GB of DDR2 RAM to an i7-3960X with 32GB of DDR3 RAM) because my disk subsystem was crap. I had just a single 7200RPM drive with all of my VMs on it, and it was a massive bottleneck. Going to an SSD (specifically a Crucial M4) had a dramatic effect on the performance of my VMs.

Of course, if you've got 600GB worth of VMs to run then I expect SSDs would be cost-prohibitive. If you have less than 250GB then an SSD may be feasible.

After a bit more reading I'm swaying towards a slightly altered build.

Namely:-

  • Scrap the i5 2500K for an AMD Bulldozer FX-8 - reason being more physical cores. This PC is only going to be used for lab work, so the i5's better single-threaded performance isn't important, and neither is losing Hyper-Threading by going AMD.

  • Switch the Intel board for an AMD one - more SATA III ports + lower cost = win.

  • Only get 16GB of RAM - I can always purchase another kit later.

  • Dump the large 1TB disks and the smaller OS SSD, and just get a single slightly larger SSD (240/256GB). The VMs should all still fit as long as I'm careful; perhaps use linked clones for some VMs.
Thoughts please guys?

Thanks in advance.

N31L777
 
Associate
Joined
28 Oct 2002
Posts
1,819
Location
SE London
Also, scrap the graphics card and find a board with onboard graphics. It doesn't have to be a 9xx chipset to support FX processors; get an 8xx board with onboard graphics. You're not going to be doing anything on it apart from looking at a console screen.

Go for smaller, faster disks in an array. A Win2k8 R2 install is quite large on its own, and then you add app installs, dummy data, etc. on top of that.

If you think about it, for an Exchange lab setup you'd need 1 DC, 2x Exchange, 2x CAS and a Windows 7 client. Add to your dummy domain, say, SQL, SCCM and SCOM, and that lot is taking up 20GB+ off the bat before any data.
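Rough tally to illustrate (a quick Python sketch - the per-VM figure is my guess at what a 2008 R2 / Win7 install ends up using on a dynamically expanding VHD, not a measurement):

```python
# Very rough VHD footprint for the Exchange lab above (counts of VMs per role).
vm_counts = {
    "Domain controller": 1,
    "Exchange": 2,
    "CAS": 2,
    "Windows 7 client": 1,
    "SQL": 1,
    "SCCM": 1,
    "SCOM": 1,
}
assumed_gb_per_vm = 10   # assumption: OS install footprint per VM, before app data

total_vms = sum(vm_counts.values())
print(f"{total_vms} VMs x ~{assumed_gb_per_vm}GB = "
      f"~{total_vms * assumed_gb_per_vm}GB before any data")
```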
 
Soldato
Joined
9 Oct 2008
Posts
2,993
Location
London, England
  • Scrap the i5 2500K for an AMD Bulldozer FX-8 - reason being more physical cores. This PC is only going to be used for lab work, so the i5's better single-threaded performance isn't important, and neither is losing Hyper-Threading by going AMD.

  • Switch the Intel board for an AMD one - more SATA III ports + lower cost = win.

  • Only get 16GB of RAM - I can always purchase another kit later.

  • Dump the large 1TB disks and the smaller OS SSD, and just get a single slightly larger SSD (240/256GB). The VMs should all still fit as long as I'm careful; perhaps use linked clones for some VMs.
What sort of workload are you going to be running in your virtual environment? Are you setting this up purely for testing things out, or will some/all of this be for production purposes? The changes you've suggested seem reasonable, but it does depend on what you'll be doing. If you're going to be testing Exchange 2010, for example, you may find 16GB a little restrictive depending on what you've got running. I guess I'm trying not to come across as patronising, as you seem to know pretty much what you're doing anyway :)

If RemoteFX is a requirement then I don't think you can go for onboard graphics; I can't find any specific information on this though.
 
Associate
Joined
4 Aug 2008
Posts
1,778
Location
Waterlooville
For £900 surely you could buy an actual server - probably second-hand, but a dual-socket workstation should be feasible.

I don't mean to come across wrong, but why are you guys RAIDing up OS drives for Hyper-V, and why are you using SSDs? If you are going to use 2x SATA 6Gb/s drives for data, why not load the OS onto them too? The Hyper-V host doesn't need a disk as capable as an SSD - it's just a waste, and it certainly doesn't need RAIDed SSDs.

I also doubt you'll be loading it enough to need SSD RAID volumes for fast client OS drives - just put more disks into an array. If you had 4x 1TB drives in RAID 10 you would see substantial speed plus redundancy, and with the ICH10R capable of over 700MB/s across 3 disks, that is clearly more than you could need.

Also, OP, why are you looking at K-series chips? Are you really going to overclock your virtual environment? Why not just put a standard i5 chip in?

Having managed Hyper-V production environments, I would be surprised if you could generate enough load to warrant the extra CPU cost.

Just my 2p worth
 
Soldato
Joined
9 Oct 2008
Posts
2,993
Location
London, England
For £900 surely you could buy an actual server - probably second-hand, but a dual-socket workstation should be feasible.

Also, OP, why are you looking at K-series chips? Are you really going to overclock your virtual environment? Why not just put a standard i5 chip in?

Having managed Hyper-V production environments, I would be surprised if you could generate enough load to warrant the extra CPU cost.
I'm afraid I don't follow your logic. You suggest that the OP get proper server level kit with dual processors etc and all of the noise and power issues that go with it, then you go on to state that he won't be generating enough load for a single i5 processor. Surely you can't mean both?

I don't mean to come across wrong, but why are you guys RAIDing up OS drives for Hyper-V, and why are you using SSDs? If you are going to use 2x SATA 6Gb/s drives for data, why not load the OS onto them too? The Hyper-V host doesn't need a disk as capable as an SSD - it's just a waste, and it certainly doesn't need RAIDed SSDs.
You use the word "require" as though you absolutely must be pushing a certain number of IOPS before you can go for an SSD. Of course I don't "require" an SSD for running a handful of virtual machines in a test environment. Of course I could get away with using a 5400RPM laptop drive; it would work, after all. I chose to use an SSD over a mechanical drive because I felt it was a worthwhile trade-off - I value my time above the cost of a relatively cheap component. The less time I have to spend sitting around waiting for processes to finish, the more time I can spend testing or learning about a product.

Your points are valid for a production environment, but for a test lab environment I'm afraid I disagree. The usage patterns and goals are entirely different.
 
Associate
Joined
4 Aug 2008
Posts
1,778
Location
Waterlooville
Saundie, by the £900 statement I was referring to cost vs. the test environment - i.e. what you could buy for that money which would potentially be better geared to the workload and have additional useful features.

And as for the CPU, I was arguing the point of a K-series overclocking chip vs. a comparable standard i5, which is reasonably cheaper.

Granted, I understand that your time is worth more, hence your justification for SSDs. My argument was simply to buy 4 HDDs rather than 2 HDDs and 2 SSDs; since the OP was debating whether to use more or fewer SSDs, I figured it was worth pointing out that an SSD in this instance might not see great gains.
 
Soldato
Joined
9 Oct 2008
Posts
2,993
Location
London, England
Okay, I see what you were trying to say about the processor - please don't take this as an insult, but what you wrote was very difficult to decipher, and obviously I misunderstood the point you were trying to get across.
 
Associate
OP
Joined
13 Jun 2008
Posts
105
What sort of workload are you going to be running in your virtual environment? Are you setting this up purely for testing things out, or will some/all of this be for production purposes? The changes you've suggested seem reasonable, but it does depend on what you'll be doing. If you're going to be testing Exchange 2010, for example, you may find 16GB a little restrictive depending on what you've got running. I guess I'm trying not to come across as patronising, as you seem to know pretty much what you're doing anyway :)

If RemoteFX is a requirement then I don't think you can go for onboard graphics; I can't find any specific information on this though.

The VM lab work will differ depending on what I'm testing. The initial workload will be geared around SCCM 2012 RC, so at the very least 1x DC, 1x Win7 client and maybe 3x 2008 R2 VMs. But this could increase. The lab will function as a test-bed to play around with and get used to new technologies.

Ideally I'll go with the 32GB of RAM as I know this will be key. Cost is a bit of a pain though as you creep up the size of the individual DIMMs (e.g. 16GB as 4x4GB is ~£70, whereas 2x8GB is more like £120+).
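Rough sums behind that (quick Python; prices are the ballpark figures above, and the four DIMM slots are an assumption about whichever board I end up with):

```python
# Getting to 32GB on a board with four DIMM slots (rough GBP prices from above).
slots = 4                                                 # assumed DIMM slot count
kit_4gb = {"price_per_16gb": 70,  "dimms_per_16gb": 4}    # 4x4GB kit ~ £70
kit_8gb = {"price_per_16gb": 120, "dimms_per_16gb": 2}    # 2x8GB kit ~ £120+

for name, kit in (("4GB DIMMs", kit_4gb), ("8GB DIMMs", kit_8gb)):
    dimms = kit["dimms_per_16gb"] * 2                     # two 16GB kits for 32GB
    cost = kit["price_per_16gb"] * 2
    note = "fits" if dimms <= slots else "too many sticks for the board"
    print(f"{name}: 32GB = {dimms} DIMMs, ~£{cost} ({note})")
```

So 32GB of the cheaper sticks simply won't fit in four slots, which is why the 8GB DIMMs sting.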

As for RemoteFX, a cheap and cheerful card will do the job, so I'll stick with a dedicated card.

For £900 surely you could buy an actual server - probably second-hand, but a dual-socket workstation should be feasible.

I don't mean to come across wrong, but why are you guys RAIDing up OS drives for Hyper-V, and why are you using SSDs? If you are going to use 2x SATA 6Gb/s drives for data, why not load the OS onto them too? The Hyper-V host doesn't need a disk as capable as an SSD - it's just a waste, and it certainly doesn't need RAIDed SSDs.

I also doubt you'll be loading it enough to need SSD RAID volumes for fast client OS drives - just put more disks into an array. If you had 4x 1TB drives in RAID 10 you would see substantial speed plus redundancy, and with the ICH10R capable of over 700MB/s across 3 disks, that is clearly more than you could need.

Also, OP, why are you looking at K-series chips? Are you really going to overclock your virtual environment? Why not just put a standard i5 chip in?

Having managed Hyper-V production environments, I would be surprised if you could generate enough load to warrant the extra CPU cost.

Just my 2p worth

Not really interested in getting a second-hand server. A decent-spec desktop PC built for the purpose is the ultimate aim. I just don't need some of the features that would come with that approach (dual PSUs, SAS disks, etc.). It is only going to be a lab, not production.

No mention of RAIDing SSDs - that would be overkill. I currently have a second PC which I use for a lot of my lab work, but it's slow: only a dual-core CPU, 8GB of slower RAM and 7200RPM disks. I want this to be a nippy, dedicated lab machine...

Here's my logic for the disk proposal, and for moving away from a small SSD for the OS plus two larger, slower striped discs for storage:


If I go RAID 0 or RAID 10 with some fast SATA-3 1TB discs then that will cost me between £200 and £400 on storage alone (assuming £90-£100 per 1TB disc).
I may as well bite the bullet, get a larger SSD to stick everything on (costing me ~£250 for ~250GB) and reap the rewards of increased speed.

I've gone for the 250GB size as 120GB might be pushing it space-wise.

Again, I'm not that fussed about redundancy; I'll back up separately. I just want the lab to be quick and efficient.
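For anyone weighing up the same options, here are the rough sums (Python again; disk prices are the figures above, and the host/per-VM footprints are only assumptions):

```python
# Striped 1TB HDDs vs one larger SSD holding everything (prices in GBP).
hdd_price = 95                  # ~£90-£100 per fast 1TB SATA-3 disc
ssd_price, ssd_gb = 250, 240    # ~£250 for a 240/256GB SSD

print(f"RAID 0, 2x 1TB:  ~£{2 * hdd_price}")
print(f"RAID 10, 4x 1TB: ~£{4 * hdd_price}")
print(f"Single SSD:      ~£{ssd_price} for {ssd_gb}GB")

# Rough fit check - host and per-VM figures are assumptions, not measurements.
host_os_gb = 20      # 2008 R2 host install plus page file
per_vm_gb = 15       # average lab VM with a bit of dummy data
print(f"Roughly {(ssd_gb - host_os_gb) // per_vm_gb} VMs of ~{per_vm_gb}GB "
      f"fit alongside the host OS")
```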

I appreciate the comments and discussions thus far, it's all helpful!

Thanks,

N31L777
 
Associate
Joined
31 May 2005
Posts
2,059
Location
Alfreton,Derbyshire
Will be following this thread as I'm looking to build something similar with Hyper-V. Have you made a decision on the CPU yet? I will also be using the PC for other stuff, so I'll probably end up dual-booting 2008 R2 and Windows 7.
 
Associate
OP
Joined
13 Jun 2008
Posts
105
Will be following this thread as I'm looking to build something similar with Hyper-V. Have you made a decision on the CPU yet? I will also be using the PC for other stuff, so I'll probably end up dual-booting 2008 R2 and Windows 7.

Apologies for the late reply - I've been very busy at work.

Not entirely sure, but I think I'm going down the AMD route. If I wanted a machine with better single-threaded performance then I'd grab the Intel, but I don't.

The box is purely going to be a test bed for VMs, and although I haven't seen many reviews that specifically focus on this with the Bulldozer chips, more physical cores should mean better virtualisation.

Payday is Tuesday, so it'll all be ordered then. Thankfully the OCUK sale has some of the items reduced! Woop :)
 
Associate
OP
Joined
13 Jun 2008
Posts
105
Hmm. Just looking at motherboards - the change in chip (from Intel i5 2500K to AMD FX-8120/8150) obviously means a change of motherboard too.

Looking at the specifications on the Asus website, it says the board only supports 4GB DIMMs.

Can anyone shed some light on this please? I'm looking to purchase 2x Corsair Vengeance 8GB (1x8GB) DDR3 PC3-12800C10 1600MHz Single Channel Module (CMZ8GX3M1A1600C10)

Link

It helps that it's on sale too :p

Any advice greatly appreciated. :)
 
Associate
Joined
7 Feb 2011
Posts
300
Location
London
Subscribed to this thread - you've got me interested. I am looking to build a home server/lab box but I am not sure whether I should re-use my old components (based around the LGA775 socket) or build a new one. The limit with 775 is the RAM, as I would only have 6GB available, and the cost of DDR2 is high compared to DDR3.
 
Soldato
Joined
6 Sep 2008
Posts
3,974
Location
By the sea, West Sussex
Subscribed to this thread - you've got me interested. I am looking to build a home server/lab box but I am not sure whether I should re-use my old components (based around the LGA775 socket) or build a new one. The limit with 775 is the RAM, as I would only have 6GB available, and the cost of DDR2 is high compared to DDR3.

This is EXACTLY why I upgraded. I could have happily stayed with the Q8200 - it was the 8GB of RAM that was killing me, and 16GB of DDR2 is rare and pricey.
 