ESXi Whitebox VS Bought server, and other QNs

Associate · Joined: 9 Nov 2005 · Posts: 767 · Location: places..
Hey Peeps,

I will soon be stepping up a gear with my Microsoft qualifications. Armed with a DreamSpark subscription (and possibly TechNet soon), I am going to get as close as I can to a self-contained production environment. This will be an ESXi-based server, running as many VMs as is practical. The budget is £1,000 (no more please).

So far I've managed to spec up a Dell PowerEdge T300 to a reasonable spec, but I can't help thinking that I would get more out of a whitebox, like an SSD or a SAS card (or both! The disk system is going to be thrashed!). Whilst I'm not going to need all the VMs to run Crysis, I would want reasonable speed with several VMs running concurrently: Server 2k3 & 2k8, a DC (or two), a file server, a WSUS server, etc., along with client VMs (XP, Vista and 7). RAID isn't a must, but I would much rather have redundancy as well as the speed benefits of having the datastore on a RAID array (and why not, with 500GB HDDs barely £40!).

The major drawback with a whitebox is ESXi compatibility; I don't want to be doing too much fiddling with the “oem.tgz” file. Although I don't mind doing a bit, as I am Linux proficient (enough :P)

The only stipulations for the server are as follows:
A) that it has a Quad core CPU, 2.5GHz or above would be nice,
B) as much ram as possible 12GB would be nice, but perhaps 8GB.
C) It has a decent storage system, i.e. doesn't get bogged down with 5-6+ VMs
D) it's less than £1,000 (possibly a tad more, but only if it's really worth it),
E) it works easily enough with ESXi 4.0, perhaps a couple of mods, but I'm not going to spend days compiling or anything.

So should I go for a whitebox, or a Dell, or possibly (read: preferably) an HP? How many VMs will I be able to run simultaneously (at a speed high enough to use them easily via RDP/TS)?
Will I really need more than 8GB of RAM? (probably!)
Is it all that much trouble to change around the OEM.TGZ file?

If I can get help from someone who has done this, and/or has greater experience running ESXi in a production environment, or is just a pro at speccing up a computer for ESXi, it would be greatly appreciated. Cookies are available.

Cheers V. much!
 
If you're going for Microsoft Certification, might it not be a good idea to go down the Hyper-V route instead of ESXi? Hardware compatibility is much less of a concern that way, and becoming acquainted with Hyper-V will put you in a good position to pass exam 70-652.
 
We picked up an HP ML110 (dual-core Xeon), slapped 8GB of RAM into it along with ESXi 4, and it's currently running 10 VMs without even breaking a sweat. I think the total cost was around £350.
 
If you're going for Microsoft Certification, might it not be a good idea to go down the Hyper-V route instead of ESXi? Hardware compatibility is much less of a concern that way, and becoming acquainted with Hyper-V will put you in a good position to pass exam 70-652.

Good point! I may consider that.

I was going for ESXi as it is used more; also, I'm not going for the 70-652. 70-649 is the only one of the '08 lot that I will be doing (apart from perhaps 70-680 or 70-260). Personally I would rather get the practice in and be a VCP in the future than do the 652.

Additionally, won't my DreamSpark subscription expire? ESXi is free!

I was more worried about the hardware.
 
We picked up an HP ML110 (dual-core Xeon), slapped 8GB of RAM into it along with ESXi 4, and it's currently running 10 VMs without even breaking a sweat. I think the total cost was around £350.

Hmm, that has certainly made me reconsider the hardware needs of the box. What speed is the CPU? Most of the ML110s at sub-£400 are 1.8GHz. I know that ESXi is going to be fussier about memory. Any RAID or SAS card?

What OSes are you running? Less than 800MB each can't be all that great with a 1.8/2GHz CPU split so many ways!?
 
I think ours is 2.5GHz but cost us £150; the ML11x series were stupidly cheap for ages and ages before the price shot up.

About half run CentOS 5 and the other half Win XP Pro; RAM is allocated between 256MB and 1.5GB depending on the load and use of the VM. Load is very low, and for a test lab you'll never think it's underpowered.

CPU is dynamic, so one VM can use all of the CPU when it needs it (you can also set limits); same with the RAM. So it's not a hard split between them, but there's more than enough RAM for us to statically allocate it all.
 
I'm using an Asus P5Q-VM, 16GB of PC2-5300 and a dual-core E8400.
VMs are hosted on 2 dedicated SATA hard disks, with 6 additional 1TB SATA disks providing storage to one of the VMs.

I'm using ESXi 4.
The only other hardware I needed to buy was a new network card, as the onboard NIC was not detected.


Speed-wise it runs fine, but I could do with faster disk drives for the VMs. I have three live VMs: 2 Win2k8 and 1 Win2k3. I'm looking into SSDs, as the response should be much better, if they work.
 
We picked up an HP ML110 (dual-core Xeon), slapped 8GB of RAM into it along with ESXi 4, and it's currently running 10 VMs without even breaking a sweat. I think the total cost was around £350.

Sorry, OT but...
What drives? 10 simultaneous??
I love the ML110, but we've got its ugly sister here (the 115) and it barely runs one instance of SBS08, native, without Outlook whinging: "Outlook is trying to retrieve data..."
 
Yeah, I too was a bit sceptical of that. 10 VMs? Including server VMs? I reckon it's possible to be running about 10 copies of XP there; 800MB of RAM would be fine and the CPU may be a bit slow, but the HDD would be going mental!

Hence me wanting something with 12GB of memory and possibly an SSD and a RAID config!
 
RAID is less essential with SSD, IMO.
You might get away with a bit less RAM too with memory overcommit, but if 12GB is pretty cheap then go for it :)
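To put rough numbers on the "will 8GB be enough?" question, you can tot up planned per-VM allocations against physical RAM. The per-VM figures below are illustrative guesses based on the VMs mentioned in this thread, not recommendations or measurements:

```python
# Rough RAM budget for the planned lab.
# All per-VM allocations (MB) are illustrative guesses.
planned_vms = {
    "DC1 (Server 2k8)": 1024,
    "DC2 (Server 2k3)": 768,
    "File server": 768,
    "WSUS": 1024,
    "XP client": 512,
    "Vista client": 1024,
    "Win7 client": 1024,
}

def overcommit_ratio(vms, physical_mb):
    """Return (total allocated MB, allocated/physical ratio).

    A ratio above 1.0 means you are relying on ESXi's memory
    overcommit (ballooning/sharing) to make the VMs fit.
    """
    total = sum(vms.values())
    return total, total / physical_mb

total, ratio = overcommit_ratio(planned_vms, 8 * 1024)
print(f"Allocated: {total} MB, ratio vs 8GB: {ratio:.2f}x")
# → Allocated: 6144 MB, ratio vs 8GB: 0.75x
```

On these (guessed) allocations the whole server/client set fits in 8GB without any overcommit at all; 12GB only starts to matter once you add more clients or fatten up the server VMs.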
 
The question of how many hosts you're going to be running inside an ESXi box is pretty much "how long is a piece of string": if they're doing very little, you're going to be able to load them up. For your budget, and given it's for training/learning purposes, I'd definitely go with a branded server (which is on the VMware HCL), and you can't really go wrong with an ML115, or two of them, and still come under your budget. If not, then just read the HCL and find the most cost-effective server up from an ML115, which I believe is the best budget offering for running ESXi. Don't forget that you can still put SSDs into this machine, and also a proper RAID card.

For comparison (again, YMMV), my ML115 is running Win7, WinXP, WinSvr2k8 RC2, WinSvr2k3, Openfiler and OpenNAS at once without bogging down too badly. Now I'm not saying I'm caning each machine, but for playing about and working through configs and setups it works great. Bear in mind I've got 4 disks in the machine, and the VMs are distributed over those 4 disks, which means I don't get huge amounts of disk contention.
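The distribute-over-spindles trick is really just round-robin placement. A quick sketch of the idea (VM names taken from the post above; disk labels are made up):

```python
def assign_round_robin(vms, disks):
    """Spread VM datastores across physical disks so that
    concurrent VM I/O lands on different spindles."""
    placement = {d: [] for d in disks}
    for i, vm in enumerate(vms):
        placement[disks[i % len(disks)]].append(vm)
    return placement

vms = ["Win7", "WinXP", "WinSvr2k8", "WinSvr2k3", "Openfiler", "OpenNAS"]
disks = ["disk0", "disk1", "disk2", "disk3"]
print(assign_round_robin(vms, disks))
# disk0 carries Win7 and Openfiler, disk1 WinXP and OpenNAS,
# and disk2/disk3 get one server VM each.
```

With 6 VMs over 4 disks, no spindle carries more than two VMs, which is why the contention stays manageable.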

If I was you, and to make your setup a little more "production-like", I'd get two machines. Put a bunch of VMs on the first as normal, then set up the second machine as a SAN using Openfiler, and use this as your datastore. You could then run all your machines off this SAN. It would give you more scope in testing to include basic SAN technology, e.g. iSCSI etc.

(FYI, I've actually got a SAN-mounted OS running on an iSCSI datastore which is hosted inside ESXi itself, all in the same physical box!)
 
Hey Peeps,

I will soon be stepping up a gear with my Microsoft qualifications. Armed with a DreamSpark subscription (and possibly TechNet soon), I am going to get as close as I can to a self-contained production environment. This will be an ESXi-based server, running as many VMs as is practical. The budget is £1,000 (no more please).

So far I've managed to spec up a Dell PowerEdge T300 to a reasonable spec, but I can't help thinking that I would get more out of a whitebox, like an SSD or a SAS card (or both! The disk system is going to be thrashed!). Whilst I'm not going to need all the VMs to run Crysis, I would want reasonable speed with several VMs running concurrently: Server 2k3 & 2k8, a DC (or two), a file server, a WSUS server, etc., along with client VMs (XP, Vista and 7). RAID isn't a must, but I would much rather have redundancy as well as the speed benefits of having the datastore on a RAID array (and why not, with 500GB HDDs barely £40!).

The major drawback with a whitebox is ESXi compatibility; I don't want to be doing too much fiddling with the “oem.tgz” file. Although I don't mind doing a bit, as I am Linux proficient (enough :P)

The only stipulations for the server are as follows:
A) that it has a Quad core CPU, 2.5GHz or above would be nice,
B) as much ram as possible 12GB would be nice, but perhaps 8GB.
C) It has a decent storage system, i.e. doesn't get bogged down with 5-6+ VMs
D) it's less than £1,000 (possibly a tad more, but only if it's really worth it),
E) it works easily enough with ESXi 4.0, perhaps a couple of mods, but I'm not going to spend days compiling or anything.

So should I go for a whitebox, or a Dell, or possibly (read: preferably) an HP? How many VMs will I be able to run simultaneously (at a speed high enough to use them easily via RDP/TS)?
Will I really need more than 8GB of RAM? (probably!)
Is it all that much trouble to change around the OEM.TGZ file?

If I can get help from someone who has done this, and/or has greater experience running ESXi in a production environment, or is just a pro at speccing up a computer for ESXi, it would be greatly appreciated. Cookies are available.

Cheers V. much!

OEM.TGZ is a breeze to work with although it is worth noting that the file locations have changed between 3.5 and 4 - watch out for that when reading around the web for solutions. Editing the two files required to get your "unsupported" devices working is alright though, providing there are drivers out there that have been compiled using the correct headers for the device.
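For what it's worth, the repack step itself is trivial: oem.tgz is just a gzipped tar, so once you've dropped your driver module and edited map files into a working directory, any tar tool can rebuild it. A minimal sketch in Python (the file names `mydriver.o` and `simple.map` are placeholders, and as noted above the expected locations differ between 3.5 and 4):

```python
import os
import tarfile
import tempfile

def repack_oem(workdir, out_path):
    """Pack the contents of workdir (driver modules, map files, etc.)
    into a gzipped tar suitable for use as an oem.tgz."""
    with tarfile.open(out_path, "w:gz") as tgz:
        for name in sorted(os.listdir(workdir)):
            tgz.add(os.path.join(workdir, name), arcname=name)

# Demo with placeholder files standing in for a real driver and map file.
work = tempfile.mkdtemp()
for fake in ("mydriver.o", "simple.map"):
    with open(os.path.join(work, fake), "w") as f:
        f.write("placeholder\n")

# Write the archive outside the working dir so it doesn't pack itself.
out = os.path.join(tempfile.mkdtemp(), "oem.tgz")
repack_oem(work, out)

with tarfile.open(out) as t:
    print(sorted(t.getnames()))  # → ['mydriver.o', 'simple.map']
```

The fiddly part is, as the post says, finding driver binaries built against the right headers, not the packing itself.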

In a development environment, CPU power DOES NOT MATTER! Don't get hung up on getting fast CPUs at the expense of other more vital components. RAM and disk I/O are VASTLY more important, as you are unlikely to stress a CPU doing stuff in this kind of environment but you are very likely to cram as many VMs as you can onto it!

Therefore go with RAID1 or, if you can afford it, RAID10. SATA is fine in this instance, as SAS is going to blow your budget. A good controller is more important than fast disks.
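The capacity trade-off between the mirrored levels is easy to check; a quick sketch using the £40/500GB drives mentioned earlier in the thread:

```python
def raid_usable_gb(level, disks, disk_gb):
    """Usable capacity for common RAID levels.

    Mirroring (RAID1/RAID10) halves raw capacity; striping
    (RAID0) uses it all but gives no redundancy.
    """
    if level == "RAID1":
        assert disks == 2, "RAID1 is a two-disk mirror"
        return disk_gb
    if level == "RAID10":
        assert disks >= 4 and disks % 2 == 0, "RAID10 needs pairs"
        return (disks // 2) * disk_gb
    if level == "RAID0":
        return disks * disk_gb
    raise ValueError(f"unhandled level: {level}")

print(raid_usable_gb("RAID1", 2, 500))   # → 500
print(raid_usable_gb("RAID10", 4, 500))  # → 1000
```

So four of the 500GB drives in RAID10 gets you 1TB usable with redundancy, roughly £160 of disks, plus the extra read spindles that help with VM contention.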

RAM is a case of as much of the fastest RAM as you can afford. If you are buying an off-the-shelf server, it is likely to want 667 or 800MHz FB-DIMMs, which are hella pricey (although not as expensive as speccing it up with the server, e.g. on the Dell site!!). Newer Nehalem-based Xeons need DDR3, so unless you are getting a cracking deal you are wasting your money (both on the RAM and the CPU).

FWIW, my "dev box" in the office is a measly dual-core Xeon with 2x1TB drives on a PERC 6/i and 8GB of RAM, and it's fine for the 6+ VMs I run on it. One is serving up a demo application which is SQL-based; the rest of them are currently replicating an entire client environment for testing purposes.
 