Virtualization: Best Windows-based (VirtualBox, VMware, etc.)

Hi,

I am looking to turn my NAS into a virtualization server.

Seems like I need to have the base OS as some flavour of Windows (7 HP/Pro), as there is a bug in the current Linux kernels that affects my hardware and kills the IRQ for my network card (Bug). Shame, as I have a bit more experience of running OpenVM / Proxmox / KVM.

I am planning to run, virtualized:
WHS2011 (NAS).
DHCP (possibly inside the WHS2011 VM or as a standalone 'appliance').
WHS2011 (WSUS - a Windows update server to get all the updates and make them available for my 3 PCs).
Linux (CentOS probably - a Minecraft server, or two or three).

The WHS NAS and WSUS are separate as WSUS adds lots of network shares I do not seem to be able to hide from the media players. DHCP can go on either of the WHS2011 installs or as an 'appliance'. Minecraft goes on Linux, which will be the main resource hog I would imagine.

One reason for doing this is that WHS2011 is limited to using only 8GB of RAM, and I have 12GB installed and can go up to 16GB quite easily.

So:
1. What is the best Windows-based virtualization software that is free and likely to perform well?
2. What is the best Win7 version to install it on (not going to stretch to WS2008r2 for a home setup)?

Many thanks
RB
 
Version of Win 7? I think you'd have to go 64-bit to overcome the 32-bit memory limits.
http://msdn.microsoft.com/en-us/library/aa366778(v=vs.85).aspx#physical_memory_limits_windows_7

Yeah, sorry, it was more a question of whether any Win7 version would be better than any other for this application (running a VM suite), or if I may as well stick with Win7 Home Premium. I would like WS2008r2, but seriously, 800+ USD for a home install is well over the top for what I want. It is a real shame that WHS is limited to 8GB or that would have been OK. Of course, if my Linux hardware issue gets sorted then CentOS etc. would be much more resource-friendly.

Thanks
RB
 
...VMware all the way, no question. I think people use virtualbox in the same way that some people use weird and crappy versions of Linux instead of Windows. They like to be "cool" and different to the norm.

They perhaps might be autistic also, possibly in conjunction with being socially phobic.

Thank you DR Pain, that brought a smile to my face :D

Most VMware users I have met, however, have been quite normal.

:)

Of course I would presume you are using yourself as a benchmark for normal ;).

RB
 
To me, it sounds like you should use either

http://www.vmware.com/products/server/overview.html

or

http://www.microsoft.com/hyper-v-server/en/us/default.aspx

Both are free. If you're feeling brave then go ESXi, but without Virtual Center and other supporting kit it's not brilliant for labs, although it is the best pure hypervisor.

Fantastic, either of those looks like exactly what I need. Do you know of any comparisons between the two (yes, I am also going to Google :))? I am a bit concerned that the VMware comparison link mentions VMware Server was announced EOL in Jan 2010...

I would like to know which has the bigger footprint on the server, VMware Server or MS Hyper-V. I think I recall playing with ESXi when I was looking at virtualization a year or so back and just didn't get my head round it, but then I didn't spend too much time on it.

Cheers
RB
 
Install a copy of the VMware ESXi hypervisor and run all your VMs from there. There is no requirement for a host OS at all, which is what you seem to be asking about with your Win7 question.

ESXi 4.1 is free for use without Virtual Center or any of the advanced features and has the smallest footprint. I have it installed on a memory stick which the box boots from, allowing access to all internal disk space as datastores. ESX Server has been announced as EOL as they are moving to a single model with ESXi.

vSphere 5 is due soon, but 4.1 is more than good enough for a home install.

Yep, definitely looking for a virtualization-OS type arrangement if possible. What is ESXi running on under the skin? I read that ESX is (/was) using RHEL (Red Hat). If this is the case then the Linux kernel issue will most likely prevent me from using it. I know Win Server 2008r2 doesn't have the problem.

RB
 
ESXi is a bare-metal hypervisor and does not have the Red Hat based service console that ESX has. Console access is fairly limited, but it is perfect for a home lab setup.

Once you install it, admin and configuration are performed via a web browser or the client software.

Given the discontinuation of ESX, people will have to migrate to ESXi moving forward anyway.

Sure, but as I understand it, just as MS Hyper-V is built on top of WS2008r2 but gives no direct access to the underlying OS, surely ESXi is sitting on top of something else that you also cannot directly access. It's like a lot of stuff these days where companies take a base Linux distribution, build an appliance on top and then remove the ability to access the OS environment beneath. The only other alternative is that VMware have built their own kernel and drivers for the vast array of hardware out there they envision wanting to have ESXi run on.

RB
 
Ok, the Wikipedia entry states they have developed their own kernel.

I will now go and take a look at the compatibility list, as my machine is not server hardware for the most part, so I need to check there is a chance it will work with what I have.

RB
 
Something else to consider, maybe: that cheap HP server runs ESXi, if you have the pennies...

:D.

This server has taken me 6 months and far more money than it should have to get to this stage. The server has 20 external drive bays and another 3 internal. Between getting two 8-port SATA cards to work on a single MB on a budget, and finding out that my 5 WD Green drives really don't like being in a RAID 5 array (to the point of having to RMA all 5 for excessive bad sectors), it is almost like a masochist's dream now. I can't give up now or all the previous pain would have been for nothing ;).

TBH the HP server came out a month or so after I started this, and I probably would have got my folks to get one or two and ship them over to me after getting the rebate.

RB
 
I have just installed ESXi and spent the last 30 minutes trying to find out how to remotely administer it, until I finally pointed a web browser at the host and found the link for the remote management client :(.

Looks very Linux-like :D.

Just waiting for the client to finish downloading.

RB
 
Well, I got the client and installed it. I created a VM for WHS2011 (2GHz CPU, 180GB disk and 2GB RAM, which is probably what my WHS install on this box was using day to day) and it took 1 hour to get to 31% of the 'expanding files' stage of the install process :(. I cancelled, upped the CPU to 6GHz and the RAM to 8GB, and it installed in 30 minutes. I have since throttled them back down again and it seems to be running fine.

I also installed CentOS 6 minimal (200+MB download), then wget, Java and Minecraft, and the Minecraft VM is running very happily, although I still have to sort out the iptables rules to let it through the firewall (I could connect with iptables stopped but not when it was running).
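For the record, this is roughly the firewall change I have in mind (just a sketch, assuming the stock CentOS 6 ruleset and the default Minecraft port of 25565, and wrapped in a bit of Python so I can drop it into my build script later; adjust the rule position to suit your own INPUT chain):

Code:
#!/usr/bin/env python
# Sketch: open the default Minecraft port (25565/tcp) in the CentOS 6 firewall.
# Assumes the stock INPUT chain, where rule 4 sits just ahead of the final
# REJECT; change the position number to match your own ruleset.
import subprocess

rule = ["iptables", "-I", "INPUT", "4",
        "-m", "state", "--state", "NEW",
        "-p", "tcp", "--dport", "25565",
        "-j", "ACCEPT"]

subprocess.check_call(rule)                              # insert the ACCEPT rule
subprocess.check_call(["service", "iptables", "save"])   # persist across reboots

Run as root on the Minecraft VM; the second call just writes the rule out to /etc/sysconfig/iptables so it survives a reboot.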

This took me up to 1am this morning (up at 6:30 for work so not feeling so clever at the moment :)).

Now comes the kicker in the sensitive bits....

The WHS2011 is acting as a NAS, with around 11TB of data (5.5TB live and the same again as backup) sitting on the host ready to be used by the WHS VM. I just need to make these drives available to the VM, and I can see them in ESXi. Well, it seems that to pass them through to the VM you need support for VT-d (directed I/O for virtualization).

That should be fine, as my shiny new i5 2400 supports the VT instruction sets, which is why I upgraded from the i3. Nope... it seems I also need a motherboard that supports it, and my Asus P8H67-V doesn't :mad:.

This is the third MB for this project. The first was an old LGA775 board that could not support two 8-channel HBAs for connecting 16 drives. The second, an Intel LGA775 server board, could support the HBAs and a PCI graphics card, just not the PCI graphics card I bought, as there was a little-known hardware clash between that board and my graphics card chipset. This third board was doing everything I wanted until now.

Thing is, I don't recall VirtualBox requiring anything in order to share my host's drives directly with the virtual machine when used on my C2Q 9450 workstation. Is there seriously no way around this?

Apart from that I am pretty happy with ESXi (which doesn't use any sort of Linux kernel, but a special VM kernel that looks remarkably like a custom-compiled Linux kernel :)). After I have sorted out the drive issue I can test the network speed to make sure ESXi is not affected by the Linux kernel bug that causes my NIC IRQ to get disabled.

Thanks for all the pointers so far, they have made this so much easier, although finding the VI Client was really not intuitive and it was not mentioned in the ESXi management guide I was reading (I could have missed it, or maybe the guide was outdated or the like).

RB
 
Doing well so far :)



Did you install ESXi to the hard disk or are you booting off USB?
What format are those partitions the data is stored on - NTFS / FAT32?

ESXi was installed to my WD Blue 320GB hard drive. I did have a memory stick connected but it did not come up as a choice and I didn't google how to do it as I wanted to get a 'proof of concept' going first.

The disks are a mix of NTFS and NTFS Dynamic (spanned and striped, one set of each). Installing to and using the drives directly connected to the MB is OK, but the data drives are connected to my LSI 1068e based controllers. ESXi can see the controllers when installed and lists the drives, but I cannot pass control of them, or share them, to the VMs unless I create a new datastore / data disk, which means wiping the disk and creating a virtual disk file on it.

RB
 
I bought a replacement ASRock Z68 Extreme 4 today, which supports VT-d and has the 3 PCIe slots (two needed for my LSI controllers), but after putting it all together and installing ESXi on a USB stick, it will not recognise my LSI controllers in the first or second PCIe x16 slots :mad:. I can only get one controller working at a time.

The hardware passthrough does seem to work, though :(.

RB
 
OK, found the issue: the HBA cards are only PCIe 1.0a and the ports are PCIe 2.0.

Looks like I will be going for a couple of Adaptec 2805s unless there are any other suggestions.

I have an Adaptec 1405 (PCIe x4, 4-port HBA) and a 2410SA (PCI-X), so I can cobble something together for now.

One issue I do have, though, is that I cannot use the hard drive I originally installed ESXi to, as when I plug it in I get a pink screen of death. Is there no way in ESXi for me to clear the disk so I can reuse it, without having to put it in another machine for formatting?

Cheers
RB
 
Well it seems the Adaptec 1405 doesn't play nice with ESXi.

After passing it through to WHS, I installed the 2008r2 drivers for the card, as it was seen as an unknown SAS controller, and ESXi pink-screened again. Since then, every time I start the WHS VM, it locks up the whole of ESXi once it gets to the login screen. Shame, as it was looking quite good.

Moving on to MS Hyper-V to give that a try resulted in finding out you can only remotely manage it from either Win7 Pro (and above) or Win Server. As I have Win7 Home Premium, that is also a waste of time.

I am now trying ESXi one more time to make sure it was not a silly error on my part and have reinstalled the WHS2011 VM.

If that fails then I will have another look at Xen, Proxmox and then OpenVM/KVM on a base CentOS install if I need to, as I have hopefully removed the Linux hardware issue now that I have changed the MB.

RB
 
Thanks Bods.

I'll put up a quick summary for those not wanting to go through the wall of text above and try to keep it short.

Mission - Part 1 - Home NAS to replace my D-Link DNS-323 (great box, but a little slow on network speeds).

Extra RAM for an old PC (4GB total), 3 new WD Green 1.5TB drives, 2 second-hand WD Green 1.5TB drives, a Seagate 1TB 7.2k boot drive, 2x Intel NICs (PCI), CentOS 5 as the OS.
CentOS 5 cannot handle Advanced Format drives, which the WDs are -> upgrade to Fedora 14, which can.

WD drives configured as a RAID 5 array -> drives start hanging during movie playback. One drive after another RMA'd due to bad sectors. All 5 drives RMA'd within 3 months.

Replaced the drives with 4x 1TB WD Blacks (live RAID) and 2x 2TB WD Greens (spanned for backup storage) -> all good.

Replaced the PC box with a Norco 4020 20-bay rack case (4U). Managed to get 2x LSI 1068e based SATA/SAS cards cheaply (8 drives each) and the rest can connect to the MB SATA ports -> bought an Intel MB (socket 775) with two PCI slots for the storage cards and a PCI card for video -> the video card chipset has a clash with that MB.

Replaced the MB with an Asus P8H67-V and an i3 (for integrated graphics and lower power usage) -> one of the NICs damaged by me (socket broken due to ham-fisted removal of a stuck network plug) -> the Linux kernel disables my surviving network card's IRQ a few minutes after it is used, resulting in 2MB/s max network speed (Gbit network).

Tested Windows SBS 2011 as there were a lot of features I liked the look of -> ran dog slow on 4GB so upgraded to 12GB RAM.

Purchased WHS 2011 as SBS was total overkill and very expensive for home use -> everything working great.


cont...
 
Mission - Part 2 - Virtualize the WHS 2011 and get more use out of the hardware, including running some Minecraft Linux boxes.

Upgraded the i3 to an i5 in order to support the VT-x instruction set, installed VMware ESXi -> tried to map my existing arrays to the new WHS VM and found I need to pass the controllers through to the VM, which requires VT-d support. The processor supports it, the MB manual says the motherboard supports it, the MB does not support it.

Purchased an ASRock Extreme 4, which is tested as supporting VT-d (Z68 chipset) and has 3 PCIe slots (x4 + x8 + x8, or x4 + x16) -> my 1068e controllers only work in one of the PCIe slots; the cards are PCIe 1.0a and two of the slots are PCIe 2.0 only.

Tried the PCI-X Adaptec 2410SA (4 SATA drives) controller (bought in error and put in storage); it works, but only in 32-bit mode as there are no PCI-X slots on the MB. My Adaptec 1405 (also bought in error, as it was advertised as a RAID card and turned out to be an HBA, and stored) also worked, so I had the missing card's 8 ports back again.

ESXi won't play nice with the 1405 in passthrough mode, although it sees it fine in non-passthrough mode. Installing the drivers in the WHS2011 VM caused ESXi to pink screen (the same as a BSOD on Windows). Trying to start the WHS 2011 VM causes the whole of ESXi to lock up after the VM gets to the login screen. To be fair, the 1405 is not on the list of supported adapters for ESXi.

Moved on to MS Hyper-V. Installed it and set the base configuration. Downloaded the remote management utility only to find it requires Win Server / Win Vista / Win 7 Pro or above, and you guessed it, I have Win 7 Home Premium on my machines at home.

Tried ESXi once again to confirm the problem. Problem confirmed, but I did manage to install to the memory stick this time. I cannot slot in the drive I originally installed ESXi on, as if I do, ESXi pink screens telling me another installation drive is present and it cannot continue. Why it cannot just let me format the drive rather than going belly up is very :confused:.

Pulled the two Adaptec cards and am now running on one LSI 1068e in the third PCIe slot (half speed for the card). I can currently still connect all my drives but have 8 bays not connected (one bay in the case appears to be faulty as well :(). ESXi is running fine with WHS and with a CentOS Minecraft server (the Minecraft server software and world running from a ramdisk).

Yes, I appreciate I could probably have bought a ready-made server for what I have spent on this, but where is the fun in that?


cont...
 
Next steps:
1. Up the RAM to 16GB.
2. Find 2x PCIe 2.0 8-port drive cards (or 1x 16-port card) that ESXi supports and retire the last 1068e.
3. Finish building and hardening my CentOS install, then find out how to turn one into a template to build other VMs quickly.
4. It's Pimm's o'clock... ;)

I have seen the LSI 9211, which is PCIe 2.0 and ESXi supported, for around 240USD.

Any other recommendations (Adaptec or the like)?

RB

Update: The LSI 9201 is off the list as its price has jumped from 360USD to 450USD, so either it was a deal I just missed or an error on their site that has been caught.
 
Oh dear,

On doing more reading, I may have changed my mission plan....

I have just realised that I have an MB which works fine with the 1068e controllers, a case with 20 hot-swap bays, an i3 2100 and 8GB RAM, which I could use to make either a SAN or DAS, and then get a 2U case to host the i5 2400, the ASRock Z68 Extreme 4 and the virtualization side of things.

I would need a 2U case & PSU. I would also have to work out the best way of connecting the two units together; multi-port NICs or multiple NICs are possible, IIRC, from a home SAN build I read about a while ago and am trying to find again now.

This could get even more interesting.

Update:

Found the link to the cheap SAN build here (SmallNetBuilder).

Found a doc by VMware concerning iSCSI over Ethernet here (pdf).

Seems like it may be possible, but as I would want it predominantly for my NAS drives rather than my VM drives, it is not looking like it will help, as there will be no passthrough to the VM.

RB
 
Now that vSphere 5 Enterprise is out, I'm hoping the whitebox parts list has grown, especially cheap RAID cards.

The LSI 9211-4i/-8i is about the best I can find for cheap that is PCIe 2.0. It retails for around 250USD new and 230USD on eBay.

My LSI 1068e was dirt cheap at 8 ports for 95USD but is now sold out. There are also the Dell PERC 6/iR (note the iR at the end and the lack of a heatsink on the chipset), the IBM BR10i and the Supermicro AOC-SASLP-MV8. All can be had for around 70-130USD on eBay. Note that this is the US eBay though, and the UK eBay may be more expensive.

I guess it all depends on what your RAID requirements are (the above mainly do 0, 1, 10 or so) and what your definition of cheap is ;).

I may have my Adaptec and LSI cards available, although I am not sure if shipping from here would leave it a viable option.

RB
 
Well....

Looking back at the LSI 9201-16i HBA, it seems that although it is not listed as supported, LSI has a firmware flash that can be applied to get it working with ESXi (here). Getting this card is 50USD cheaper than getting two 9211-8i cards, and it only takes up one slot.

I have also come across the Intel RES2SV240 SAS expander, which has 6 ports (one or two need to be used as outputs). Pair that with an LSI 9211 (4 or 8 port) and there are even more possibilities.

OK, now for some maths (please correct any glaring mistakes).

For SAS 6Gbps, each device is given 6Gbps raw, or 600MB/s of data, and each connector services 4 devices, so each connector can do 2.4GB/s.
For SAS 3Gbps, each device is given 3Gbps raw, or 300MB/s of data, and each connector services 4 devices, so each connector can do 1.2GB/s.
(6*4*6) = 6 connectors, 4 devices per connector, 6Gbps per device.


First, SAS expander (6*4*6Gbps)
Input (to drives)
4 Ports = 4x2.4GB/s = 9.6GB/s.
5 Ports = 5x2.4GB/s = 12GB/s.
Output (to controller)
2 ports = 2x2.4GB/s = 4.8GB/s.
1 port = 2.4GB/s.
Result
So max output = 4.8GB/s with two connectors going to the SAS controller, or 2.4GB/s with only one connected.

My current LSI PCIe 1.0a card (2*4*3Gbps)
Input
2 ports input (to expander) = 2.4GB/s.
Output
PCIe 1.0a x4 = 250MB/s x 4 lanes = 1GB/s (*the card is forced to run as PCIe x4 in my current MB but would usually run as PCIe 1.0a x8)
Result
So the card will do up to 2.4GB/s input and 1GB/s output.

So the expander with my current LSI card will do:
1GB/s is the slowest link (the card's PCIe 1.0a x4 connection to the MB, not the expander-to-HBA link) / 16 devices = ~62MB/s per device.

The LSI 9211-4i
Input
1 port input (to expander) = 2.4GB/s (20 devices via expander = 120MB/s per device).
Output
PCIe 2.0 x4 = 500MB/s x 4 lanes = 2GB/s
Result
So the card will do 2.4GB/s input and 2GB/s output.

The LSI 9211-8i
Input
2 ports input (to expander) = 2x2.4 = 4.8GB/s (16 devices via expander = 300MB/s per device).
Output
PCIe 2.0 x8 = 500MB/s x 8 lanes = 4GB/s
Result
So the card will do 4.8GB/s input and 4GB/s output.

Summary:
2x PCIe 2.0 x8 8-port cards will allow up to the full 600MB/s per device with 16 devices connected.
1x PCIe 2.0 x8 8-port card plus the SAS expander will allow up to 300MB/s with all 16 devices connected (half the bandwidth from card to MB).
1x PCIe 2.0 x4 4-port card plus the SAS expander will allow up to 100MB/s with all 20 devices connected (half the bandwidth again from card to MB, plus another 4 drives can be connected, as there is only one link from expander to SAS card).

Now, these figures are for max throughput if all devices are running at the same time, which is very unlikely to be the case.

So 2 cards would be a better bet with lots of SSDs (60GB Vertex 2s are around 100USD each now). A PCIe 2.0 x8 card and expander should do fine for SSDs and hard disks all running together, but for just HDDs that will not all be running at the same time, a PCIe 2.0 x4 single-port card and SAS expander should do. A rough sanity check of the sums is below.
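Since I clearly cannot leave the maths alone, here is a little Python sketch of the same sums. It just takes the slower of the SAS side and the PCIe side as the bottleneck and splits it across the drives, so it assumes every drive is flat out at once and uses the theoretical figures above (real-world numbers will be lower):

Code:
# Per-device throughput = the slower of the SAS side (connectors x 4 lanes x
# MB/s per lane) and the PCIe side (lanes x MB/s per lane), divided by the
# number of devices hanging off the card.

def per_device(sas_connectors, sas_lane_mbs, pcie_lanes, pcie_lane_mbs, devices):
    sas_side = sas_connectors * 4 * sas_lane_mbs    # MB/s available on the SAS side
    pcie_side = pcie_lanes * pcie_lane_mbs          # MB/s available to the MB
    return min(sas_side, pcie_side) / float(devices)

configs = [
    # (name, SAS connectors, MB/s per SAS lane, PCIe lanes, MB/s per PCIe lane, devices)
    ("9211-8i, 8 drives direct (per card)",        2, 600, 8, 500, 8),
    ("9211-8i + expander, 16 drives",              2, 600, 8, 500, 16),
    ("9211-4i + expander, 20 drives",              1, 600, 4, 500, 20),
    ("1068e (PCIe 1.0a x4) + expander, 16 drives", 2, 300, 4, 250, 16),
]

for name, sc, sl, pl, pm, dev in configs:
    print("%-45s ~%.0f MB/s per device" % (name, per_device(sc, sl, pl, pm, dev)))

Where the PCIe link is the tighter of the two limits it trims the numbers a touch: the single 9211-8i plus expander comes out at ~250MB/s per drive rather than 300, and the old 1068e at ~62MB/s, which is why it has to go.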

RB

Update: Nope, my sums were not correct, as I had worked them on 6/3Gbps per connector to the controller rather than per device on each connector. All fixed now.
 