Virtualization: Best Windows based (VirtualBox, VMware, etc.)

Well, I got the client and installed it. Created a VM for WHS 2011 (2GHz CPU, 180GB disk and 2GB RAM, which is probably what my WHS install on this box was using day to day) and it took an hour to get to 31% of the 'expanding files' stage of the install :(. I cancelled, upped the CPU to 6GHz and the RAM to 8GB, and it installed in 30 minutes. I have since throttled them back down again and it seems to be running fine.

I also installed CentOS 6 minimal (200+MB download), then installed wget, Java and Minecraft, and the Minecraft VM is running very happily, although I still have to sort out the iptables rules to let it through the firewall (I could connect with iptables stopped but not when it was running).
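
For reference, the rule I think I need is something along these lines (assuming the default Minecraft port of 25565 and the stock CentOS 6 iptables setup; the exact rule position may need adjusting):

# allow the default Minecraft port (25565/tcp) in ahead of the stock catch-all REJECT rule
iptables -I INPUT -p tcp -m state --state NEW --dport 25565 -j ACCEPT
# save so the rule survives a restart of the iptables service
service iptables save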

This took me up to 1am this morning (up at 6:30 for work so not feeling so clever at the moment :)).

Now comes the kicker in the sensitive bits....

The WHS 2011 VM is acting as a NAS with around 11TB of data (5.5TB live and the same again as backup) sitting on the host ready to be used by the WHS VM. I just need to make these drives available to the VM, and I can see them in ESXi. Well, it seems that to pass them through to the VM the hardware needs to support VT-d (Intel's directed I/O virtualization).

That should be fine as my new shiny i5 2400 supports the VT instruction sets, which is why I upgraded from the i3. Nope... it seems I also need a motherboard that supports it, and my Asus P8H67-V doesn't :mad:.

This is the third MB for this project. The first was an old LGA775 board that could not support two 8-channel HBAs for connecting 16 drives. The second, an Intel LGA775 server board, could support the HBAs and a PCI graphics card, just not the PCI graphics card I bought, as there was a little-known hardware clash between that board and my graphics card's chipset. This third board was doing everything I wanted until now.

Thing is, I don't recall VirtualBox requiring anything special in order to share my host's drives directly with the virtual machine when used on my C2Q 9450 workstation. Is there seriously no way around this?

Apart from that I am pretty happy with ESXi (which doesn't use any sort of Linux kernel but a special VM kernel that looks remarkably like a custom-compiled Linux kernel :)). After I have sorted out the drive issue I can test network speed to make sure ESXi is not affected by the Linux kernel bug that causes my NIC's IRQ to get disabled.
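
For the speed test I will probably just do something crude like this between a CentOS VM and another Linux box, while keeping an eye on dmesg for the tell-tale "Disabling IRQ" message (the port number and address are just examples, and some nc builds want -p before the listen port):

# on the receiving box: listen and throw the data away
nc -l 5001 > /dev/null
# on the CentOS VM: push 1GB of zeros across the wire; dd reports the throughput when it finishes
dd if=/dev/zero bs=1M count=1024 | nc 192.168.1.10 5001
# meanwhile, check whether the kernel has killed the NIC's interrupt again
dmesg | grep -i "disabling irq"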

Thanks for all the pointers so far, they have made it so much easier, although finding the VI Client was really not intuitive and it was not mentioned in the ESXi management guide I was reading (I could have missed it, or maybe the guide was outdated or the like).

RB
 
Doing well so far :)

Now comes the kicker in the sensitive bits....

The WHS 2011 VM is acting as a NAS with around 11TB of data (5.5TB live and the same again as backup) sitting on the host ready to be used by the WHS VM. I just need to make these drives available to the VM, and I can see them in ESXi. Well, it seems that to pass them through to the VM the hardware needs to support VT-d (Intel's directed I/O virtualization).

Did you install ESXi to the hard disk or are you booting off USB?
What format are those partitions the data is stored on - NTFS / FAT32?
 
Doing well so far :)

Did you install ESXi to the hard disk or are you booting off USB?
What format are those partitions the data is stored on - NTFS / FAT32?

ESXi was installed to my WD Blue 320GB hard drive. I did have a memory stick connected but it did not come up as a choice, and I didn't Google how to do it as I wanted to get a proof of concept going first.

The disks are a mix of NTFS and NTFS Dynamic (spanned and striped, one set of each). Installing and using the drives connected directly to the MB is OK, but the data drives are connected to my LSI 1068e based controllers. ESXi can see those controllers and list the drives, but I cannot pass control of them through or share them with the VMs unless I create a new datastore, which means wiping the disks and creating virtual disk files on them.

RB
 
Hmmm... there is an experimental method of raw device mapping NTFS disks discussed here, but I wouldn't recommend it myself:

http://communities.vmware.com/message/1581806?tstart=0
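
From what I can tell, the approach in that thread boils down to creating a raw device mapping file from the ESXi CLI, roughly like this (the device and datastore names below are only placeholders, and as I say I wouldn't risk it on live data):

# list the physical disks ESXi can see
ls /vmfs/devices/disks/
# create a physical-mode RDM pointer file for one of them on an existing datastore
vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK_ID /vmfs/volumes/datastore1/whs/ntfs-disk-rdm.vmdk
# then attach ntfs-disk-rdm.vmdk to the VM as an existing disk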

I think you'd need to be in a position to migrate the data from the NTFS disks to another system and then import it into a VMDK file for use on VMware.
Or consider the physical-to-virtual conversion thread that was referenced on here earlier this week?
 
I bought a replacement ASRock Z68 Extreme4 today, which supports VT-d and has the 3 PCIe slots I need (two for my LSI controllers), but after putting it all together and installing ESXi on a USB stick, it will not recognise my LSI controllers in the first or second PCIe x16 slots :mad:. I can only get one controller working at a time.

The hardware pass through seems to work though :(.

RB
 
OK, found the issue: the HBA cards are only PCIe 1.0a and those ports are PCIe 2.0.

Looks like I will be going for a couple of Adaptec 2805s unless there are any other suggestions.

I have an Adaptec 1405 (PCIe x4, 4-port HBA) and a 2410SA (PCI-X), so I can cobble something together for now.

One issue I do have, though, is that I cannot use the hard drive I originally installed ESXi to, as when I plug it in I get a pink screen of death. Is there no way in ESXi for me to clear the disk so I can reuse it, without having to put it in another machine for formatting?

Cheers
RB
 
Well it seems the Adaptec 1405 doesn't play nice with ESXi.

After doing a passthrough to WHS, I installed the 2008 R2 drivers for the card, as it was seen as an unknown SAS controller, and ESXi pink screened again. Since then, every time I start the WHS VM, it locks up the whole of ESXi after it gets to the login screen. A shame, as it was looking quite good.

Moving on to MS Hyper-V to give that a try, I found out you can only remotely manage it from either Win 7 Pro (and above) or Windows Server. As I have Win 7 Home Premium, that is also a waste of time.

Now trying ESXi one more time to make sure it was not a silly error on my part and have reinstalled the WHS2011 VM.

If that fails then I will have a look again at Xen, Proxmox and then OpenVM/KVM on a base CentOS install if I need to, as I have hopefully removed the Linux hardware issue now that I have changed the MB.

RB
 
Thanks Bods.

I'll put up a quick summary for those not wanting to go through the wall of text above, and try to keep it short.

Mission - Part 1 - Home NAS to replace my D-Link DNS-323 (great box but a little slow on network speeds).

Extra RAM for an old PC (4GB total), 3 new WD Green 1.5TB drives, 2 second-hand WD Green 1.5TB drives, a Seagate 1TB 7.2k boot drive, 2x Intel NICs (PCI), CentOS 5 as the OS.
CentOS 5 cannot handle Advanced Format drives, which the WD drives are -> upgrade to Fedora 14, which can.

WD drives configured as a RAID 5 array -> drives start hanging during movie playback. One drive after another RMA'd due to bad sectors; all 5 drives RMA'd within 3 months.

Replaced drives with 4x 1TB WD Blacks (live RAID) and 2x 2TB WD Greens (spanned for backup storage) -> all good.

Replace PC box with a Norco 4020 20-bay rack case (4U). Managed to get 2x LSI 1068e based SATA/SAS cards cheaply (8 drives each) and the rest can connect to the MB SATA ports -> buy an Intel MB (socket 775) with two PCIe slots for the storage cards and a PCI slot for video -> video card chipset has a clash with that MB.

Replace MB with an Asus P8H67-V and an i3 (for integrated graphics and lower power usage) -> one of the NICs damaged by me (socket broken due to ham-fisted removal of a stuck network plug) -> Linux kernel disables my surviving network card's IRQ a few minutes after it is used, resulting in 2MB/s max network speed (Gbit network).

Test Windows SBS 2011 as there were a lot of features I liked the look of -> ran dog slow on 4GB, so upgraded to 12GB RAM.

Purchased WHS 2011 as SBS was total overkill and very expensive for home use -> everything working great.


cont...
 
Mission - Part 2 - Virtualize the WHS 2011 box and get more use out of the hardware, including running some Minecraft Linux boxes.

Upgrade the i3 to an i5 in order to support the VT-x instruction set, install VMware ESXi -> try to map my arrays to the new WHS VM and find I need to pass the controllers through to the VM, which requires VT-d support. The processor supports it, the MB manual says the motherboard supports it, the MB does not support it.

Purchase an ASRock Z68 Extreme4, which is tested as supporting VT-d (Z68 chipset) and has 3 PCIe slots (x4 + x8 + x8, or x4 + x16) -> my 1068e controllers only work in one of the PCIe slots; the cards are PCIe 1.0a and two of the slots are PCIe 2.0 only.

Try the PCI-X Adaptec 2410SA (4 SATA drives) controller (bought in error and put in storage); it works, but only in 32-bit mode as there are no PCI-X slots on the MB. My Adaptec 1405 (also bought in error, as it was advertised as a RAID card and turned out to be an HBA, and stored) also worked, so I had the missing card's 8 ports back again.

ESXi won't play nice with the 1405 in passthrough mode, although it sees it fine in non-passthrough mode. Installing the drivers in the WHS 2011 VM caused ESXi to pink screen (same as a BSOD on Windows), and trying to start the WHS 2011 VM causes the whole of ESXi to lock up after the VM gets to the login screen. To be fair, the 1405 is not on the list of supported adapters for ESXi.

Move on to MS Hyper-V. Install and set the base configuration. Download the remote management utility only to find it requires Windows Server / Win Vista / Win 7 Pro or above and, you guessed it, I have Win 7 Home Premium on my machines at home.

Tried ESXi once again to confirm the problem. Problem confirmed, but I did manage to install to the memory stick this time. I cannot slot in the drive I originally installed ESXi on as, if I do, ESXi pink screens, telling me another installation drive is present and it cannot continue. Why it cannot let me format the drive rather than going belly up is very :confused:.

Pulled the two Adaptec cards and am now running on one LSI 1068e in the third PCIe slot (half speed for the card). I can currently still connect all my drives but have 8 bays not connected (one bay in the case appears to be faulty as well :(). ESXi is running fine with WHS and with a CentOS Minecraft server (Minecraft server software and world running from a ramdisk).
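
For anyone curious, the ramdisk side of the Minecraft server is nothing clever; it is roughly this (the paths, the 1GB size and the rsync-back step are just how I would sketch it, not a recipe to copy blindly):

# mount a 1GB tmpfs and copy the server plus world into it
mkdir -p /mnt/mcram
mount -t tmpfs -o size=1024m tmpfs /mnt/mcram
cp -a /opt/minecraft/. /mnt/mcram/
# run the server from RAM
cd /mnt/mcram && java -Xms512M -Xmx1024M -jar minecraft_server.jar nogui
# sync the world back to disk regularly (e.g. from cron) or it is lost at power-off
rsync -a /mnt/mcram/world/ /opt/minecraft/world/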

Yes, I appreciate I could probably have bought a ready-made server for what I have spent on this, but where is the fun in that?


cont...
 
Next steps:
1. Up the RAM to 16GB.
2. Find 2x PCIe 2.0 8-port cards (or 1x 16-port card) that ESXi supports and retire the last 1068e.
3. Finish building and hardening my CentOS install, then find out how to turn one into a template to build other VMs quickly.
4. It's Pimm's o'clock... ;)

Have seen the LSI 9211, which is PCIe 2.0 and ESXi supported, for around 240USD.

Any other recommendations (Adaptec or the like)?

RB

Update: The LSI 9201 is removed from the list as its price has jumped from 360USD to 450USD, so either it was a deal I just missed or an error on their site that was caught.
 
Oh dear,

On doing more reading, I may have changed my mission plan....

I have just realised I have an MB which works fine with the 1068e controllers, a case with 20 hot-swap bays, an i3 2100 and 8GB RAM, which I could use to make either a SAN or a DAS, and then get a 2U case to host the i5 2400, the ASRock Z68 Extreme4 and the virtualization side of things.

I would need a 2U case & PSU. I would also have to work out the best way of connecting the two units together - multiport NICs or multiple NICs are possible, IIRC, from a home SAN build I read about a while ago and am trying to find again now.

This could get even more interesting.

Update:

Found the link to the cheap SAN build here (SmallNetBuilder).

Found a doc by VMware concerning iSCSI over Ethernet here (PDF).

Seems like it may be possible, but as I would want it predominantly for my NAS drives rather than my VM drives, it is not looking like it will help, as there would be no passthrough to the VM.

RB
 
Now that vSphere 5 Enterprise is out, I'm hoping the whitebox parts list has grown, especially cheap RAID cards.

The LSI 9211-4i/-8i is about the best I can find for cheap that is PCIe 2.0. It retails for around 250USD new and 230USD on eBay.

My LSI 1068e was dirt cheap at 95USD for 8 ports but is now sold out. Others to look at are the Dell PERC 6/iR (note the iR at the end and the lack of a heatsink on the chipset), the IBM BR10i and the Supermicro AOC-SASLP-MV8; all can be had for around 70-130USD on eBay. Note that this is the US eBay, though, and the UK eBay may be more expensive.

I guess it all depends on what your RAID requirements are (the above mainly do 0, 1, 10 or so) and what your definition of cheap is ;).

I may have my Adaptec and LSI cards available, although I am not sure if shipping from here will make it such a viable option.

RB
 
Well....

Looking back at the LSI 9201-16i HBA, it seems that although it is not listed as supported, LSI has a firmware flash that can be applied to get it working with ESXi here. Getting this card is 50USD cheaper than getting two 9211-8i cards, and it only takes up one slot.

I have also come across the Intel RES2SV240 SAS expander, which has 6 ports (one or two need to be used as output). Pair that with an LSI 9211 (4 or 8 port) and there are even more possibilities.

OK, now for some maths (please correct any glaring mistakes).

For SAS 6Gbps, each device is given 6Gbps raw, or 600MB/s of data, and each connector services 4 devices, so each connector can do 2.4GB/s.
For SAS 3Gbps, each device is given 3Gbps raw, or 300MB/s of data, and each connector services 4 devices, so each connector can do 1.2GB/s.
(6*4*6) = 6 connectors, 4 devices per connector, 6Gbps per device.


First, the SAS expander (6*4*6Gbps)
Input (to drives)
4 ports = 4x 2.4GB/s = 9.6GB/s.
5 ports = 5x 2.4GB/s = 12GB/s.
Output (to controller)
2 ports = 2x 2.4GB/s = 4.8GB/s.
1 port = 2.4GB/s.
Result
Max output is 4.8GB/s with two links connected to the SAS controller, or 2.4GB/s with only one connected.

My current LSI PCIe 1.0a card (2*4*3Gbps)
Input
2 ports (to expander) = 2.4GB/s.
Output
PCIe 1.0a x4 = 250MB/s x 4 lanes = 1GB/s (*the card is forced to run at PCIe x4 in my current MB but would usually run as PCIe 1.0a x8).
Result
So the card will do up to 2.4GB/s input and 1GB/s output.

So the expander with my current LSI card will do:
The expander-to-HBA link gives 2.4GB/s, but the 1GB/s PCIe output is actually the slowest link, so 1GB/s / 16 devices = ~62MB/s per device.

The LSI 9211-4i
Input
1 port (to expander) = 2.4GB/s (20 devices via the expander = 120MB/s per device).
Output
PCIe 2.0 x4 = 500MB/s x 4 lanes = 2GB/s.
Result
So the card will do 2.4GB/s input and 2GB/s output.

The LSI 9211-8i
Input
2 ports (to expander) = 2x 2.4GB/s = 4.8GB/s (16 devices via the expander = 300MB/s per device).
Output
PCIe 2.0 x8 = 500MB/s x 8 lanes = 4GB/s.
Result
So the card will do 4.8GB/s input and 4GB/s output.

Summary:
2x PCIe 2.0 x8 8-port cards will allow up to the full 600MB/s per device with 16 devices connected.
1x PCIe 2.0 x8 8-port card plus the SAS expander will allow up to ~250-300MB/s per device with all 16 devices connected (the two expander-to-card links carry 4.8GB/s, but the PCIe 2.0 x8 link to the MB caps the aggregate at 4GB/s).
1x PCIe 2.0 x4 4-port card plus the SAS expander will allow up to 100MB/s per device with all 20 devices connected (half the card-to-MB bandwidth of the x8 card, plus another 4 drives can be connected as there is only one link from the expander to the SAS card).
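
If anyone wants to rerun these numbers for a different card, this is the back-of-envelope sum I am doing, as a quick shell snippet (my assumptions: 8b/10b encoding so 6Gbps is ~600MB/s and 3Gbps is ~300MB/s per lane, PCIe 1.0/2.0 at 250/500MB/s per lane per direction, and every device streaming flat out):

#!/bin/sh
# per_device <SAS MB/s per lane> <SAS lanes to controller> <PCIe MB/s per lane> <PCIe lanes> <devices>
per_device() {
    SAS=$(( $1 * $2 ))                        # expander-to-controller bandwidth
    PCIE=$(( $3 * $4 ))                       # controller-to-host bandwidth
    LIMIT=$SAS
    [ "$PCIE" -lt "$LIMIT" ] && LIMIT=$PCIE   # the slower of the two links is the bottleneck
    echo "slowest link ${LIMIT}MB/s -> $(( LIMIT / $5 ))MB/s per device across $5 devices"
}
per_device 300 8 250 4 16   # my 1068e (2 ports, PCIe 1.0a x4) plus expander, 16 drives
per_device 600 8 500 8 16   # 9211-8i (2 ports, PCIe 2.0 x8) plus expander, 16 drives
per_device 600 4 500 4 20   # 9211-4i (1 port, PCIe 2.0 x4) plus expander, 20 drives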

Now, these figures are for max throughput if all devices are running at the same time, which is very unlikely to be the case.

So 2 cards would be a better bet with lots of SSDs (60GB Vertex 2s are around 100USD each now). A PCIe 2.0 x8 card and expander should do fine for SSDs and hard disks all running together, but for just having HDDs and not having them all run at the same time, a PCIe 2.0 x4 single-port card and SAS expander should do.

RB

Update: Nope, my sums were not correct, as I had worked on 6/3Gbps per connector to the controller rather than per device on each connector. All fixed now.
 
Oh dear, another RAID card has come into the picture...

It seems that the IBM M1015 is a rebadged LSI 9240-8i, which is their entry-level RAID controller but with RAID 5 disabled until you buy a key from IBM. The price difference between the 9211-8i and the 9240-8i is around 30USD, and the M1015 can be flashed with the IT firmware, making it into an HBA (i.e. a 9211-8i).

Now the downs... it seems the M1015 is not supported by ESXi, and although the LSI 9240 is, reports are that the card hangs ESXi while loading the drivers.

Now more ups... it also seems that flashing it to the IT firmware works fine with ESXi, and I have found someone selling some as new server pulls for around 100USD per card cheaper than buying new. Now it just depends on whether they will sell to me, as I am not in the US or Europe.....

RB

Update: OK, took the plunge (what harm has that done me before......:() and after finding someone selling new IBM ServeRAID M1015s for 105USD each including US shipping, I have bought two. A single LSI 9211 is around 260USD. Found someone reviewing the Lenovo TS200 that comes with an M1015, stating it works fine in ESXi; if not, then I have full instructions to flash it.

As an aside, I still do not know how to take a configured virtual machine and make it a template so I can make other VMs based on it. I tried copying the VM hard drive file to another directory, then creating a new VM with the same parameters and linking it to the copy of the original VM drive, but it boots and cannot see a network card. It is the installing and configuring I wish to get away from for any new CentOS minimal installs, as it has taken quite a bit of time to get one machine up to where I would like it.
 
As an aside, I still do not know how to take a configured virtual machine and make it a template so I can make other VMs based on it

I didn't realise until your post that the facility to do this is unavailable without a Virtual Center server.
Google provides many ways to achieve this; the most common method seems to be using VMware Converter to take a clone of your machine and then use that clone as a template.

Another method is to copy the entire contents of the VM folder to another folder (using vmkfstools via the CLI), then right-click the vmx file and select 'Add to inventory'.
A few people say this method is much faster, but others say it just doesn't work.

Give them a shot and see what works for you.
 
I didn't realise until your post that the facility to do this is unavailable without a Virtual Center server.
Google provides many ways to achieve this; the most common method seems to be using VMware Converter to take a clone of your machine and then use that clone as a template.

Another method is to copy the entire contents of the VM folder to another folder (using vmkfstools via the CLI), then right-click the vmx file and select 'Add to inventory'.
A few people say this method is much faster, but others say it just doesn't work.

Give them a shot and see what works for you.

Had a look on Google, but there seems to be quite a bit of conflicting info, hence the question.

I will have a go with the CLI option. Presumably I can do this while other VMs are still running, as long as the 'template' VM is shut down?

Thanks
RB
 
It's a little more involved than just copying the files. You need to edit a couple of them to remove references to the original VM and also rename the new files in the directory.
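
As a very rough sketch of what the CLI route looks like (the folder and VM names below are made up, and I would double-check one of the guides before trusting it):

# with the template VM powered off, clone its disk into a new folder on the datastore
mkdir /vmfs/volumes/datastore1/centos-new
vmkfstools -i /vmfs/volumes/datastore1/centos-template/centos-template.vmdk /vmfs/volumes/datastore1/centos-new/centos-new.vmdk
# copy the vmx across, rename it, and edit it so the disk and display name entries point at the new files
cp /vmfs/volumes/datastore1/centos-template/centos-template.vmx /vmfs/volumes/datastore1/centos-new/centos-new.vmx
vi /vmfs/volumes/datastore1/centos-new/centos-new.vmx
# register the new VM (the same as right-clicking the vmx and choosing 'Add to inventory')
vim-cmd solo/registervm /vmfs/volumes/datastore1/centos-new/centos-new.vmx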

The other option is to try the Virtual Center software on eval - it is fully functional for 60 days.
 