The (un)Official VMWare ESXi thread.

I have now added some more disks to my home server and have come to the point where I will no longer be using a dedicated controller to pass drives through to the Windows Server VM I use for sharing media.

This now gives me a bit of an issue... how to give my Win Server VM the storage and have it appear as a single large disk.

Setup.
Starting Config...
- 3x 2TB Seagate Barracudas in RAID 5 on an HP P812 SAS controller.
- 2x 2TB Seagate Barracudas (individual drives presented to the Win Server VM via VT-d on an M1015).
- 1x 2TB WD Green (individual drive presented to the Win Server VM via VT-d on an M1015).

Intermediate Config...
4TB of storage from the RAID 5 array presented to the Win Server VM so data can be copied over from the 2x 2TB Seagate drives.

Final Config...
8TB presented to the Win Server VM, made up of a 5x 2TB Seagate Barracuda RAID 5 array on the HP P812 controller.

Note the P812 also has other arrays on it so I cannot do a direct VT-d passthrough.

Thoughts.
1. Span 2TB VMDKs in Win Server to make one large drive (see the diskpart sketch after this list).
- Possible to expand without losing data?

2. Create a CentOS VM to act as an NFS server.
- Limited by the Win Server VM only being able to use the E1000 network interface; if so, is there a way around it?

3. Create an Openfiler VM, much the same as the CentOS option in no. 2.

4. Create an OpenIndiana VM for ZFS, but the disks are already on a decent RAID 5 controller with 1GB FBWC.
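
For option 1, this is roughly what the spanning would look like inside the Windows VM using diskpart. It is only a sketch: the disk numbers, label and drive letter are examples, and the disks need converting to dynamic before they can be spanned.

rem Convert the data disks to dynamic so they can be spanned
select disk 1
convert dynamic
select disk 2
convert dynamic

rem Create a simple volume on the first disk, then extend it onto the second to make a spanned volume
create volume simple disk=1
extend disk=2
format fs=ntfs label=Media quick
assign letter=M

Extending onto further disks later with "extend disk=n" keeps the existing data, so growing it without losing anything should be fine, but remember a spanned volume has no redundancy of its own, so it is relying entirely on the RAID underneath.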

This is for a home setup so it does not have to be fully belt and braces, but then again I am not averse to looking at various solutions to gain knowledge.

I am using vSphere 5.1 free but I may move to vSphere 5.1 Foundation if it helps.

Any other suggestions most welcome.

Thanks
RB
 
I currently use OpenIndiana with ZFS; it gives you so much more flexibility than a hardware RAID controller and has lots of advanced features. Head over to Hard Forum and look up 'gee's thread - he is the master of ZFS.

It is an option, but how do you pass an OI VM's ZFS filesystem to another VM and get around the 1GbE network limitation? Using the VMXNET3 adapter gives 10GbE, but it has an issue with LACP (VM host to switch) so I cannot use it, and internal LACP (VM to VM) is not available in the free vSphere version.
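
For reference, switching a VM to VMXNET3 just comes down to the adapter type on the virtual NIC; the relevant lines in the VM's .vmx end up looking something like this (the adapter index and port group name are examples, and the guest needs VMware Tools installed to get the vmxnet3 driver):

ethernet0.present = "TRUE"
ethernet0.virtualDev = "vmxnet3"
ethernet0.networkName = "VM Network"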

RB
 
Ok, here is a new one someone may have sorted out before.

Those following my fun times with the P812 will know it lost its config. I seem to have managed to get it back, and it looks like the data is still on my main data array (5x 2TB in RAID 5).

I have got a machine running with the card and can see all the drives in vSphere.

The 8TB datastore is cut into 2TB VMDKs to present to the SBS 2011 VM, which then spans them together.

In vSphere: Host configuration -> Storage -> Devices I can see the mounted volume. Selecting it shows a partition listed as VMFS.

When I go to "add storage" I can select the partition, but the only option I then have is to format it rather than import it.

I can stand to lose the data if I can't get it back, but would rather recover it if possible.

Any suggestions?

Thanks
RB
 
I do not get the option to mount, only to format :(.

I did activate the SSH console and had a look with gparted, which reported the main GPT was corrupted but the backup looked good. I then did a fix and it reported everything was now fine. On importing to vSphere it still says my only option is to format :(. Using gparted on both this array and a second array I created from scratch, the partitions look the same (except the sizes), so I am not sure what is wrong, but I seem to be spending too much time on it so will probably just reformat.
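
For anyone else who hits this, these are the sort of checks worth running from the SSH console before giving up (the label and device name below are just placeholders):

# See whether ESXi is treating the volume as a snapshot / mismatched signature rather than a plain VMFS volume
esxcli storage vmfs snapshot list

# If it is listed there, it can usually be mounted keeping the existing signature
esxcli storage vmfs snapshot mount -l "DatastoreLabel"

# Check what ESXi thinks the partition table on the device looks like
partedUtil getptbl /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx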

RB
 
The NICs you are talking about are not supported by ESXi. They will not be part of the ESXi virtual networking infrastructure.

When you configure I/O passthrough you hand control of the NIC chipsets to the individual VMs you attach them to (you add them as a PCI device in the VM's configuration settings). The VM then connects directly to the NIC chipset, bypassing the ESXi virtual networking infrastructure. The NIC chipset creates a link between the VM and the switch directly, so you cannot manage it in ESXi.

If you need to manage the NIC in ESXi networking then you need a NIC supported by ESXi.
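
If you want to double-check from the command line which devices the host has flagged for passthrough, something like this works from the SSH console:

# List the PCI devices so you can match up the NIC's address and vendor/device IDs
esxcli hardware pci list | more

# Devices handed over to VMs are marked with owner "passthru" in esx.conf
grep -i passthru /etc/vmware/esx.conf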

RB
 
Thanks for the explanation. I cannot add the NICs as PCI devices in the VM settings even though I've added them in the DirectPath I/O Configuration section.

If you go to your host's Configuration tab and then Advanced Settings, do the network controller chipsets have green circles next to them like the picture below, or orange arrows?

ESXifreepassthrough.png


RB
 
Yes they have green circles like the pic.

In that case you should be able to select the VM in the left pane, right-click and choose to edit its settings. In the window that comes up select Add -> PCI Device and you should get a list of the devices you have passed through.

Note that the VM needs to be powered off in order to do this; you cannot do it while the VM is running, as you can only hot-add hard drives and vNICs, not passthrough NICs or other items (IIRC).
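
Once the PCI device has been added, the VM's .vmx picks up entries along these lines (the address and IDs below are just example values for an Intel NIC):

pciPassthru0.present = "TRUE"
pciPassthru0.id = "04:00.0"
pciPassthru0.deviceId = "0x10d3"
pciPassthru0.vendorId = "0x8086"

Also worth knowing: a VM with a passthrough device needs its full memory reserved, so do not be surprised when the memory reservation ends up locked to the VM's RAM size.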

RB
 
You should do a screencast, RimBlock ;)

Don't forget that using passthrough means you can no longer take snapshots or suspend the VM. I think it also means you can't hot-add CPU or RAM either?

I suspect GodAtum has decided to move on from this project as he has been quiet for a while. This is one reason I went into providing server solutions for people, as I have been through this level of frustration getting stuff working that just won't, even though you can see no reason for it not to :D.

Linking in a post I did in the WHS 2011 thread concerning SAS, SATA, controllers and expanders for reference. Post is here.

RB
 
Hi, I have discovered on the ESXi forums that PCI passthrough is broken in version 5.1 (http://communities.vmware.com/thread/417736?start=0&tstart=0). So I will install Ubuntu straight onto the hardware.

Thanks for the link, interesting reading.

For others not wishing, or without the time, to read the current 4 pages, here is an overview.

It would seem that PCI and USB passthrough is not working correctly, or at all, with vSphere 5.1, including in cases where it worked fine with vSphere 5.0 U1. This is affecting some PCI-X cards as well as some motherboard devices.

A patch resolving this issue is scheduled for after an initial release patch; the initially reported ETA is 2-3 months.

The passthrough problem means that starting a VM with a PCI (PCI-X) device attached via passthrough crashes the server with a PSOD. Whilst a lot of the devices reported as affected are not officially supported for passthrough (i.e. motherboard chipsets), some are on the vSphere HCL and users have reported them working fine in vSphere 5.0 U1.

The USB passthrough issue is where a USB device can be marked for passthrough, but on rebooting the server it is not passed through. The log files from affected users seem to indicate that vSphere is not allowing passthrough even though it detects that passthrough was selected in the configuration.

It is unclear whether the patch 'in the works' for the PCI passthrough issue will also fix the USB passthrough issue, or whether this is a new direction for VMware in disabling passthrough on USB devices (motherboard and add-on cards are both affected). No official response has been made on the USB issue that I am aware of.

RB
 
You are not comparing like for like... The F3s are performance drives, the WD Greens are power saving drives. They are aimed at different market segments with different expectations on performance / capacity at a specific price point.

The F3s average around 120MB/s read and 93MB/s write according to a review on Bit-Tech here.

Bit-Tech also have a review of the Greens, luckily, and benchmark them as having an average read of 79MB/s and an average write of 53.1MB/s here.

As your own benchmark shows, the Greens are slower than the F3s.

My question would be more about the F3 write performance: why is it higher than the read, and higher than the Bit-Tech result, when the other values are pretty close? I would guess there is some caching going on somewhere.
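
If you want to take the caches out of the equation and the drive is visible from a Linux VM or machine, a quick direct I/O run with dd gives a rough raw number (path and sizes are just examples; oflag=direct needs GNU dd and a target that supports direct I/O):

# Write 2GB bypassing the page cache
dd if=/dev/zero of=/mnt/testdrive/ddtest.bin bs=1M count=2048 oflag=direct

# Read it back, again bypassing the cache
dd if=/mnt/testdrive/ddtest.bin of=/dev/null bs=1M iflag=direct

If the write figure drops back to something nearer the Bit-Tech numbers, the original result was the cache talking.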

RB
 

I have moved some of my Linux VMs to SSD and they have been fine for a while. They are set up like this:

Minecraft Server (CentOS) - Vertex II VM Datastore (60GB)
SABnzb OS (CentOS) - Vertex II VM Datastore (60GB)
SABnzb data area - WD Scorpio Black (500GB)
vSphere Swap areas - Vertex II Swap (60GB)

For ease of setup the SABnzb data area drive is mounted on /home

Yesterday I noticed a number of errors from SABnzb, which is quite unusual. It had been a while, so I rebooted, and after quite a while it came back up but was very slow, with pauses of a couple of minutes every minute or so. On checking the log (/var/log/messages) I noticed mention of a filesystem check failing on the SABnzb data area drive. I unmounted the drive and ran fsck.ext4 on it, which took a very long time (especially with the pauses) and reported a number of filesystem errors. The pauses still persisted. I then removed the drive from the VM (but not the server) and added a new virtual drive on a different datastore (SAN), but this made no difference.
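
For anyone wanting to do the same, the check itself is straightforward once nothing is using the mount (the device name here is just an example; check dmesg or /var/log/messages for the real one):

# Unmount the data drive, then force a full check and fix what it finds
umount /home
fsck.ext4 -f -y /dev/sdb1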

My next thought was that the Vertex II they are installed on may be having issues, so I tried to start the Minecraft server, which had been down, and it took a long time to start (starting VMs were getting stuck at 95% for quite a while). Once it was up I had problems accessing it via PuTTY and the console.

I then created a new VM on a spare SSD I had lying around but not used yet (Agility 3 120GB). As the install ISO was on the Vertex II swap drive the install was lovely and fast, but I was still seeing pauses even on this VM.

I moved the swap off of the Vertex II swap drive onto a WD Scorpio 320GB I had spare and had a look at the vSphere server logs from the console. The first thing I noticed was that the last entries in all the logs were dated 1st Nov. I do not know whether this is a time config issue on the vSphere server or a logging issue; I will check tonight. I could see a number of heartbeat timeouts for the SSDs (all three) repeating over and over again. I rebooted the vSphere server but still had the pausing on the VMs for the most part, although PuTTY to the new SABnzb appeared fine. The new SABnzb data area was put on my SAN so I can remove the Scorpio Black 500GB for a full health check in my desktop machine. The Minecraft and old SABnzb VMs are left down.
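
For reference, the checking was just done from the SSH console along these lines (log paths are the standard ESXi 5.x ones):

# Check the host clock first, in case the "1st Nov" log entries are just the time being wrong
date

# Look for the storage heartbeat / latency complaints against the SSDs
grep -iE "heartbeat|latency" /var/log/vmkernel.log

# Confirm which devices the host can actually see (the missing Agility should show here if it is alive)
esxcli storage core device list | more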

I believe the pauses are related to the SSD timeouts. All my drives are connected to an IBM M1015 (flashed to LSI 9211-8i IT firmware) as it is a SATA III controller and the motherboard only has 2x SATA III connections. The SSDs are split over two different SAS cables to the controller. I also have another Agility 3 120GB which is not showing up on the vSphere server at all; possibly a dead drive, but I have not checked yet. It has been there for some time and last night was the first time I had seen an issue.

The vSphere server is only used for my own home mini servers and testing since I have moved my business stuff to a dedicated HP ML110 G7. I can, therefore, pull drives etc without too many issues.

I do have some Intel 520 120GB SSDs around but they are new stock so would rather not use them as I would then have to pay for them and I would rather not have the business buy them for "own use" right now :(.


Any suggestions for narrowing down the problem?

RB
 
The obvious one is the vSphere Client (or vCenter if it is a paid version), unless you are after something specific these do not offer.

There are a few free tools I have seen mentioned on one of the VMware webcasts. I saved the links and will look at putting them up when I get home.

RB
 
Ok, VMTurbo Operations Manager has a community version here.

May be worth taking a look. I have not personally tried it but it was reported on favorably on a VMware webcast.

RB
 
I have a box running ESXi.

ESXi is installed on a USB drive and then I have three disks:

1 x 250GB SATA
1 x 2TB SATA
1 x 2TB SATA

I want to run the following

1 x DC
1 x File and Print / Media server

What is the best way to use the disks?

Really depends on what you are going to use as the DC / Media Server software.

WHS 2011 will do the lot, but you need to set up the DC role, which is not officially supported. It also depends on what you mean by media server: are you talking about sharing files on shared drives, or streaming media, including transcoding it into a format the playing device can handle? The first is easy, the second is a bit more tricky.

How much media do you have (space-wise) to share, and how much do you imagine it will grow?

Do you want redundancy for all your data, only some of it, or none at all?

RB
 
How much space do you need for your current media, and how much do you project you will need in 6 months?

Do you need redundancy, or will you be relying on backups?

Assuming you are running Plex on Windows Server and not as a standalone NAS appliance, what are you planning on using ESXi for going forwards?

ESXi allows you to run multiple virtual machines and how you set it up depends on what you are likely to have running on it. If you only need one machine then you may as well just install to the hardware without ESXi, unless you want ESXi for quick recovery / transportation to other hardware with minimal fuss.

For installing onto bare hardware, the 250GB would probably be used for the OS, with 1x 2TB for media / data and 1x 2TB for backup, or 2x 2TB for data using RAID 1 (mirroring) or some sort of software RAID / RAID-aware filesystem.

For installing on ESXi you might use the 250GB for the VM system disks (Windows Server having 150GB, for example, with the rest available for other VMs), one 2TB for media and one 2TB as a datastore for future VMs (backup server etc.).

If you are using the Plex NAS appliance then maybe 200GB for Win Server, 50GB (or less) for Plex, 2TB as a Win Server datastore (backup of the Plex media) and 2TB for the Plex media (depending on media requirements).

It really is quite difficult to give an accurate suggestion without more information.

RB
 

Just for completion I will post my findings and rectification of the issue reported by me above...

As well as having long pauses on the VMs and timeouts in the logs, I also found that I could not start two VMs that had VMDK files on my SAN; I got an "unable to set lock" error.

It turns out there was corruption of the lock information relating to the disks, due to a crash (PSOD) a while ago, so ESXi could not manage them correctly. Whilst I could probably have fixed the issue in place, I ended up reinstalling ESXi and removing the lock directories from the drives before importing or starting any VMs. I also removed the partitions on the 2x 320GB Scorpio Blacks and the partition on the 1x Agility 120GB that ESXi could not see (it was part of a striped array, so the GPT was showing twice the drive's disk space), and now it all works fine, including the SAN. Reinstalling and importing the VMs was pretty painless and quick, and I managed to sort out a couple of little niggles that had been bugging me at the same time.
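
For anyone else who hits the "unable to set lock" message, the usual first step from the SSH console is to find out who actually owns the lock before doing anything drastic (the paths here are just examples):

# Dump the file metadata; the owner field shows the MAC address of the host holding the lock
vmkfstools -D /vmfs/volumes/datastore1/MyVM/MyVM-flat.vmdk

# A quick test of whether the file can be locked at all
touch /vmfs/volumes/datastore1/MyVM/MyVM-flat.vmdk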

RB
 
I would do a fresh install. It is pretty quick and painless and can be done over the current install (i.e. using the same thumb drive) if desired.

The job list seems good to me. If you have space on a spare drive then you could always make copies of the VMs as a precaution but you should be good to go.
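
A quick and dirty way to take those copies from the SSH console, if you do not have anything fancier to hand (paths and names are examples, and the VM should be powered off first):

# Copy the VM's config file and clone its disk to a spare datastore, thin provisioned to save space
mkdir /vmfs/volumes/spare320GB/MyVM-backup
cp /vmfs/volumes/datastore1/MyVM/MyVM.vmx /vmfs/volumes/spare320GB/MyVM-backup/
vmkfstools -i /vmfs/volumes/datastore1/MyVM/MyVM.vmdk /vmfs/volumes/spare320GB/MyVM-backup/MyVM.vmdk -d thin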

I do not use raw drive mappings though, so I am unsure if there would be any other requirements regarding those.

RB
 
Think I may give Convirture's ConVirt a try for VM / server monitoring and configuration.

Looks good, and it includes historical performance data in the open source version.

RB
 
Yeah, I know you can do that. The problem is I want to add disks that already have data on them, not have to back up the data, add the disk, create a datastore and move the data back. Not only that, but I am seeing poor disk-to-disk transfer speeds. Not sure why I am seeing this?

Have you considered passing the disks directly through to the VM with a compatible controller?

What is on the disks (movies, documents, virtual machines)?

I used to pass my NTFS disks straight to my WHS 2011 VM using the popular IBM M1015 controller and it worked very well. As it was native control by WHS 2011, the speeds were as expected.

The only issue is making sure the motherboard and CPU support VT-d. I am not sure if the MicroServer does.

If it doesn't then you could share the drives from a NAS, but that sort of defeats the purpose of the virtualisation server for a lot of people.

RB
