The (un)Official VMWare ESXi thread.

Yes they have green circles like the pic.

In that case you should be able to select the VM in the left pane, right click and choose edit configuration. On the window that comes up select add -> PCI device and you should get a list of the devices you have passed through.

Note that the VM needs to be powered off to do this; you cannot add a passthrough device while the VM is running. With the VM powered on you can only add hard drives and vNICs, not passthrough NICs or other devices (IIRC).
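
For reference, once a device has been added this way it should show up in the VM's .vmx file as pciPassthru entries, something along these lines (the IDs below are placeholders only, not values from a real system):

pciPassthru0.present = "TRUE"
pciPassthru0.id = "00:03:00.0"
pciPassthru0.deviceId = "0x1234"
pciPassthru0.vendorId = "0x5678"
pciPassthru0.systemId = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

Handy if you ever need to check what a VM has attached without firing up the client.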

RB
 
You should do a screencast, RimBlock ;)

Don't forget that using pass through means you can no longer take snapshots or suspend the VM. I think it also means you can't hot-add CPU or RAM either?
 
I suspect GodAtum has decided to move on from this project as he has been quiet for a while. This is one reason I went in to providing server solutions for people; I have been through this level of frustration trying to get things working when there seems to be no reason for them not to :D.

Linking in a post I did in the WHS 2011 thread concerning SAS, SATA, controllers and expanders for reference. Post is here.

RB
 
I'm now using Workstation 9 to play around with VMs. The issue I have is that I have 2 VMs using NAT; when I switch to bridged networking and give them IPs from my LAN, the host PC loses its Internet connection. I've checked the IPs are not duplicated and have even turned off DHCP (in VMware) and configured static IPs for the VMs.
Using NAT would be fine in general, but I can't use apps on my iPad to connect remotely as the IPs are NAT addresses rather than real LAN ones.
What's the best way to set up the VMware network?
 
Managed to get ESXi 5.0 working! I swapped the Mellanox cards around from my PC and it works fine now! Weird!

Anyway, my next question is how can I back up ESXi VMs to an external eSATA drive while they are still running? I've heard of Veeam but I don't think it's quite what I'm looking for.
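
From what I have read so far, the manual way of doing this from the ESXi shell would be something along these lines; I have not tried it myself, and the VM ID and datastore paths below are only examples:

vim-cmd vmsvc/getallvms                                            # find the VM's ID
vim-cmd vmsvc/snapshot.create 12 backup "pre-copy snapshot" 0 0    # snapshot so the base disk stops changing
vmkfstools -i /vmfs/volumes/datastore1/MyVM/MyVM.vmdk /vmfs/volumes/esata-backup/MyVM.vmdk
vim-cmd vmsvc/snapshot.removeall 12                                # merge the snapshot back once the copy is done

Tools like ghettoVCB wrap this sort of sequence up in a script, so that may be worth a look too.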
 
Hi, I have discovered on the ESXi forums that PCI passthrough is broken in version 5.1 (http://communities.vmware.com/thread/417736?start=0&tstart=0), so I will install Ubuntu directly on the hardware instead.

Thanks for the link, interesting reading.

For others who do not wish, or do not have the time, to read the current 4 pages of that thread, here is an overview.

It would seem that PCI and USB passthrough is not working correctly, or at all, with vSphere 5.1, including in setups where it worked fine with vSphere 5.0 U1. This is affecting some PCI-X cards as well as some motherboard devices.

A patch resolving this issue is scheduled for after an initial release patch; the initially reported ETA is 2-3 months.

The passthrough problem is that starting a VM with a PCI (PCI-X) device attached via passthrough crashes the server with a PSOD. Whilst a lot of the devices reported as affected are not officially supported for passthrough (i.e. motherboard chipsets), some are on the vSphere HCL and users have reported them working fine in vSphere 5.0 U1.

The USB passthrough issue is that a USB device can be marked for passthrough, but on rebooting the server it is not actually passed through. The log files from affected users seem to indicate that vSphere is refusing the passthrough even though it detects that the configuration says passthrough was selected.

It is unclear whether the patch 'in the works' for the PCI passthrough issue will also fix the USB passthrough issue, or whether this is a new direction for VMware in disabling passthrough of USB devices (motherboard ports and add-on cards are both affected). No official response has been made on the USB issue that I am aware of.

RB
 
You are not comparing like for like... The F3s are performance drives, the WD Greens are power saving drives. They are aimed at different market segments with different expectations on performance / capacity at a specific price point.

The F3s average around 120MB/s read and 93MB/s write according to a review on Bit-Tech here.

Bit-Tech also have a review of the Greens, luckily, and benchmark them as having an average read of 79MB/s and an average write of 53.1MB/s here.

As your own benchmark shows, the Greens are slower than the F3s.

My question would be more about the F3 write performance: why is it higher than the read speed, and higher than the Bit-Tech results, when the other values are pretty close? I would guess there is some caching going on somewhere.
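
If you want to rule the cache in or out, one way would be to re-run the benchmark with caching bypassed, something along these lines on a Linux box (the device name and test path are just examples, so point them at the right drive):

hdparm -t /dev/sdb        # buffered sequential reads from the disk itself
hdparm -T /dev/sdb        # cached reads, for comparison
dd if=/dev/zero of=/mnt/f3test/file bs=1M count=2048 oflag=direct conv=fsync   # write test bypassing the page cache
dd if=/mnt/f3test/file of=/dev/null bs=1M iflag=direct                         # read test bypassing the page cache

If the 'direct' numbers drop back towards the Bit-Tech figures, then caching is the likely explanation.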

RB
 
OK Thanks for that.

The end user was highly 'price sensitive' and these drives were the cheapest I could find for him. I didn't think they would be 'that much' slower than mine!

Thanks for your reply. It's put my mind at rest. I was thinking there might be some kind of configuration issue I'd need to look into but it appears not :)
 
I have moved some of my Linux VMs to SSD and they have been fine for a while. They are setup like this;

Minecraft Server (CentOS) - Vertex II VM Datastore (60GB)
SABnzb OS (CentOS) - Vertex II VM Datastore (60GB)
SABnzb data area - WD Scorpio Black (500GB)
vSphere Swap areas - Vertex II Swap (60GB)

For ease of setup the SABnzb data area drive is mounted on /home

Yesterday I noticed a number of errors from SABnzb, which is quite unusual. It had been a while since the last reboot, so I rebooted; after quite a while it came back up, but it was very slow, with pauses of a couple of minutes every minute or so. On checking the log (/var/log/messages) I noticed mention of a filesystem check failing on the SABnzb data area drive. I unmounted the drive and ran fsck.ext4 on it, which took a very long time (especially with the pauses) and reported a number of filesystem errors. The pauses still persisted. I then removed the drive from the VM (but not the server) and added a new virtual drive on a different datastore (SAN), but this made no difference.
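
For anyone wanting to run the same check, the sequence was roughly this from inside the VM; I am assuming here that the data drive appears as /dev/sdb1 and is mounted on /home as per the setup above, so adjust for your own layout:

umount /home                 # the filesystem must not be mounted while it is checked
fsck.ext4 -f -v /dev/sdb1    # force a full check and report what it finds and fixes
mount /home                  # remount once the check is clean (assumes an fstab entry for /home)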

My next thought was that the Vertex II they are installed on may be having issues, so I tried to start the Minecraft server, which had been down; it took a long time to start (VMs were getting stuck at 95% for quite a while when starting). Once it was up I had problems accessing it via PuTTY and the console.

I then created a new VM on a spare SSD I had around but had not used yet (Agility 3 120GB). As the install ISO was on the Vertex II swap drive the install was lovely and fast, but I was still seeing pauses even on this VM.

I moved the swap off the Vertex II swap drive onto a WD Scorpio 320GB I had spare and had a look at the vSphere server logs from the console. The first thing I noticed was that the last entries in all the logs were dated 1st Nov; I do not know whether this is a time configuration issue on the vSphere server or a logging issue, so I will check tonight. I could also see a number of heartbeat timeouts for the SSDs (all three) repeating over and over again. I rebooted the vSphere server but still had the pausing on the VMs for the most part. PuTTY to the new SABnzb VM appeared fine though. The new SABnzb data area was put on my SAN so I can remove the Scorpio Black 500GB for a full health check in my desktop machine. The Minecraft and old SABnzb VMs are left down.

I believe the pauses are related to the SSD timeouts. All my drives are connected to an IBM M1015 (flashed to LSI 9211-8i IT firmware) as it is a SATA III controller and the motherboard only has 2x SATA III connections. The SSDs are split over two different SAS cables to the controller. I also have another Agility 3 120GB which is not showing up on the vSphere server at all; possibly a dead drive, but I have not checked yet. It has been in there for some time and last night was the first time I had seen an issue.
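
A rough sketch of the checks I plan to run from the ESXi shell tonight; the log path is the ESXi 5.x default and the output will obviously differ per box:

date                                        # see whether the host clock explains the 1st Nov log entries
grep -i heartbeat /var/log/vmkernel.log     # look for the repeating heartbeat timeouts against the SSDs
esxcli storage core device list             # confirm which drives the host can actually see (the missing Agility 3 should appear here if it is alive)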

The vSphere server is only used for my own home mini servers and testing since I have moved my business stuff to a dedicated HP ML110 G7. I can, therefore, pull drives etc without too many issues.

I do have some Intel 520 120GB SSDs around, but they are new stock so I would rather not use them; I would then have to pay for them, and I would rather not have the business buy them for "own use" right now :(.


Any suggestions for narrowing down the problem?

RB
 
The obvious one is the vSphere Client (or vCenter if it is a paid setup), unless you are after something specific that these do not offer.

There are a few free tools I have seen mentioned on one of the VMware webcasts. I saved the links and will look at putting them up when I get home.

RB
 
I'm looking at Hyperic to keep a record of system uptime, CPU usage etc. I cannot find much on Google about how to install the Hyperic agent on ESXi though.
 
Ok, VMTurbo Operations Manager has a community version here.

May be worth taking a look. I have not personally tried it but it was reported on favorably on a VMware webcast.

RB
 
I have a box running ESXi.

ESXi is installed on a USB drive and then I have three disks:

1 x 250GB SATA
1 x 2TB SATA
1 x 2TB SATA

I want to run the following

1 x DC
1 x File and Print / Media server

What is the best way to use the disks?
 
It really depends on what you are going to use as the DC / media server software.

WHS 2011 will do the lot, but you need to set up the DC role, which is not officially supported. It also depends on what you mean by media server: are you just sharing files on shared drives, or streaming media, including transcoding it into a format the playing device can handle? The first is easy, the second is a bit more tricky.

How much media do you have (space-wise) to share, and how much do you imagine it will grow?

Do you want redundancy for all of your data, only some of it, or none?

RB
 