VM storage performance? Perfmon query.

Soldato · Joined 26 Nov 2002 · Posts 6,852 · Location Romford
Hi

I have a Win2003 server running Virtual Server 2005 R2, hosting just 10 VMs doing various activities. I want to migrate these VMs to a new Hyper-V server that's connected to a 1 Gbit iSCSI Sun NAS.

Will I have trouble running lots of VMs over a 1 Gbit channel? I'm used to 4 Gbit FC connections, so Ethernet iSCSI is new to me.

When I look at the perfmon disk counters on my current host server, the 10 VMs average about 30,000 Avg. Disk Bytes/Transfer, peaking up to about 60,000. This seems quite low to me; am I looking at the correct counter?
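For what it's worth, `Avg. Disk Bytes/Transfer` is the average I/O size per operation, not throughput, so the number on its own can look low. To get MB/s you multiply it by `Disk Transfers/sec` (or read `Disk Bytes/sec` directly). A quick sketch with illustrative numbers (the IOPS figure below is made up, not taken from any log):

```python
# Hypothetical perfmon readings (illustrative values, not real measurements):
avg_bytes_per_transfer = 30_000   # Avg. Disk Bytes/Transfer = average I/O size
transfers_per_sec = 600           # Disk Transfers/sec = IOPS (assumed figure)

# Throughput = I/O size x IOPS, which perfmon also reports as Disk Bytes/sec
disk_bytes_per_sec = avg_bytes_per_transfer * transfers_per_sec
print(f"Throughput: {disk_bytes_per_sec / 1e6:.1f} MB/s")  # Throughput: 18.0 MB/s
```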

Any advice or tips would be great.

cheers
 
Forgive me, but I'm unfamiliar with ESX and diagrams like this.

Storage1 = your Sun SAN?

And `System`, `Data`, `Citrix` etc. are shares/LUNs?
 
The VMs are mostly app servers, running Java/IIS/Apache type systems. We won't be virtualising any mail, database or file servers any time soon, and if we do, those will be connected to our better Dothill SAN via FC.

The Sun storage has 4 x Gbit connectors on the back. We are currently aggregating two of them together (leaving one for management and one for inter-site replication), giving a 2 Gbit connection to the switch, but the actual host servers will probably only connect to the switch via a 1 Gbit interface. So for the time being ~100 MB/s is probably all we can manage.
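As a sanity check on that ~100 MB/s figure, here's the line-rate arithmetic (the ~20% protocol overhead is an illustrative assumption for TCP/IP + iSCSI framing, not a measured value):

```python
# Rough capacity of a 1 Gbit/s iSCSI link.
link_bps = 1_000_000_000                 # 1 Gbit/s
raw_mb_s = link_bps / 8 / 1e6            # bits -> bytes -> MB: 125.0 MB/s
overhead = 0.20                          # assumed TCP/IP + iSCSI overhead
usable_mb_s = raw_mb_s * (1 - overhead)  # ~100 MB/s in practice
print(raw_mb_s, usable_mb_s)  # 125.0 100.0
```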
 
I think you'll be OK with that throughput; in my experience it's the I/O rate rather than the raw throughput that hammers a SAN/NAS (especially with typical VM usage). The only thing is that if someone is extracting a big zip file on a VM, or moving loads of big files, you don't want the other VMs on the node to grind to a halt.

I don't think there's an option for throttling disk throughput the way you can throttle CPU % per VM.

So how do others set up their VMware/Hyper-V guests with iSCSI so this doesn't happen? Do you have multiple LUNs and only put 2 or 3 guests on each interface? I'm going to quickly run out of ports if that's the case, as I already have multiple virtual networks to share the load.

Or is the OS intelligent enough to dynamically assign resources so one VM doing big I/O doesn't affect the others, and they just share it?
 
I take it that restarts, patching and disk-intensive things like that should just be scheduled so that only one VM is doing that kind of work at any one time. Things like setting the start time for each VM 2 mins apart when the host server reboots.
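That staggering idea can be sketched as a simple delay schedule (the VM names below are illustrative; Hyper-V does expose a per-VM automatic start delay you could set to these values):

```python
# Sketch: staggered start delays so all VMs don't boot at once after a
# host restart. Names and the 2-minute gap are illustrative assumptions.
vms = ["app1", "app2", "app3", "web1", "web2"]
stagger_secs = 120  # 2 minutes apart, as described above

for i, vm in enumerate(vms):
    print(f"{vm}: start delay {i * stagger_secs} s")
```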

This is all new to me, as all our current VMs run from local SAS disks, so I've never really had to think about these things before. But we have bought about 48TB of Sun storage and need to do something with it....
 
I did a quick two-hour perfmon test of our current VM host and its maximum throughput was about 18 MB/s across the 10 VMs. I'll run a new log tonight, for 24 hrs, which also measures Split IO.
 
Yeah, two of them. They are new, with nothing on them yet. We have configured both sets of 24 as one big pool; I think it's RAID 6+1, as that's what they recommended for a good balance of performance and redundancy.

We will definitely be limited by the 4 x 1 Gbit ports, but the switch we bought has a couple of 10 GbE ports, so we can do a quick upgrade on the back of the NAS if need be.

It has a couple of read SSDs for caching and 16GB of RAM. I haven't played with it much, as we are still awaiting delivery of the proper switches.
 
It's set to `Double parity RAID`; I didn't see any mention of RAID-Z or Z2 etc. when configuring it.
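Double parity means two disks' worth of capacity per group goes to parity, so usable space works out to (n - 2)/n of raw. A hypothetical example (the group width and disk size are illustrative, not this array's actual layout):

```python
# Hypothetical double-parity (RAID 6 / RAID-Z2 style) capacity maths.
disks_per_group = 12   # illustrative group width
disk_tb = 2            # illustrative disk size in TB
parity = 2             # double parity: two disks' worth of parity per group
usable_tb = (disks_per_group - parity) * disk_tb
print(usable_tb)  # 20 TB usable from 24 TB raw
```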
 
The boss won't like that.

He bought this because of the £/MB. My initial plan was to get another SAS FC array for the virtual machine clustering, but for the price of 15TB, he managed to get 48TB of Sun storage.

If it doesn't work properly then so be it...
 