Production SQL on VMWare

Is anyone doing this?

If so, how big are the SQL boxes you are running on VMWare?

I've played with it in the past but never got the same level of performance as with a dedicated physical box. Our VMWare estate is easily powerful enough to handle it.
 
I am running a W2k8 VM (SQL 2008) hosting our main production database (around 5 GB in size), along with a few smaller databases, on our vSphere (ESXi) setup from an iSCSI SAN, and it absolutely rockets along. The hosts are Dell R710s with 2 x X5570 CPUs.

Migrated it from a physical box (W2k3 / SQL 2005) which was slowly falling over with very high CPU usage. No issues at all since virtualising it.
 
The most important factor here will be the disk speed/IOPS capability of your SAN/NAS rather than the processing power and RAM in the cluster (yes, they are a factor, just not the most important one).

If you have small databases with minimal R/W operations then you should be OK, otherwise you will need to seriously look at the storage implications.
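If you're not sure how heavy the existing workload actually is, a rough way to baseline it first (this is just a sketch - the counter set, interval and output file are example values for a Windows box) is to let typeperf log the disk counters for an hour or so during a busy period:

Code:
typeperf "\PhysicalDisk(_Total)\Disk Reads/sec" "\PhysicalDisk(_Total)\Disk Writes/sec" "\PhysicalDisk(_Total)\Avg. Disk sec/Read" "\PhysicalDisk(_Total)\Avg. Disk sec/Write" -si 15 -sc 240 -o sql_disk_baseline.csv

That gives you read/write IOPS plus latency to compare against what the shared SAN can realistically deliver.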
 
Storage is a factor but so is CPU availability.

VMware has a paper out on the factors affecting SQL performance inside VMware; a lot of it is aimed at the vSMP side of things.

Basically, your results will depend on exactly what sort of stuff you are doing with your databases and what else is running alongside that workload on the host machine.
 
Absolutely agree with brainchylde and DRZ. In theory there is nothing wrong with it. We have several SQL databases virtualised across multiple hosts and there is no problem. We have a very low requirement compared to some, however.

There are several options if you are on the heavier side of things; picking the correct storage/connectivity would be the area to investigate. There are quite a few best-practice papers available from VMware and its partners on how to set up the best possible environment for intensive DB activity, covering recommendations on hardware, configuration and ongoing administration.

Worth a look, depending on your requirements that is!
 
Thanks for the replies. As I said, I know our hardware can handle it, both the SAN and the host servers.

The problem is exactly as you pointed out, Knubje - in theory it's fine, but I'm really struggling to find any real examples of organisations running intensive SQL boxes on VMWare in a shared environment.

I run dev SQL on VMWare and it's absolutely fine; it's just the jump to prod that's always scary.
 
We have huge production MS SQL and Oracle databases running on VMware. I'm talking in the region of 50-60 TB.

As most have said already, it all comes down to IOPS and CPU cycles/availability. We do throw hardware at it, and our server hardware is dedicated to SQL running on ESXi. It probably helps too that the backend storage runs out of Flash Cache (NetApp) for the most heavily utilised DBs.

Essentially you would need to know the characteristics of the DB in question: is it random small-KB I/O or big sequential reads/writes, etc.? Once you've got that trended you can work out fairly quickly the IOPS required on the backend.
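As a rough illustration of that calculation (the numbers here are made up, and the write penalty depends on your RAID level), the usual rule of thumb is backend IOPS = reads + (writes x RAID write penalty):

Code:
1,500 reads/sec + 500 writes/sec on RAID 10 (write penalty 2) = 1,500 + (500 x 2) = 2,500 backend IOPS
The same workload on RAID 5 (write penalty 4)                 = 1,500 + (500 x 4) = 3,500 backend IOPS

Divide that by what a single spindle can do (very roughly 150-180 IOPS for a 15k disk) and you've got a ballpark spindle count, before any controller cache or flash helps you out.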

Cool! So you've dedicated hosts and SAN purely for virtualised SQL?
 
People keep talking about IOPS, but it really doesn't make any difference compared to running without a hypervisor. Of course the disks need to be specced accordingly, but VMFS doesn't add any noticeable overhead. IMO disks are not the issue for virtualised database servers; it's the CPU and RAM limitations that will kick in first.

Edit -

Not to mention the limitations of VMFS.

• 1MB block size – 256GB maximum file size
• 2MB block size – 512GB maximum file size
• 4MB block size – 1024GB maximum file size
• 8MB block size – 2048GB maximum file size

So you may want to consider a raw device mapping (RDM), and then there's pretty much zero I/O overhead compared to a physical counterpart.
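For what it's worth, creating the RDM is a one-liner with vmkfstools (the device names and paths below are placeholders; -z gives a physical/passthrough mapping, -r a virtual-compatibility one), and if you stick with VMFS remember you can only pick the block size when the datastore is created:

Code:
vmkfstools -z /vmfs/devices/disks/naa.<device_id> /vmfs/volumes/Datastore1/SQLVM/sql_data_rdm.vmdk
vmkfstools -C vmfs3 -b 8m -S SQL_Datastore /vmfs/devices/disks/naa.<device_id>:1

Then you just add the mapping file to the VM as an existing disk.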
 
The thing is, most intensive SQL boxes will be critical to the business running them, so there is little benefit in moving them onto shared infrastructure unless the virtual environment is a higher spec than what they currently run on. If the workload is intensive, SQL Server running dedicated on the same hardware without virtualisation will be faster. If something is intensive, why would you want it to share resources with other programs?

I suppose there are some arguments for DR/BC etc., but with SQL Server mirroring and the like that's becoming less of a factor.

You can always run SQLIO on the kit to see what it's capable of.
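Something along these lines is a typical SQLIO run (the file name, duration, threads and queue depth here are only example values - pre-create the test file at a realistic size, or define it in a param file with -F, and switch between -frandom/-fsequential and the -b block size to match whatever pattern you've trended):

Code:
sqlio -kR -s120 -frandom -o8 -b8 -t4 -LS -BN testfile.dat
sqlio -kW -s120 -fsequential -o8 -b64 -t4 -LS -BN testfile.dat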
 
We weren't talking about the overhead. Rather, we were talking about the fact that VMWare usually runs on shared storage. There would be little use in virtualising if you were not taking advantage of the cluster and HA. RDM is of course a good idea, but the IOPS are still one of the most important considerations because the storage is likely to be shared.
 
Perhaps I'm not being clear in my post. What I tried (poorly) to explain is that you should always spec your disks accordingly, with or without virtualisation. You always have the option to improve disk performance - RAID config, spindle count, etc. - but you can't get past certain VM-related restrictions, one being the vCPU count, another the VMDK size.
 
Depends on the number of server instances on the box and the SAN tech.

You can avoid VMFS, and the limits on VMDK size, by connecting to the SAN with software iSCSI initiators, so the SAN volume mounts directly into the guest OS across the network without the filesystem having to pass through the hypervisor. The downside with software initiators is that they incur a slight CPU overhead per VM, so it's not usually advisable to run too many on the same cluster, as that overhead gets multiplied - unless you have a very big cluster with bucketloads of spare CPU capacity. TCP/IP offload support on the NICs reduces the CPU overhead but doesn't get rid of it completely.

Tests I've run show quite promising results using software initiators in VMs but, as I said, it's situation dependent.
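If anyone wants to try the in-guest route on Windows, it's just the standard Microsoft iSCSI initiator - the portal IP and target IQN below are placeholders for your own SAN:

Code:
iscsicli QAddTargetPortal 192.168.10.50
iscsicli ListTargets
iscsicli QLoginTarget iqn.1992-08.com.netapp:sn.12345678

Then bring the disk online and format it in Disk Management as normal, and make the login persistent so it comes back after a reboot.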
 