Exchange 2003 on ESXi 4.1

Hi all,

We are migrating our Exchange 2003 physical server (1,000 mailboxes, 500GB of data across multiple stores) to VM land.

The host is a Dell R710 with dual quad-core Xeons, attached to a Fibre Channel SAN.

Now, this server is only going to be temporary (six months or so) while we work on building another Exchange solution to host numerous companies across the UK. Anyway, we are working on the temporary one for now, as the physical server is in a right state and it needs doing ASAP. We are not going down the P2V route; this will be added into the existing org and we will migrate the mailboxes across.

I've read the white paper from VMware and it sounds pretty straightforward to me. Just wondering if anyone had any immediate thoughts.

Obviously RAM is going to have to be 4GB since it's Exchange 2003, but does anyone have any ideas about setting up the SAN or CPU resources?

Thanks very much,
Glen.
p.s. It will be on Windows Server 2003
 
The SAN is going to be the biggest problem - it needs to be as fast as possible, and having a hypervisor in the way doesn't help. You could use Raw Device Mappings (RDMs) instead, which would boost performance.

You may want to factor in another host so you can have high availability (obviously there's a cost associated with this), and another factor is how you're backing it up, but that's about it.


M.
 
Add one vCPU to the Exchange server first, as adding a second can actually degrade performance due to scheduling etc. See how you get on, then add a second if necessary.

Make sure to build plenty of redundancy into your ESX and SAN design!
 
Thanks very much for your quick replies guys,

I'm quite new to this (VMware and SANs) - could someone quickly explain vCPUs to me? I've read the VMware paper twice today and still can't get my head around it. I know that with 1,000 mailboxes I only need one CPU, but that's about it. Thanks!!!

p.s. I think a vCPU is like a CPU to the VM, but ESXi allocates resources to it from the actual CPUs. Is that right? :confused:
 
I'm no master either, just picking things up from my work!

From my understanding, if you assign 2 vCPUs then the VM has to wait for two physical cores/CPUs to be free before it can process the request, which in turn could slow down performance.
 
I'm no master either, just picking things up from my work!

From my understanding, if you assign 2 vCPUs then the VM has to wait for two physical cores/CPUs to be free before it can process the request, which in turn could slow down performance.

You're pretty much there. If you assign a single vCPU that is hardly utilised, then you will potentially suffer degradation if you add a second (generally if it's a well-subscribed host)... however, having said that, I would generally assign 2 vCPUs to critical servers to avoid errant processes locking up the CPU time. Is the host server part of a cluster? What other services will it be providing?

Your storage best practices in the physical world should translate through to the virtual world. Same goes for Exchange best practices.

You need to decide what features you require from VMware. Do you want to utilise the features that storing your data in VMDKs provides? How will you be performing backups?

... finally, can you move to Exchange 2010?

Additionally, getting some perfmon stats from your current setup would prove helpful when resourcing your new box.
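
On the perfmon point, here's a rough sketch of how you could pull IOPS figures out of a counter log exported to CSV - the file name and the "Disk Transfers/sec" counter are assumptions about how you've captured the data, so adjust to match your own log:

Code:
import csv

def disk_iops_stats(path, counter_match="Disk Transfers/sec"):
    """Average / 95th percentile / peak of the disk transfer counters in a perfmon CSV."""
    with open(path, newline="") as f:
        rows = list(csv.reader(f))
    header, data = rows[0], rows[1:]
    # Columns whose counter path mentions the counter we care about.
    cols = [i for i, name in enumerate(header) if counter_match in name]
    samples = []
    for row in data:
        for i in cols:
            try:
                samples.append(float(row[i]))
            except ValueError:
                pass  # perfmon leaves blanks for missed samples
    samples.sort()
    return {
        "avg": sum(samples) / len(samples),
        "95th": samples[int(len(samples) * 0.95)],
        "peak": samples[-1],
    }

# Hypothetical export from the current Exchange box:
print(disk_iops_stats("exchange_perfmon.csv"))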
 
I've just done a quick bit of reading, and with the release of 4.1 the CPU scheduling has apparently improved, so perhaps the advice of sticking to a single vCPU is now moot. In that case, I'd recommend going with 2 as a starting point and monitoring performance accordingly...!
 
The SAN is going to be the biggest problem - it needs to be as fast as possible, and having a hypervisor in the way doesn't help. You could use Raw Device Mappings (RDMs) instead, which would boost performance.

Not true. RDMs don't offer any real performance boost, except in very specific circumstances. VMDKs within VMFS datastores offer pretty much the same performance without any of the negatives of RDMs. Have a read of this page for example:

http://www.vfrank.org/2011/03/22/performance-rdm-vs-vmfs/

I'm no master either, just picking things up from my work!

From my understanding, if you assign 2 vCPUs then the VM has to wait for two physical cores/CPUs to be free before it can process the request, which in turn could slow down performance.

In vSphere the CPU scheduler is improved to the point that in the vast majority of cases, more vCPUs will equal better performance. That said, as best practice it's better to start with one vCPU and monitor initial performance. If it's deemed to be unsatisfactory, then add a 2nd, and so on.
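
If you do start with one vCPU and think about adding a second, CPU Ready time is the figure worth watching. A minimal sketch of the commonly used conversion from vCenter's "CPU Ready" summation value (in ms) to a percentage, assuming the standard 20-second real-time sample interval:

Code:
def cpu_ready_percent(ready_summation_ms, num_vcpus, interval_s=20):
    """Convert a 'CPU Ready' summation value (ms) into a percentage per sample."""
    return (ready_summation_ms / (interval_s * 1000 * num_vcpus)) * 100

# e.g. a 2-vCPU VM showing 400 ms of ready time in a 20 s sample:
print(f"{cpu_ready_percent(400, 2):.1f}% ready")  # 1.0% - comfortably low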


ah ok, thanks

and last question...what about the LUNs :eek::D

I'm reading different things about the LUNs for the OS, pagefile, Exchange database and logs.

I'm unsure of best practice with regards to Exchange, but in general, for performance of database servers, you'll want your OS and application volumes on a RAID 5 LUN, your DB on a separate RAID 10 LUN, and your logs on another RAID 10 LUN. This gives maximum performance (whilst retaining redundancy) to your DB and logs whilst not using excess disk for the OS and apps.
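
To put some rough numbers on the RAID 5 vs RAID 10 point, here's a back-of-envelope sketch using the usual write-penalty figures - the 1,000 IOPS / 40% write workload below is an invented example, not a measured Exchange profile:

Code:
# Usual write penalties: every front-end write costs this many back-end I/Os.
RAID_WRITE_PENALTY = {"RAID1": 2, "RAID5": 4, "RAID10": 2}

def backend_iops(frontend_iops, write_ratio, raid_level):
    """Front-end IOPS the host generates -> IOPS the spindles must deliver."""
    reads = frontend_iops * (1 - write_ratio)
    writes = frontend_iops * write_ratio
    return reads + writes * RAID_WRITE_PENALTY[raid_level]

# Hypothetical DB volume: 1,000 IOPS with ~40% writes.
for level in ("RAID5", "RAID10"):
    print(level, round(backend_iops(1000, 0.4, level)), "backend IOPS")
# RAID5 ~2200 vs RAID10 ~1400 - the RAID 5 write penalty costs real spindles.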

The key thing to remember with this setup though is when it comes to snapshots, the differencing files won't be created on the respective datastores, they'll all be created on whichever datastore the VM config files reside. This can lead to datastores filling up if the snapshots are kept any length of time and you aren't aware of the issue.
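
If you want to keep an eye on that, something along these lines will show how much space the snapshot delta disks are eating - the datastore path and VM folder name are made up, so point it at wherever the VM's working directory actually lives:

Code:
import os

def snapshot_delta_usage(vm_dir):
    """Total size (GB) of snapshot delta disks sitting in a VM's directory."""
    total = 0
    for name in os.listdir(vm_dir):
        # Snapshot files are typically named like VMNAME-000001-delta.vmdk
        if name.endswith("-delta.vmdk") or "-00000" in name:
            total += os.path.getsize(os.path.join(vm_dir, name))
    return total / (1024 ** 3)

print(f"{snapshot_delta_usage('/vmfs/volumes/datastore1/EXCH01'):.1f} GB in snapshot deltas")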
 
The recommended LUN size seems to be 500GB, which wouldn't cover the OS and the database, hence the recommendation for RDMs.


M.

So what about a 500GB LUN for the OS/application, RAID 5, possibly shared between several servers, and a couple of dedicated RAID 10 LUNs for the DB and logs? Yes, you have the administrative overhead when it comes to snapshots (and snapshot-based backups), but you get maximum performance.
 
I wouldn't touch RAID 5 at all in an Exchange setup. Local volumes, which should be your binaries and OS, should really sit on RAID 1. Everything else goes on the SAN (although most SANs these days will place the volume on the correct disk tray based on the IOPS it's taking anyway; if you get the option, it's worth setting RAID 10 as the preferred RAID type).

Your local drives will have your OS and pagefile on them; the pagefile will hate RAID 5 because of the write latency, for the same reason the Exchange Information Store hates it.
 
As stated, you need separate LUNs for the DB and logs; we use RAID 1 for our OS and RAID 10 for the DB/logs. OS and application drives won't need much space - 500GB for this would be overkill IMO.

For your disks you will want fast spindles, so probably 15k RPM drives if you can get them, then enough of them to cover the IOPS you need for your database/log LUNs whilst keeping your disk sizes high enough.
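
As a rough sizing sketch, the spindle count is basically the back-end IOPS divided by what one disk can do - the per-disk figures below are common rules of thumb rather than vendor specs, and the 1,400 IOPS example carries on from the RAID calculation earlier in the thread:

Code:
import math

# Rule-of-thumb IOPS per spindle (not vendor specs).
DISK_IOPS = {"15k": 175, "10k": 125, "7.2k": 75}

def spindles_needed(backend_iops, disk_type="15k"):
    return math.ceil(backend_iops / DISK_IOPS[disk_type])

# e.g. the ~1,400 back-end IOPS RAID 10 example:
print(spindles_needed(1400), "x 15k drives")  # 8 spindles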
 
Thanks very much guys,

Would you recommend a RAID 10 array for the DB and a separate RAID 10 array for the logs?

Depends how you're doing it. The key point is you want RAID10 with separate I/O queues so that logs don't back up the I/O queue for the DB and vice versa.

So for local controller arrays or NAS (where spindle numbers are low), separate arrays are the best way to go.

If you're using a SAN, then the SAN can be just one big RAID 10 array, as you're usually looking at 14+ drives and it can handle it all on one; generally the SAN will move volumes around within itself to balance the aggregate load. So just let it do what it does best - it'll sort out managing the I/O paths. iSCSI/FC volumes show up, and are treated, as separate disks with separate I/O queues locally within your OS/hypervisor, which pushes the I/O off your server/VM and lets the SAN worry about it.
 
I remember reading a whitepaper which said something along the lines of: I/O cost decreased by 70% when going from 2003 to 2007, and by another 70% from 2007 to 2010. In the real world I don't know how applicable that is, but it might be worth considering before splunking phat moola on a temporary SAN.
 
I remember reading a whitepaper which said something along the lines of: I/O cost decreased by 70% when going from 2003 to 2007, and by another 70% from 2007 to 2010. In the real world I don't know how applicable that is, but it might be worth considering before splunking phat moola on a temporary SAN.

Can't remember the 2003 -> 2007 figure, but I've read similar about 2007 -> 2010 dropping I/O by 70%.
 
s0ck is quite right. If there's a reasonable expectation and a timeline for migration to Exchange 2010, you should probably work this transition into that plan.

You should have some IOPS numbers to hand, since you're already running the system. That will give you a good indication of what you need to provide for the virtualised environment.
For 2003 you'll probably end up with a SAN full of 73GB / 146GB drives, just to meet the IOPS requirements.
For 2010 the difference is at least 90% fewer IOPS, letting you use fewer, larger drives.
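
For a very rough feel of what that means for the 1,000 mailboxes here, a sketch applying the ~70% per-version reductions mentioned above to an assumed Exchange 2003 baseline of 0.75 IOPS per mailbox (a guess for heavy users - substitute your own perfmon numbers):

Code:
MAILBOXES = 1000
BASELINE_2003 = 0.75  # assumed IOPS per mailbox on Exchange 2003, not a measured figure

iops = {"2003": BASELINE_2003 * MAILBOXES}
iops["2007"] = iops["2003"] * 0.3   # ~70% reduction 2003 -> 2007
iops["2010"] = iops["2007"] * 0.3   # ~70% reduction 2007 -> 2010

for version, value in iops.items():
    print(f"Exchange {version}: ~{value:.0f} IOPS for {MAILBOXES:,} mailboxes")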
 