Dell/Brocade/EMC for VMware and Storage

I'm currently at the early stages of planning a project to migrate around 30 physical servers (about 20 running small MS SQL databases, the remainder running IIS) to a VMware environment.

The overall programme has a strategic alignment with Dell and therefore I am likely to be strongly encouraged down the Dell/Brocade/EMC route. Anyone have any strong opinions either way on this? Any real reason to fight this and head in a different direction?

I'm trying to look forward and consider consolidating one or two other small projects into this infrastructure, but it will never be larger than 40 servers with a usable storage requirement of 10TB max. At the moment the (Dell) system folks are telling me I can do all of this with a handful of beefy 2950s, a small SAN and a couple of Brocade switches. They had initially tried to spec blades but I steered them away from that, thinking Dell blades were poor cousins to HP or IBM - am I wrong?

We will be using some of the VMware enterprise features like HA and VMotion, but I can't really see us doing anything too cutting edge with this.
 
Hi there,

It's difficult to recommend anything cast-iron, but here are some broad recommendations:

1. Performance-profile the environment you are targeting for VMware. If you are comfortable with sending performance data off-site, and you are serious about deployment, Dell/VMware will offer you a "VRA" (Virtualisation Readiness Assessment) report for free.

2. Looking at your requirements, a non-Fibre Channel storage system may actually be more beneficial to you. The Dell EqualLogic arrays are highly regarded and targeted directly at your market. Ask your Dell rep about their EqualLogic iSCSI solutions for VMware.

3. Keep in mind that any migration to a SAN may impact some types of SQL environments because of the way SAN systems cache disk activity. Expect a slight decrease in performance for databases that perform lots of small reads and writes. Don't let this put you off, just be aware of it.

4. Be *sure* of your high-availability requirements from the business. It's fine to wade in and go for an ESX Enterprise setup with VMotion and Fibre Channel arrays, but do you really need it? If you just want to dip your toe in the water, you could get up and running with an ESXi setup to perform a proof of concept and some basic performance testing.

5. Following on from #4, your availability and testing requirements will mushroom your server and storage bill of materials if you are not careful. If you budget for a 3-node production VMware ESX cluster, make sure you also budget for DR, UAT and development clusters, depending on what your developers need. It can get expensive if your business heads and developers have high expectations.
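
To make #5 concrete, here's a rough back-of-envelope sketch of that "hidden multiplier". Every figure (the £11k per host, the half-size UAT and dev clusters) is a made-up placeholder for illustration, not a quote:

```python
# Illustrative only: how extra environments multiply a cluster budget.
# All costs are placeholder figures; substitute your own quotes.
cluster_cost = 3 * 11_000  # 3-node production cluster, assumed 11k per host

# Assumed sizing: DR mirrors production; UAT and dev run at half size.
environments = {"production": 1.0, "DR": 1.0, "UAT": 0.5, "development": 0.5}

total = sum(cluster_cost * share for share in environments.values())
print(f"production alone: {cluster_cost:,}")  # -> production alone: 33,000
print(f"all environments: {int(total):,}")    # -> all environments: 99,000
```

Three times the number you first quoted to the business, before you've bought a single extra SAN shelf.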

6. Make sure you know what your acceptable consolidation ratios are. In some instances this is not dictated by the technical guys (us) but by risk appetite rather than by the technical capabilities of the hardware and software.

7. If your consolidation ratio is lower than about 12:1, your return-on-investment calculations may not stack up; typically it's only around that ratio that you start to make headway against physical boxes.

8. The 2950 platform is rock solid and scalable; talk to your Dell rep about the latest six-core processors.

9. We have just completed a proof of concept on the latest Dell blades. The result is that they are now technically on par with the HP and IBM systems. However, if you use them for VMware, be aware of the risk of placing too many VMs in one place (or one rack), and think about what happens to the business if you do. For smaller deployments like yours, 2950s might be a better bet.

Cheers,
Carl.
 
Remember you need n+1 capacity for HA: if one host goes down, you must have enough spare capacity on the remaining hosts to carry the whole workload.
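
A quick way to sanity-check n+1 sizing is to ask whether the cluster minus its biggest host can still hold all the VMs. RAM is used as the example bottleneck here, and the numbers are illustrative only:

```python
# n+1 sanity check: can the cluster still carry every VM if the largest
# host fails? Sizes are GB of RAM (often the first bottleneck in practice).
def survives_host_failure(host_ram_gb, total_vm_ram_gb):
    """True if the cluster can lose its largest host and still hold all VMs."""
    remaining = sum(host_ram_gb) - max(host_ram_gb)
    return total_vm_ram_gb <= remaining

# Three 32 GB hosts running 70 GB of VMs looks fine - until one host dies.
print(survives_host_failure([32, 32, 32], 70))  # -> False
print(survives_host_failure([32, 32, 32], 60))  # -> True
```

Same idea applies to CPU; whichever resource is tightest is the one that bites when a host drops out.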

Amazing how many people don't do that.

Also make sure all the boxes in the cluster have the same CPUs and memory size.

Also consider that your backup strategy may want to change, now that you can back up the VMs from ESX rather than just streaming from each server....
 
Thanks for the advice guys.

I had considered a good portion of what you have each said but thanks for the extra little bits of wisdom.

Thankfully we have a good relationship with VMware (global Enterprise partner) so we get a decent discount on their software for in-house usage (read: free atm).

I've been speaking to some guys in the wider business here who have deployed a couple of small SANs and VM infrastructures for other projects and should be able to tap in to their experience.

This project looks like it might become quite strategic and touch a range of services we offer, so it should get the support and attention it deserves. Rather than having to design and build it, I get to play the internal customer (albeit a picky, fussy, grumpy old sod kind of customer).

I'll be meeting with the various Test and Development teams soon to see if they have the appetite and funds to come on board although I may just end up having something built with them in mind rather than ready for them at day one.

I assume that as long as I size the fabric and storage correctly I can expand to provide them services when I need to (i.e. I can just buy some more servers and software when they are ready).
 
Let me know if you want any EMC whitepapers on VMware & EMC (CLARiiON, I assume).

Let me know if you have any specific EMC questions, though I've not had much time to look in detail at CLARiiON & VMware deployments :)

On the blade subject, I find most people go with HP. I hear Dell's blades consume more power than the sun.
 
Can't really help with details; we've run a few trials and it looks like we're settling on NetApp and HP blades. We spent some time looking at EMC, but at the time (prior to the CX4 announcement) they didn't have anything special to offer and NetApp seemed more willing to deal.

Personally I think you'd be mad to virtualise on anything other than blades; they're the perfect platform for it unless you're ultra-small (in which case you have to wonder if it's worth virtualising at all).

Having spent far too much time over the last couple of years fighting with iSCSI, I'd strongly recommend you go for Fibre Channel SAN connectivity if possible. iSCSI looks great on paper, but performance and reliability have been a real pain in our experience; Fibre Channel is expensive but proven technology.
 