HP ProLiant DL580 G5 w/VMware

Greetings, we have an upcoming project which will require the consolidation of two existing servers: one is an AD/File/Print and Exchange server, the other an SQL server. In addition, we are looking to provision two Citrix servers hosting 20 clients each. Overall there will be approx 150 network users.

I'm currently looking at the DL580 G5 for the project, but it has been a couple of years since I last worked with the 500 range and I'd like to check the details below with someone who has experience of this model.

Server Specification

4 x Quad-Core Intel® Xeon® E7430 Processor (2.13 GHz, 12MB cache, 90 Watts)
16GB PC2-5300 Memory Configured for Advanced ECC
16 x 146GB SAS 10K Hard Disks
2 x P400i SAS Controllers w/512MB BBWC
Dual Power Supplies
1 x NC382T Dual Port Gigabit Network Card in addition to the two on-board NICs


In terms of storage, I want to connect one of the backplanes to the first P400i controller and the second backplane to the additional P400i controller. I haven't yet checked whether you can split the backplane on this model, but I don't see why it wouldn't be possible given the concept of internal storage boxes that HP refers to nowadays.


I'm looking at provisioning four servers as below:

1st VM - Windows 2003 AD, File and Print
2nd VM - Exchange 2007 and SQL 2005
3rd VM - Citrix Presentation Server
4th VM - Citrix Presentation Server


I'm looking at provisioning the storage as below:

First eight drives connected to the first P400i controller, configured in RAID 10. Four logical drives created out of this array to act as OS partitions for the four servers, each being approx 146GB.

Second set of eight drives connected to the second P400i controller. Out of this I would create two RAID 5 arrays with 3 disks in each, leaving two as global hot spares. This would give two logical drives/partitions of approx 292GB each for the SQL/Exchange server and the File/Print server.
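The storage arithmetic above can be sanity-checked with a quick sketch (drive size and the usual RAID overheads assumed; this just models the capacity maths, not the controller itself):

```python
# Sketch: sanity-check the proposed array capacities (assumed 146 GB drives).
DRIVE_GB = 146

def raid10_usable(n_drives: int, drive_gb: int = DRIVE_GB) -> int:
    """RAID 1+0 mirrors pairs, so usable space is half the raw capacity."""
    return (n_drives // 2) * drive_gb

def raid5_usable(n_drives: int, drive_gb: int = DRIVE_GB) -> int:
    """RAID 5 loses one drive's worth of capacity to parity."""
    return (n_drives - 1) * drive_gb

# First controller: 8 drives in RAID 10, carved into four OS logical drives.
os_array = raid10_usable(8)
per_vm_os = os_array // 4

# Second controller: two 3-drive RAID 5 arrays, two global hot spares.
data_array = raid5_usable(3)

print(f"OS array {os_array} GB, {per_vm_os} GB per VM, data arrays {data_array} GB each")
```

So the RAID 10 set yields four ~146GB OS drives as planned, and each 3-disk RAID 5 array yields roughly 292GB usable.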

I'm looking at provisioning the network connectivity as below:

There will be a total of four 1Gbit NICs available; each one will be dedicated to a VM.


Any comments/suggestions welcome. The other scenarios I've been playing with are the c-Class blade system or individual servers. I will also need to find an appropriate backup solution that can either run on one of the VMware servers or as a separate entity.

Regards
 
What virtualisation software are you going to be using?

Don't consolidate all those workloads into one physical host. I know you can do it, but it just sends a shiver down my spine. All it takes is one motherboard to go on the fritz and the whole lot is off.
 
Indeed, I can understand your concerns; it's principally why I'm looking at the 500 series, and it will have a 24/7/4 Care Pack. I'm willing to look at alternatives though.

It will be running VMware ESXi, but that depends on whether the free licence covers all the sockets in the server; otherwise it will be full-blown ESX.
 
How about 2 DL360's with 8GB each? Even with ESXi you can just licence it for HA, and have that extra peace of mind of not having all your eggs in one basket.

edit *** although with HA you'll need some form of shared storage, even NFS will do the trick...
 
How about 2 DL360's with 8GB each? You are going with a SAN storage solution anyway. Even with ESXi you can just licence it for HA, and have that extra peace of mind of not having all your eggs in one basket.

Understood, I was trying to avoid using a SAN due to the fact that the DL580 supports 16 internal drives.

Going with two servers and an MSA storage box is a good alternative though. I can drop two VMs on each server; HA shouldn't be needed.
 
One question

What are you using to virtualise the boxes?

Next thing: 1 box is all your servers in 1 basket; if the server fails, everything goes. Personally, get yourself a couple of DL380s and put either an FC, iSCSI or SAS array behind them, maybe even 2. c-Class blades work out more cost-effective at around the 7-8 servers (depending on spec) in a chassis point; below that it's cheaper to get single boxes.

Don't put SQL and Exchange on the same hardware or VM, or on the same drive arrays. They are both I/O-heavy apps, so they will affect each other.

Network config will depend on the hypervisor, but you don't normally dedicate a NIC to a VM; you assign NICs to virtual networks. You'll also need a NIC for management.
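The point about assigning NICs to virtual networks rather than to individual VMs can be sketched as a simple plan check (all NIC and port-group names here are hypothetical placeholders; this models the concept, not any real ESX API):

```python
# Sketch: NICs back virtual switches; VMs attach to those switches, not to NICs.
from collections import defaultdict

# Hypothetical assignment: one NIC reserved for management, the remaining
# three teamed on a single vSwitch that all four guests share.
nic_to_vswitch = {
    "vmnic0": "vSwitch0",   # management / service console
    "vmnic1": "vSwitch1",   # VM traffic
    "vmnic2": "vSwitch1",
    "vmnic3": "vSwitch1",
}

vm_to_vswitch = {
    "AD-File-Print": "vSwitch1",
    "Exchange-SQL": "vSwitch1",
    "Citrix-1": "vSwitch1",
    "Citrix-2": "vSwitch1",
}

# Group NICs by switch to see the resulting teams.
teams = defaultdict(list)
for nic, vswitch in nic_to_vswitch.items():
    teams[vswitch].append(nic)

# Every VM's switch must actually have uplinks, and management stays separate.
assert all(vsw in teams for vsw in vm_to_vswitch.values())
assert "vSwitch0" in teams, "a management NIC must be reserved"
print(dict(teams))
```

The upshot is that a failed NIC only thins out a team rather than cutting off one specific VM, which is the advantage over the one-NIC-per-VM plan earlier in the thread.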

Backups depend on the hypervisor you're running.

There you go for starters...
 
The thought of consolidating so many core services into one box, especially Exchange, sends shivers down my spine. The 'eggs in one basket' analogy comes to mind.

Personally I'd sit Exchange on its own and have it do nothing but Exchange/replication. The thought of losing mail alone, never mind the other services... well, I may as well grab my coat and walk. Just wouldn't be acceptable.

Like J1nxy said, G4's are cheap now (and still very good workhorses); grab yourself a couple of those at least. SQL/ERP apps on one, Exchange on the other, and virtualise everything else if need be.
 
Ok somebody posted while I was typing ;)

Free ESXi is not designed for what you are doing; spend some money and buy ESXi with add-ons, Xen or even Virtual Iron. Virtual Iron, at the scale you're looking at, is probably the most cost-effective and includes most of the clever stuff that ESX does but charges for.

DL380s with 16GB is better, and not much more expensive; then you have the capacity on 1 server to pick up the load from both, giving you maintenance windows etc. If you use a small disk array like the 22xx series MSAs, they won't give immense throughput but they add a degree of flexibility.
 
All good advice guys. I'll go with the general perception that a single server is a bad idea and that splitting the VMs between two would be a better method. I haven't had experience of iSCSI, so I'd prefer FC.
 
FC is pretty expensive; good, but expensive. Anywhere up to £800 for 8Gb HBAs etc.

If you were considering everything on one server, and the load would have been OK for that, I'd say you were in the iSCSI segment of the market tbh.
 
So, a quick revision: I would be looking at something resembling the following.

2 x DL380 G5's or DL360 G5's w/2 x quad-core processors, 8GB memory, 4Gb FC adapters and 2-4 hard disks in RAID 1+0, each running two VMs

A single HP StorageWorks 2000fc Modular Smart Array for shared storage between all VMs
 
Looks like you've got the idea nailed now. I'm not really clued up on VMware, but just some thoughts....

I'd go for 2 beefy DL380's with the shared/global storage behind (with the VM's on this). So you have the same volume(s) presented to both servers, which is good if you want to use V-Motion. You want to also size virtual machines appropriately so that if you have to fail-over VM's from one box, all 4 will still run on a single host (albeit with degraded performance).
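That fail-over sizing rule can be expressed as a quick back-of-envelope check (the per-VM RAM figures and the 1GB hypervisor overhead below are assumed for illustration only; the 16GB host figure is the one suggested elsewhere in the thread):

```python
# Sketch: check that all four VMs still fit on one host after a fail-over
# (hypothetical memory figures; each host assumed to carry 16 GB of RAM).
HOST_RAM_GB = 16
HYPERVISOR_OVERHEAD_GB = 1   # assumed reservation for the hypervisor itself

vm_ram_gb = {
    "AD-File-Print": 3,
    "Exchange-SQL": 6,
    "Citrix-1": 3,
    "Citrix-2": 3,
}

total = sum(vm_ram_gb.values())
headroom = HOST_RAM_GB - HYPERVISOR_OVERHEAD_GB - total
assert total <= HOST_RAM_GB - HYPERVISOR_OVERHEAD_GB, "VMs won't fit on one host"
print(f"total VM RAM {total} GB, fail-over headroom {headroom} GB")
```

With these example numbers the combined load only just fits, which is the point: size the guests so the surviving host can carry all four, even if performance degrades.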

Going to go for ESX 3.5 rather than ESXi? Does the VMware clustering cost extra? (I'm guessing it does, just not sure by how much.) V-Motion?

Are AMD processors still preferred for VMware? That's what I've heard....

Still, all this depends on cost.
 
Your storage in the servers is purely for the hypervisor and local swap space, not the VMs. I'd go with the 380s as they can take more cards. I'd also put 16GB of RAM in them.

You could use the SAS-attached version of the MSA2000; it might be cheaper than the FC version.
 
V-Motion is a costed option on ESX 3.5 or ESXi.

I'd still recommend having a look at Virtual Iron, as it is significantly cheaper and also does everything you want, shared storage etc., out of the box.
 
Sorry, one more thing: if you go down the VMware route you'll also need a Virtual Centre licence, and all of them need a server to run as the management console (this can be a VM).
 
You can actually buy a package that includes ESX 3.5 / VMotion / VCB (VMware Consolidated Backup) and Virtual Centre. It's called something like VMware Infrastructure 3.

If you are using VMware I would suggest putting it on a SAN and then clustering the VMware servers into HA mode. That way it will even the servers out amongst the hosts, and if one goes down it will bring it up on another. You need them all to have shared access to the storage though (hence a SAN).

As long as you buy the infrastructure module, HA doesn't cost more as well.



M.
 
The VI3 licence includes the VCB/HA/VMotion and the licence to be managed by a Virtual Centre, not Virtual Centre itself. That is a separate server licence.

What you're talking about with HA isn't HA, it's DRS.
 
I can recommend the IBM DS3300 iSCSI box used with 380 G5's; put it into a couple of our offices and had no issues. I'd also go with the shared storage + 2 servers to allow for maintenance / hardware failures. Perhaps also look at Xtravirt's free replication software. Not fully enterprise, but it might do the trick depending on what you need.
 
http://www.vmware.com/pdf/ha_datasheet.pdf

DRS does control the HA side of things; however, it's still classed as HA, as the datasheet above will show you.

You get the VC licence when you buy the infrastructure product. I know this because we've just bought it! Worth a look IMHO.



M.
 