Virtualisation

So, who is using, or planning to use, virtualisation? And if you do, what form does it take? VMware ESX, MS Virtual Server, etc.

Currently building a test bed system for Windows Server 2008, looking ahead to Hyper-V and the possibility of using some form of virtualisation for our smaller applications.

Would be interested to hear about your experiences, and the hardware specifications you have used. :)

Cheers
 
The usual enterprise level really: HP blades for processing, NetApp/EMC for storage (still evaluating for the full rollout) and ESX to tie it all together...

Each blade is 2x quad core 2.83GHz, 16GB RAM, 2x 15k drives for booting. Initially we're running with a full C7000 enclosure (16 blades). Full production would likely be 6 enclosures per site across 3 sites (2 primary, 1 DR), for a total of 96 physical servers and around 110 virtual machines running at each site.

We're currently struggling with power at our datacentres; the problem is that a full blade enclosure can draw something like 40A according to HP, and we're considering 3 per cabinet for a possible 120A power draw. That's more than we've traditionally allowed for 4 racks, so there are going to be lots of empty footprints...
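For anyone doing the same sums, here's the back-of-envelope maths as a quick Python sketch (the figures are just the ones above, not official HP specs, and your own power budgets will obviously differ):

```python
# Rough capacity and power sums for the layout described above.
# All figures are the ones quoted in this post, not official HP specs.

blades_per_enclosure = 16      # full C7000
enclosures_per_site = 6
print(blades_per_enclosure * enclosures_per_site)   # 96 physical hosts per site

amps_per_enclosure = 40        # worst-case draw quoted by HP
enclosures_per_cabinet = 3
print(amps_per_enclosure * enclosures_per_cabinet)  # 120A per cabinet
```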
 
We use VI 3.5 for the production system, running on DL380 G5s or BL460c quad cores with 16GB RAM. Blades sometimes don't prove cost effective due to hosting restrictions at the data centres on both amp draw and heat generation.

Storage is HP EVA or NetApp FAS boxes, fibre attached, with disk type depending on the function being hosted.

Backup, monitoring and reporting are handled by Veeam.

Depending on what you want out of virtualisation (VMotion, HA, DRS etc.), it's worth having a look at XenServer by Citrix, as it's significantly cheaper per box than VMware and can provide the majority of the functionality. These functions can also be added in quite a nice way by using http://www.platform.com/Products/platform-vm-orchestrator
 
What's Veeam like? It's on my task list to call them sometime.

I know what you mean about heat and power on blades, but we're just adapting our datacentres to suit. It means we have a couple of places with huge rooms containing just 2 widely spaced rows of racks and a desk facing them, so we have enough power (power is generally allocated per footprint for us). Looks fantastically space age though...

We looked at Xen but it wasn't really ready for production when we spoke to them, and money isn't the concern...
 
Veeam is, to put it mildly, bloody brilliant. The management suite is very good, doing both monitoring and reporting, and has a couple of other bits chucked in. The backup plugs into VCB but does some very clever stuff around incremental and full backups. It also has a nice replication function which gives a good way of doing DR between SANs etc. What adds to the attraction, though, is that it's really, really cheap. If you want a decent bunch of folks to help get it installed and configured, chuck me a mail and I'll point you in the right direction.

On the blades it depends on who hosts the racks, but we have some sites which can take only 2,000 BTU per rack (average) and a max of 3,000 per individual rack, or 16 amps per rack. This obviously limits our options quite a bit.
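As an illustration of why the heat figure is the one that bites first, a rough sketch (the 230V single-phase feed is my assumption for illustration, not something from the hosting contract):

```python
# Which hosting limit do you hit first? Figures are the ones quoted above;
# the 230V supply voltage is an assumption for illustration.

volts = 230
amp_limit = 16                      # per rack
btu_limit = 3000                    # max BTU/hr for an individual rack

watts_available = volts * amp_limit           # ~3,680W you could draw electrically
btu_if_fully_drawn = watts_available * 3.412  # ~12,550 BTU/hr of heat that would make
print(round(btu_if_fully_drawn), btu_limit)
# The cooling allowance runs out long before the 16A does,
# so heat, not amps, is the real constraint at those sites.
```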
 
Heh, some nice set-ups there guys :)

We aren't big enough (yet) to be considering blades (and we certainly don't have the power capacity). We don't have a SAN, so that won't work either :D

I am mainly looking at virtualisation for cutting down on the amount of hardware we would have, as we do have a lot of small apps sitting on a lot of DL320-class servers. Obviously after you get to a certain point you would be benefiting from reduced power and heat requirements. Oh, and the Carbon Footprint people are hovering closely, what with us being a Local Authority :p
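To put some (entirely made-up) numbers on the consolidation argument, a rough sketch of the sums I'm doing; every figure here is an assumption for illustration rather than anything measured:

```python
# Rough consolidation sums - all figures are illustrative assumptions,
# swap in your own server count and measured power draw.

small_servers = 20               # DL320-class boxes, one app each
watts_per_small_server = 250     # assumed draw per 1U box
vms_per_host = 10                # assumed consolidation ratio
watts_per_host = 500             # assumed draw for a bigger dual-socket host

hosts_needed = -(-small_servers // vms_per_host)     # ceiling division -> 2
print(hosts_needed)
print(small_servers * watts_per_small_server)        # 5000W before
print(hosts_needed * watts_per_host)                 # 1000W after
```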

We currently have a 6 server Citrix farm, so I will have a look at XenServer. Ta
 
To be honest, server virtualisation really benefits from having shared storage, be it iSCSI or Fibre Channel, as you get the benefit of high availability and the VMotion-type technologies. In effect, what you are doing when you virtualise is putting a lot of eggs into 1 (server-shaped) basket.
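A toy illustration of that point (purely a sketch of the idea, not how VMware HA actually works internally): a failed host's VMs can only be restarted on hosts that can see the same datastore.

```python
# Toy model: a VM can only be restarted elsewhere if the surviving host
# can see the VM's datastore. Purely illustrative, not the real HA logic.

hosts = {
    "esx1": {"shared_lun1"},
    "esx2": {"shared_lun1"},
    "esx3": {"local_disk_only"},   # no SAN connectivity
}

def failover_candidates(vm_datastore, failed_host):
    return [h for h, stores in hosts.items()
            if h != failed_host and vm_datastore in stores]

print(failover_candidates("shared_lun1", failed_host="esx1"))   # ['esx2']
```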

If you have a small-scale system, try Virtual Iron. It's based on the Xen hypervisor, has some of the fun features that make virtualising so good, and is also pretty cheap (cheaper than XenServer).
 
Cheers J1nxy, will give that a look too!

I appreciate, and agree, that shared storage is the best way for virtualisation. I would envisage that a SAN isn't far off for us; within the next few years I could see us having one.

The apps that I am looking at at the moment are mainly our management tools (Insight Manager, WSUS, anti-virus, etc.). So fairly low key, and something from which I can recover if something goes wrong.
 
Blades can be more cost effective from 2 units upwards, depending on the network and SAN connectivity. Also, you could try looking at the C3000 "shorty" enclosure, which in its tower form will run off standard mains wall plugs :)

If you are using Insight Manager, I would recommend getting hold of Insight Control Environment and having a play with that. There is a trial version on the HP site somewhere.
 
Running 2 DL385s with ESX Server. Replaces a blade setup which was only good for converting electricity into heat and noise.
Using ESX to run multiple instances of Citrix servers for software development, as well as Windows servers for COM objects. Made the software testing a lot easier.
 
Umm, blades are far more efficient when utilised properly...
 
Granted, blades are useful and they served us well for 18 months, but under VMware we can install a clean VM, or do what we did: a P2V of a Citrix server, then clone that VM. Snapshot the VM prior to installing our software to be tested; once testing is complete, restore the clean snapshot and away we go with the next version. A lot easier than the blade set-up we had, and when we put the blades in, ESX was pretty expensive, so we went blade.
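That test cycle is easy to sketch in a few lines; this is just a stand-in model of the clone/snapshot/revert loop described above, not a real VMware API:

```python
import copy

# Stand-in model of the clone -> snapshot -> test -> revert cycle.
# Not a real VMware API, just the shape of the workflow.

class TestVM:
    def __init__(self, template):
        self.state = copy.deepcopy(template)         # "clone" of the clean P2V'd image
        self._snapshot = None

    def snapshot(self):
        self._snapshot = copy.deepcopy(self.state)   # remember the clean state

    def revert(self):
        self.state = copy.deepcopy(self._snapshot)   # discard everything since

clean_citrix = {"os": "Windows 2003", "apps": []}
vm = TestVM(clean_citrix)
vm.snapshot()
vm.state["apps"].append("build-1234")    # install the software under test
vm.revert()                              # back to clean for the next version
print(vm.state["apps"])                  # []
```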
 
So why not ESX on blade? It's the most obvious use of blade servers by far, and it's the most easily managed environment I've ever used; if I want to add another server to the resource pool then it's literally 45 minutes from the box arriving to being in production in ESX.
 
Many reasons why. The p-Class blades we used went end of life very quickly, and they weren't connected to a SAN, they just used internal SAS drives. It was a stop-gap solution to replacing GSX Server, which just about coped until something better came along, which was the now-cheaper ESX Server.
Plus, in our old offices the A/C was struggling a bit due to the location of the server room, and the blades just pumped out heat. The day we shut them down, the A/C breathed a sigh of relief.
 
Don't the licensing costs go through the roof when using blade servers though? It might be a misunderstanding on my part, but I thought the idea of blades was that they were less powerful than a large server, but you could fit more blades in the same space and they would cost less as well. If this is the case, I would have thought the cost of ESX licences would be huge.

The virtual environment I implemented consisted of 3 x Dell 6850s and an EMC CX300 fibre-attached SAN. The idea of blades was there, but lack of experience with them caused me to go down the large-server route.
 
Some of the blades (the BL680c, for example) can scale to 4 quad-cores and 64GB RAM. The restriction on blades is almost always DIMM and I/O (fibre HBA, NIC) slots. Licensing-wise, however, it remains the same; it just depends on whether you go rack-mounted vs blades, and scale up vs scale out.
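A quick illustration of why the licence count doesn't move between scale-out and scale-up, assuming a simple per-socket licence (the figures are illustrative; check the actual VMware terms for your version):

```python
# Same total sockets -> same licence count, whether you scale out or up.
# Assumes straightforward per-socket licensing; figures are illustrative.

def sockets(servers, sockets_per_server):
    return servers * sockets_per_server

scale_out = sockets(servers=8, sockets_per_server=2)   # 8 dual-socket blades
scale_up  = sockets(servers=4, sockets_per_server=4)   # 4 quad-socket rack boxes
print(scale_out, scale_up)                              # 16 and 16
```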

Ahhh. So really the main benefit of the blade environment is just about fitting more of them into one area? Good to know that.

Current place I'm at will see a deployment of 'big boxes' (Dell R900s) rather than blades...

Are there any major differences between the R900s and the 6850s? It's been a good few months since I've dealt with Dell kit, so I've not really been keeping up to date with it.
 
Sorry, I didn't mean that to sound like I was having a go at blades. I had just never really looked into them enough to know what the benefits are.

I would still like to have a play with a chassis though (or at least see a picture of the back of one*), just so I can get a better idea about them.


*hint hint
 