Best disk structure for virtualisation

Associate | Joined 20 Oct 2002 | Posts: 1,127 | Location: Redcar
Hi

We’re upgrading some of our oldest kit into a virtualised environment with Microsoft Hyper-V. The old servers all have two logical disks: the system disks are two-disk RAID1 mirrors, and the data disks are a mix of RAID5 and RAID1.

The new server can take 4x 2.5" local disks. The rest of the data storage will be going on our expanded SAN.
Now that we're consolidating four machines into one, what is the best disk structure for the virtual hard disk images to run from? RAID10? This array would be created directly on the hardware running the VMs. I was also thinking of creating a six-disk RAID10 array on the SAN for the entire VM data store, as the data is mainly SQL databases with some light-traffic web sites.

Can anybody offer any comments on the above setup? Total number of disks and setup is still flexible at this point.
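To put some rough numbers on the candidate layouts, here's a minimal usable-capacity sketch. The disk counts and the 146GB size are assumptions for illustration, not the actual spec under discussion:

```python
# Rough usable-capacity comparison for single-group arrays.
# Disk size (146GB) is an assumed figure, not the real spec.

def usable_gb(level: str, disks: int, disk_gb: int) -> int:
    """Usable capacity in GB for a simple single-group array."""
    if level == "RAID1":
        return disk_gb                   # mirror: one disk's worth of space
    if level == "RAID5":
        return (disks - 1) * disk_gb     # one disk's worth lost to parity
    if level == "RAID10":
        return (disks // 2) * disk_gb    # striped mirrors: half the raw space
    raise ValueError(f"unknown level: {level}")

for level, disks in [("RAID1", 2), ("RAID5", 4), ("RAID10", 4)]:
    print(f"{level}: {disks} x 146GB -> {usable_gb(level, disks, 146)}GB usable")
```

RAID10 gives up half the raw space, same as RAID1, so the choice between them is about performance and rebuild behaviour rather than capacity.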

thanks
 
Personally, if you're storing everything on the SAN I would just have two disks mirrored. The installation doesn't take up much space (2x 72GB drives would leave about 60GB free for logs, etc.), so you'll have more than enough.

Obviously if one goes down you continue on the other, but it would be wise to have a spare hard drive on hand while you wait for the failed one to be replaced.



M.
 
That’s the situation we have now, with most of the machines just running RAID1; it's lasted us well and we have good procedures in place for the occasional disk failure.
As I have the opportunity with new kit, would it be advisable, from a cost-vs-performance standpoint, to move to RAID10 for the increased write speed? I know the dev team always likes a performance increase.
Other than RAID10, what other options do I really have? Two RAID5 arrays striped, i.e. RAID50?
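For the cost-vs-performance argument, the standard small-write penalty figures give a rough comparison. The 150 IOPS per spindle is an assumed figure for a 10k SAS disk, not a benchmark:

```python
# Back-of-envelope random-write IOPS for the layouts being weighed.
# WRITE_PENALTY = physical I/Os per logical small write (standard figures:
# mirrors write twice; RAID5/50 read data + parity, then write both back).
DISK_IOPS = 150  # assumption: one 10k SAS spindle doing random I/O

WRITE_PENALTY = {"RAID1": 2, "RAID10": 2, "RAID5": 4, "RAID50": 4}

def write_iops(level: str, disks: int, disk_iops: int = DISK_IOPS) -> int:
    """Aggregate random-write IOPS the whole array can sustain."""
    return disks * disk_iops // WRITE_PENALTY[level]

for level, disks in [("RAID1", 2), ("RAID10", 4), ("RAID50", 6)]:
    print(f"{level} on {disks} disks: ~{write_iops(level, disks)} write IOPS")
```

On these numbers a four-disk RAID10 roughly doubles RAID1's random-write throughput, while RAID50 needs more spindles to match it because each small write costs four physical I/Os.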
 
Well, all that the server will be hosting is the operating system for Hyper-V. It won't be doing any real work, as all of that will be done from the SAN, so there's no real boost there other than loading Hyper-V, and that won't make much difference speed-wise. Personally I don't think it would be worth the additional cost.



M.
 
Why Hyper-V rather than the more capable and proven VMware?
We're Microsoft everything; nobody here knows VMware, and I assume we get a good deal on Server 2008 products. I just know it's what we'll be using.

I've been looking for some good benchmarks of the real-world benefit of RAID10 over RAID1. I need to explain in detail why I want four more disks; damn credit crunch.
 

VMware is far superior to Hyper-V in its current form. Having trialled both, I can tell you that if you need to do serious virtualisation then VMware is the way to go.

Just because you are a "Microsoft" house doesn't mean you should automatically rule out non-MS products, especially if one happens to be the market leader.

http://www.itcomparison.com/Virtualization/MShypervvsvi35/HyperVvsvmware35esx.htm
 
It's mainly down to cost: we get the Server 2008 licenses as part of our MS agreement, which would otherwise go to waste, and management would be reluctant to spend money on something else they think they're already paying for.
We've been running Server 2008 Core (which does take a little while to get used to, I'll agree) for four months with the Hyper-V role for our co-located web servers and SQL box, with no problems. I guess that because we've never used VMware we don't really know what we're missing?

Anyway, the VM of choice is by the by. It's the disk setup I'm most interested in getting right this time, as I'm the one who has to justify it.
 
You mean boot the Server 2008 Core install directly from the SAN? I thought it was just easier to boot the server doing the virtualisation from local disks; it's the configuration of those disks I was bothered about. The choices are RAID1, RAID10 or RAID5.
 

A good option, though it takes a brave decision maker to go with it, as it's generally resisted by the tech guys because it just doesn't seem right. At the last place I worked we were pushing this approach for all server builds, as it made backup and DR much more straightforward. Personally I say go for it, provided your SAN network is up to it.
 

Have you seen my other thread about the new network we're planning :)
 
We've just shifted to ESXi on iSCSI: a PowerEdge 2950 running four boxes from the Buffalo on a separate gigabit network.

We noticed that you need to make sure the switch is up to the job, because the throughput is pretty intense.

On top of that we've moved on to Double-Take and LiveWire which, from what I hear, seems hugely impressive, but I haven't had a chance to look at it personally yet.
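On the throughput point, a back-of-envelope ceiling for a single GbE iSCSI path shows why the switch matters. The 90% efficiency figure is an assumption covering TCP/IP and iSCSI header overhead, not a measurement:

```python
# Rough ceiling for iSCSI traffic over one gigabit Ethernet link.
LINK_BPS = 1_000_000_000   # gigabit Ethernet line rate
EFFICIENCY = 0.9           # assumption: TCP/IP + iSCSI protocol overhead

payload_mb_s = LINK_BPS * EFFICIENCY / 8 / 1_000_000  # bits -> megabytes/s

for block_kb in (8, 64):   # e.g. SQL-page-sized vs large sequential I/O
    iops = payload_mb_s * 1000 / block_kb
    print(f"{block_kb}KB blocks: ~{payload_mb_s:.0f} MB/s, ~{iops:.0f} IOPS max")
```

Around 112 MB/s is less than a single modern array can stream, so a handful of busy VMs sharing one link will hit the wire limit before the disks do; hence the separate gigabit network for storage traffic.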
 