VM host best disk config?

I'm getting a new server, well it's actually here, and I'm wondering what the best way to configure the disks would be. It's got eight 300GB 10K SAS disks connected to a hardware RAID controller.

Reading around, I've more or less decided to create a RAID 1 pack of 2 disks for the OS and then use the remaining 6 disks as a RAID 10 pack. My query, and I've read conflicting reports here, is whether I actually need the separate RAID 1 pack for the OS, or whether I should put all 8 disks in a single RAID 10 pack and then create a partition for the OS.

Some people reckon that keeping the OS on a separate pack is the way to go; others suggest that a single pack will let you make the most of the space and won't impact the VMs running on the host.
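For what it's worth, the usable space works out the same either way, so it's really about isolation rather than capacity. A quick sketch, assuming 300GB per disk:

```python
DISK_GB = 300  # assumed per-disk capacity

def raid10_usable(n_disks, disk_gb=DISK_GB):
    # RAID 10 mirrors every pair, so half the raw capacity is usable
    return n_disks // 2 * disk_gb

# Layout A: RAID 1 pair for the OS plus a 6-disk RAID 10 for VMs
layout_a = DISK_GB + raid10_usable(6)  # 300 + 900 = 1200 GB

# Layout B: one 8-disk RAID 10 with an OS partition carved out
layout_b = raid10_usable(8)            # 1200 GB

print(layout_a, layout_b)  # same usable space either way
```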
 
What hypervisor are you running as a host? If it's ESXi just install it on a USB stick and use all 8 HDDs in a RAID10 pool :)
 
I'm going to put Server 2012 on; I haven't decided whether to then go for Hyper-V or maybe VMware Workstation 9, as I already have a license for it.

I've created a RAID1 pack using 2 disks for the OS and then configured the remaining 6 disks as a RAID10 pack. I have a backup of the VMs, so if the worst comes to the worst I can always reconfigure the RAID.
 
RAID10 is nice and all that, but the amount of space you lose would stop me using it outside of an enterprise scenario. I'd opt for RAID5, unless there was something I really, really needed the extra speed for.
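To put a number on the space argument, again assuming 300GB disks: RAID5 gives you n−1 disks' worth of capacity, RAID10 only n/2.

```python
DISK_GB = 300  # assumed per-disk capacity
N = 6          # disks in the VM pack

raid5_usable = (N - 1) * DISK_GB    # 1500 GB: one disk's worth lost to parity
raid10_usable = (N // 2) * DISK_GB  # 900 GB: every disk mirrored

print(raid5_usable - raid10_usable)  # RAID5 buys an extra 600 GB here
```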
 
We always have the OS on a RAID1 array separate from any data. I'd say that your decision to go RAID10 is reasonable, mainly because RAID5 is going to be horrible for VMs due to its low random write performance.
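The write-penalty arithmetic backs this up. A rough sketch, assuming the usual rule of thumb of ~140 IOPS per 10K SAS spindle and the standard write penalties of 2 for RAID10 and 4 for RAID5:

```python
SPINDLE_IOPS = 140  # rough rule of thumb for a 10K SAS disk
N = 6               # disks in the VM pack

def random_write_iops(n_disks, write_penalty):
    # Every front-end random write costs `write_penalty` back-end I/Os:
    # 2 for RAID10 (data + mirror); 4 for RAID5
    # (read data, read parity, write data, write parity).
    return n_disks * SPINDLE_IOPS // write_penalty

print(random_write_iops(N, 2))  # RAID10: ~420 front-end write IOPS
print(random_write_iops(N, 4))  # RAID5:  ~210 front-end write IOPS
```

So for the same 6 spindles, RAID5 halves your random write throughput compared with RAID10, and VM workloads tend to be heavy on exactly that.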
 
What hypervisor are you running as a host? If it's ESXi just install it on a USB stick and use all 8 HDDs in a RAID10 pool :)

That's alright for a home lab but the quantity of writes made to the stick (logs etc) will degrade it pretty quickly and you'll have issues if you use it in anger.

No way I would use RAID5 for VMs unless you have specific constraints around space coupled with some very low IO requirements.
 
That's alright for a home lab but the quantity of writes made to the stick (logs etc) will degrade it pretty quickly and you'll have issues if you use it in anger.

No way I would use RAID5 for VMs unless you have specific constraints around space coupled with some very low IO requirements.

Move the logs. There are virtually no writes apart from that. VMware actually advise using a small USB/SSD for boot now.
 
That's alright for a home lab but the quantity of writes made to the stick (logs etc) will degrade it pretty quickly and you'll have issues if you use it in anger.
Not so. ESXi automatically moves its scratch space onto proper storage as soon as it detects it. All our production HP ProLiant blades boot ESXi off built-in SD cards.
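And if it doesn't relocate scratch on its own, you can point it (and the syslog directory) at a datastore yourself. A minimal sketch using the Python that ships in the ESXi shell; the datastore path is hypothetical, so substitute your own:

```python
import subprocess

# Hypothetical datastore path; change to suit your environment.
LOCKER = "/vmfs/volumes/datastore1/.locker"

# Point the scratch location at persistent storage (takes effect after reboot).
subprocess.run(
    ["vim-cmd", "hostsvc/advopt/update",
     "ScratchConfig.ConfiguredScratchLocation", "string", LOCKER],
    check=True,
)

# Send syslog output to the same datastore instead of the boot device.
subprocess.run(
    ["esxcli", "system", "syslog", "config", "set",
     "--logdir", LOCKER + "/log"],
    check=True,
)
```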
 