Spec me a storage strategy for ESX + blades

I'm building an IBM blade environment that's going to run VMware. I'm starting out small, but storage requirements are a bit of a priority, especially in terms of space. Here's where I'm at:

I'm going with the IBM BladeCenter S, as that has potential for built-in SATA/SAS storage. I'll stick a filer (probably Solaris with ZFS) on blade #1, expose all the storage to that blade and then share it out via iSCSI to the other blades, which will be ESX hosts.
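Rough sketch of what I'm thinking on the ZFS side (assuming an older Solaris build where the shareiscsi property is available; pool, volume and device names are just placeholders):

    # stripe of mirrored pairs - ZFS's take on RAID 10
    zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0
    # carve out a block volume to hand to the ESX hosts
    zfs create -V 500g tank/esx-lun0
    # export it as an iSCSI target (newer builds do this via COMSTAR instead)
    zfs set shareiscsi=on tank/esx-lun0

The ESX boxes would then pick that up with their software iSCSI initiator and format it as VMFS. Openfiler does the same job through its web UI.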

However, I need some guidance on where to go with the storage. I've got a budget of about £1800 ($3500), and need about 2TB, ideally 4TB. I'm not looking for blazing performance - just lots of cheap SATA space with enough speed to make running VMs on it not silly slow. Resiliency would be nice though.

My original plan was to create a big RAID 10 array using the chassis SATA/SAS controller. However, it occurs to me that I might also be able to do all this in software with ZFS. The halfway house would be to have lots of individual hardware RAID 1 arrays and then just stripe those together in software, but I'm not sure what I'd gain from that. I've considered RAID 5, but (a) I'm not sure the SATA controller supports it and (b), given the traditionally slow write speeds you get with RAID 5, could you run the VMs' virtual hard disks on it?
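To put rough numbers on the software option, if the chassis ends up with (say) eight 1TB SATA disks, the two obvious ZFS layouts come out something like this (sizes and disk names purely illustrative):

    # stripe of four mirrors (the RAID 10 layout): ~4TB usable,
    # random I/O scales with the number of mirror pairs
    zpool create tank mirror d0 d1 mirror d2 d3 mirror d4 d5 mirror d6 d7
    # single raidz (RAID 5 style): ~7TB usable, but one raidz vdev only
    # delivers roughly the random IOPS of a single disk
    zpool create tank raidz d0 d1 d2 d3 d4 d5 d6 d7

The halfway house (hardware RAID 1 pairs striped in ZFS) ends up looking like the first layout anyway, except ZFS can no longer self-heal because it only sees one copy of the data, so plain ZFS mirrors seem the simpler bet.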

The only other variable is that I might use Openfiler instead of Solaris. But that's not a major factor at the moment.

So, any advice anyone might have would be great. :)
 
What are you trying to run in the VMs? Disk I/O is usually the biggest bottleneck on VMs. I ran about 6-10 machines on an MSA1500 F/C at 2Gb, which I had fully specced with write cache etc. and 15k 146GB drives, and the performance was slow once anything began to put any disk I/O load on it.
 
It's a complete mix of stuff. A lot of it is reproducing development environments for code analysis; some of it is for remote scanning of targets. Not going to be doing any long-term serving at the moment...
 
No, what we're talking about is seek time and random read/write performance, not sustained read/write. Think about what a VMware server is doing: each guest OS is running out of a flat file (possibly spread over multiple disks) but operating as if it were a normal OS, so it's jumping around to read and write. Multiply that behaviour by several OSes and performance just plummets. The number I tend to work on is six VMs per disk group, depending on what they're doing, with things like SQL servers having log, OS and data spread across separate disk groups.

The number you're interested in for this sort of storage is IOPS, not raw throughput.
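Rough back-of-envelope, using common rule-of-thumb figures (around 80 random IOPS for a 7.2k SATA spindle, with a write costing 2 back-end I/Os in RAID 10 and 4 in RAID 5):

    8 x SATA in RAID 10: ~8 x 80 = 640 random read IOPS, ~640 / 2 = 320 write IOPS
    8 x SATA in RAID 5:  ~8 x 80 = 640 random read IOPS, ~640 / 4 = 160 write IOPS

Shared across ten or more VMs that's only a few tens of IOPS each, which is why spindle count and RAID level matter far more than raw capacity for this.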
 
OK, I'll do a bit more research. I've read around a bit and some people are reporting reasonable speeds using SATA, but only in RAID 10 with a lot of spindles.

The only alternative on my budget is SAS, so I'll have to figure out what I can get for the money and whether that'll be enough space.
 