I'm building an IBM blade environment that's going to run VMware. I'm starting out small, but storage requirements are a bit of a priority, especially in terms of space. Here's where I'm at:
I'm going with the IBM BladeCenter S, as that has potential for built-in SATA/SAS storage. I'll stick a filer (probably Solaris with ZFS) on blade #1, expose all the storage to that blade, and then share it out via iSCSI to the other blades, which will be ESX hosts.
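For concreteness, here's the sort of thing I'm picturing on the filer blade (just a sketch - the pool name, volume size, and the legacy Solaris shareiscsi target are my assumptions; newer builds would use COMSTAR instead):

    # Carve a ZFS volume (zvol) out of the pool to present as an iSCSI LUN
    zfs create -V 500g tank/esx-lun0
    # Export it via the legacy Solaris iSCSI target
    zfs set shareiscsi=on tank/esx-lun0

Each ESX host would then point its software iSCSI initiator at the filer blade and format the LUN as VMFS.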
However, I need some guidance on where to go with the storage. I've got a budget of about £1800 ($3500), and need about 2TB; 4TB would be nice. I'm not looking for blazing performance - just lots of cheap SATA space with enough speed that running VMs on it isn't silly slow. Resiliency would be nice, though.
My original plan was to create a big RAID 10 array using the chassis SATA/SAS controller. However, it occurs to me that I might also be able to do all this in software - ZFS. The halfway house would be to have lots of individual hardware RAID 1 arrays and then stripe those together in software, but I'm not sure what I'd gain from that. I've considered RAID 5, but (a) I'm not sure the SATA controller supports it, and (b) given the traditionally slow write speeds you get with RAID 5, could you run the VMs' virtual hard disks on it?
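To make the software option concrete, here's roughly what the two layouts look like in ZFS (a sketch with made-up device names): striped mirrors are the RAID 10 equivalent, and single-parity RAIDZ is the rough RAID 5 analogue:

    # RAID 10 equivalent: a pool striped across two mirrored pairs
    zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0

    # RAID 5-ish alternative: single-parity RAIDZ across the same four disks
    # zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0

My understanding is that RAIDZ avoids the classic RAID 5 write hole by always writing full stripes, but a RAIDZ vdev still delivers roughly single-disk random-write IOPS, which is part of why mirrors look more attractive for VM storage.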
The only other variable is that I might use Openfiler instead of Solaris, but that's not a major factor at the moment.
So, any advice anyone might have would be great.