ESXi data storage question?

Hello,
I can't find the answer easily, so I'm hoping someone can give me a yes or no.

I'm going to be setting up an N54L microserver and want to run ESXi from a USB stick inside the box.

VM images will go onto the 250GB HDD in the 5th bay.
This is where my question comes in: the 4 bays below will each have a 1TB drive in them. Can I allocate each as a datastore and then assign all four to one VM?

The reason is that I want to spin up an Ubuntu server and give it the 4x1TB drives so it can software-RAID10 them.

I can't find an answer anywhere; all my googling brings up how to create a datastore, not how to give multiple datastores to one VM.
 
Not quite sure what you're getting at with your second post, but fundamentally your original post will work ok. If you format each 1TB drive as VMFS and create a VMDK (virtual disk) on each, you can present the virtual disks to the VM and RAID away as you like.
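
To give a flavour of the in-guest side, the software RAID10 itself is a one-liner with mdadm; the device names here are assumptions, so check lsblk first to see what your virtual disks actually appear as:

    # Inside the Ubuntu guest, build a RAID10 array across the four disks:
    sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 \
        /dev/sdb /dev/sdc /dev/sdd /dev/sde
    # Then put a filesystem on it:
    sudo mkfs.ext4 /dev/md0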

However, I would suggest you create what's called a Physical RDM for each of your 1TB drives and present these to the VM instead. This means you don't need to format each as VMFS and create VMDKs. It cuts out an unnecessary layer provided you're happy for your guest OS to look after the disks directly (usually preferred for software RAID).
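
For local SATA disks ESXi won't offer to create the RDM in the GUI, so it has to be done from the host's command line. A rough sketch of what that looks like (the disk identifier and datastore name below are placeholders, not real values):

    # On the ESXi host, list the local disks to find the device identifier:
    ls -l /vmfs/devices/disks/
    # Create a physical-mode RDM mapping file on an existing VMFS datastore
    # (replace the t10... identifier and datastore name with your own):
    vmkfstools -z /vmfs/devices/disks/t10.ATA_____<your_disk_id> /vmfs/volumes/datastore1/rdm-1tb-disk1.vmdk

You then add each mapping file to the VM as an existing disk.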

The additional benefit to this is that if your guest OS gets hosed, and even if your server gets hosed, you can pull the drives and drop them into a normal workstation and recover your RAID in another Linux install. That's a useful safety net for paranoid folk.
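
For what it's worth, recovery on another machine usually amounts to nothing more than this (assuming the same mdadm-based array as above):

    # mdadm scans for RAID superblocks on the attached drives and
    # reassembles the array automatically:
    sudo mdadm --assemble --scan
    cat /proc/mdstat          # check the array came up healthy
    sudo mount /dev/md0 /mnt  # then mount and copy your data off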

Finally, be aware that the performance of your (presumably mechanical) 250GB drive is going to nosedive quickly as you put more VMs on there. You'll probably get around 100 IOPS from it on a good day, and those will get used up quickly with more than a couple of VMs running. An SSD would be a better idea if you can.

On the other hand, if you're only planning on running Ubuntu then ditch the complication of the hypervisor altogether ;)
 
On my own setup I added the SATA drives as raw volumes (RDMs) by following these instructions: http://www.vm-help.com/esx40i/SATA_RDMs.php. I then created a NAS4FREE VM and attached the raw volumes to it. In NAS4FREE I set up a single-parity ZFS storage volume using those volumes (there are other RAID options, i.e. mirrored etc.), created an NFS share, and attached that to all my ESX servers as a shared datastore. I can now easily move VMs from one ESX host to another just by removing and re-adding them to the inventory. You could take it to the next step and introduce SRM to automate it?

To improve performance you could add an SSD to the ZFS volume as a disk cache; it doesn't have to be very big either, 5GB plus. Extra memory helps with ZFS performance too. Don't be too eager to enable compression or dedup etc. on ZFS volumes, as these options need quite a bit of horsepower to run smoothly.

The only issue with this setup is that you can't set up S.M.A.R.T. monitoring on the disks from within NAS4FREE; well, I haven't been able to get it working to date. It does, however, tell you if the disks appear to be in good health and emails me a status report every 4 hours.
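
NAS4FREE drives all of this through its web UI, but for anyone curious, underneath it boils down to roughly the following ZFS commands (a sketch; the pool name "tank" and the ada1-ada5 device names are assumptions):

    # Single-parity (RAID-Z1) pool across four disks:
    zpool create tank raidz ada1 ada2 ada3 ada4
    # Add a small SSD as an L2ARC read cache:
    zpool add tank cache ada5
    # Share it over NFS so the ESX hosts can mount it as a datastore
    # (NAS4FREE handles enabling the NFS services for you):
    zfs set sharenfs=on tank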

Using 4x2TB Toshiba HDDs from OcUK ( :) ) and a 30GB SSD as a cache for my ZFS store, a VM running Server 2008 gets disk performance of about 60MB/s reads and 40MB/s writes, which is perfectly OK for a lab environment. With more tweaking I could possibly improve this, particularly around the networking side of the storage.

I also use NAS4FREE as my UPnP server when it's not being used in my lab.
 
Thank you.

I want to ESXi it primarily because I've never done it and want to learn, and because I know that if I install it as a full-blown Ubuntu server I'll end up wishing I could run another machine on it, so I'm trying to pre-empt that from the start.

Point taken about HDD performance though, so I may end up running the 250GB HDD for one or two disk images and just using the RAID10 to hold ALL of the Ubuntu server data, including the OS, rather than having an OS disk AND separate storage.
I shouldn't have to mess about with fstab auto-mounting and stuff then :-)
 