ESXi Server storage config

Planning to install ESXi on an HP DL350P Gen 8 server, and am trying to figure out the best storage approach.

For the guest VM OSs I plan to utilise the following hardware in the server:
8 x 600GB 15K 12Gb/s connected to an HP Smart Array P340

For data storage:
Internal HP Smart Array P420 connected to an external SAS expander in a 24-bay drive enclosure. This will mainly be used for a file/media server managed by one of the VMs.

At the moment I'm thinking:
Boot ESXi from a USB stick.
Create a RAID 10 array on the P340 controller, then allocate this out to VMs via vSphere etc.

Not sure about the data storage.
One of the VMs will be doing P2P/torrent, and I will allocate a dedicated disk in the 24 bay case for this.
Otherwise, there will be 12 x 3TB drives to start with, with room for expansion. One of the VMs will run Ubuntu and will perform file sharing, streaming, Plex etc. What's the best approach for configuring the storage?
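A quick back-of-the-envelope check of what those two arrays would actually yield. This is just a sketch using the drive counts from the post; it assumes RAID 10 halves raw capacity and RAID 6 loses two drives' worth to parity, and uses marketing GB/TB rather than GiB/TiB.

```python
# Rough usable-capacity check for the arrays described above.
# Assumptions: RAID 10 yields half the raw capacity; RAID 6 loses
# two drives' worth of capacity to parity.

def raid10_usable(drives: int, size: float) -> float:
    """Usable capacity of a RAID 10 array (mirrored pairs, striped)."""
    return drives * size / 2

def raid6_usable(drives: int, size: float) -> float:
    """Usable capacity of a RAID 6 array (double distributed parity)."""
    return (drives - 2) * size

vm_store = raid10_usable(8, 600)   # 8 x 600 GB on the P340
data_store = raid6_usable(12, 3)   # 12 x 3 TB in the external bay

print(f"VM store (RAID 10):  {vm_store:.0f} GB")   # 2400 GB
print(f"Data store (RAID 6): {data_store:.0f} TB") # 30 TB
```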
 
Do you trust the USB stick not to fail? With all that RAID, the OS still sits on a single device; if it fails, you lose all your config.

Take a small partition for the OS and use the rest for the VM store. Or bang in 2 cheapo disks for the OS.

Our hosts at work use 2 x 240GB SSDs in RAID 1 for the OS; we lose about a GB for that and use the rest as a read cache. Storage is on a Tintri array via 10GbE. This is on DL380 Gen9 hosts.
 
Install ESXi onto a mirrored pair, the spare space can be used as a data store for ISO and templates ;)

As above: with all that tech for storage, I wouldn't put the OS on a USB stick. It's not if it will fail but when; I've had a few USB installs fail on me, so I no longer trust them.
 
I was originally planning on replacing the USB every 3 years as they do have limited writes. Had hoped that the BIOS would support mirroring of USB, but that's quite rare.

Still trying to figure out the best configuration for the large data storage array though.
 
Who cares if the OS USB fails? All you do is reinstall ESXi on a new USB stick and re-register the VMs (by browsing the datastore directories). The USB failing won't affect the running host either: ESXi loads into RAM at boot and barely touches the stick afterwards, so there's little danger of wearing it out. It will fail like anything else, but there's no need to be particularly precious about the OS not being resilient.

This is not Windows or Linux, where anything important is configured in the OS. It's literally the hostname, the IP, and which VMs are on it; it shouldn't take more than half an hour to get back up and running. Another advantage of USB is that you can easily pull the stick out and swap in another if you want to test a new version of ESXi (or another hypervisor) without wiping and starting from scratch.
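The "re-register by browsing the datastore" step above can be scripted. The sketch below walks a datastore mount for `.vmx` files and prints the standard ESXi shell command (`vim-cmd solo/registervm`) that would re-register each one; the helper name and the `datastore1` path are illustrative assumptions, not from the post.

```python
# Hypothetical sketch: find every .vmx file under a datastore mount and
# print the ESXi shell command that would re-register it after a fresh
# install. /vmfs/volumes is where ESXi mounts datastores; the function
# name and the example datastore path are assumptions.
from pathlib import Path

def registration_commands(datastore_root: str) -> list[str]:
    """Return one 'vim-cmd solo/registervm' command per .vmx found."""
    return [
        f"vim-cmd solo/registervm {vmx}"
        for vmx in sorted(Path(datastore_root).rglob("*.vmx"))
    ]

if __name__ == "__main__":
    for cmd in registration_commands("/vmfs/volumes/datastore1"):
        print(cmd)
```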

As for production environments booting off mirrored SSDs... that's terribly cost-inefficient. A P422 + 2 SSDs is bordering on £1500? That's another 128-256GB RAM you could be putting in that host. I have over 100 VM hosts in my environment, so £150,000 extra... not trivial money.
 
I wouldn't bother with all those disks for the VMs. Get yourself a single 500-1000 GB SSD (or two in RAID 1 if you're feeling rich) and pop all the VMs on it. Use thin provisioning and you'll struggle to ever fill it (unless you're putting data on the VM OS disks).

For the external storage you have two options: either create one giant RAID 6 array and put appropriately sized VMDKs on it, or carve it up roughly as you would if it were physical (e.g. a bunch of RAID 1 arrays) and present the volumes to the VMs as RDMs (Raw Device Mappings).

For data, I'm a big fan of RDM, because it means your data is portable and transparent (i.e. you can access it without vSphere).

TBH you've totally over-engineered this. I understand the desire to play, but it's going to cost you a fortune in electricity, not to mention the noise of all those disks whining and fans whirring, and the heat it will pump out 24x7; without exaggerating, this will cost you £300+ a year to run. I'd recommend a separate device (like a little two-drive NAS) for your actual real data (photos, files, etc.) that can run 24x7, and then treat this as your play lab, which you can power on and off without impacting your real data storage.
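The £300+ a year figure is easy to sanity-check. This sketch uses assumed numbers (250 W average draw, 15 p/kWh); neither figure is from the thread, so plug in your own hardware's draw and your tariff.

```python
# Back-of-the-envelope running cost for an always-on server.
# Assumed inputs (not from the thread): 250 W average draw and
# 15 pence per kWh; adjust both for your own setup.

def annual_cost_gbp(avg_watts: float, pence_per_kwh: float) -> float:
    """Annual electricity cost in pounds for a 24x7 load."""
    kwh_per_year = avg_watts / 1000 * 24 * 365
    return kwh_per_year * pence_per_kwh / 100

print(f"~£{annual_cost_gbp(250, 15):.2f} per year")  # ~£328.50
```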
 
Useful suggestions - thanks.

I had been thinking of using RAID 6 for the main data store and just passing it as a raw device (as per your RDM comment).

Yes - this is getting slightly OTT. The server is going in a cupboard under the stairs (with PFSense router, switches, NVR, etc) that has an external vent with extractor fan (can also be switched to pump warm air back inside the house during the winter). This is intended to consolidate 4 machines that I have running 24/7 at the moment.

You make a good point about the electric costs - I may end up just keeping the DAS and swapping out the main server for something that isn't so thirsty. I guess running SSDs instead of the 15K drives would help - although the 15K drives are 2.5".
 
Agreed, booting off a pair of expensive enterprise grade SSD would be crazy.

In a large production environment I wouldn't be booting off internal storage either, certainly not USB. The hosts/hypervisors should only provide memory and CPU resources; all storage would be external and shared. However, if you did decide on an internal mirror for the OS, there's no reason to use SSDs; I'd still rather have a pair of enterprise-grade HDDs than a USB stick.
 