vSphere Hypervisor ESXi 24TB Server issue

Hello,

I have a Dell PowerVault NX3100 with 24TB of disk storage. I've installed ESXi and set up a computer that I use with vSphere. vSphere can see 21TB of usable storage.

vSphere connects to the server, but I cannot create a VM with more than 2TB of disk. I wanted to create a 6TB system and install Server 2008 Std 64-bit, then create a second VM from the remaining space.
Why is vSphere only letting me create a virtual disk of up to 2TB?

Any ideas?

If this is a limitation, how do I do what I want? 1x 6TB and 1x 18TB server?
*This is my first time setting up vSphere Hypervisor ESXi.
 
The 2TB ceiling comes from the VMFS block size chosen when the datastore was formatted - each block size has a maximum file size (quick sanity check below):
• 1MB block size – 256GB maximum file size
• 2MB block size – 512GB maximum file size
• 4MB block size – 1024GB maximum file size
• 8MB block size – 2048GB maximum file size
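Those numbers all follow the same pattern: maximum file size = block size × 2^18, with the top end actually being 2TB minus 512 bytes rather than a clean 2TB. A quick Python snippet, just to illustrate the arithmetic:

```python
# VMFS-3 maximum file size per datastore block size.
# Pattern: max file = block size x 2**18; the 8MB case is
# really 2TB minus 512 bytes, not a clean 2TB.
MB = 1024**2
GB = 1024**3

for block_mb in (1, 2, 4, 8):
    max_file = block_mb * MB * 2**18
    if block_mb == 8:
        max_file -= 512  # documented ceiling is 2TB - 512 bytes
    print(f"{block_mb}MB block -> {max_file / GB:,.0f}GB max file")
```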

Not sure you can overcome that. Horrid as it may be, but if an RDM isn't possible then you may need to look at a software-based JBOD/RAID of multiple VMDKs.
 
Our scenario is that we have 2 locations, with 2x PowerVault NX3100 and 8x MD1200 equalling 120TB at each location.

The idea was to run a 'small' VM server of about 4-6TB instead of buying more hardware. I understand what you have said, but is there any solution you can recommend? I'm just trying to utilise the hardware we have.

And no, it wasn't our decision to have these Dell units. Main office got them without even knowing our storage requirements. LOL

I'm not going to mention the other 2x NX3100's as spares (we call them spares as they're useless to us - 24TB is overkill for a small DC). :eek::eek::eek:
 
Create the disks as 2TB each and let 2008 span across them to form the one partition, or however many it is you want.
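The arithmetic for the spanned approach is simple enough. A rough planning sketch - the `vmdk_max_gb` figure here is just illustrative, since the real VMFS-3 ceiling is 2TB minus 512 bytes and you'd normally size a little under the maximum anyway:

```python
import math

# Rough planning calc: how many ~2TB VMDKs to present to the guest
# so Windows can span them into one big volume.
vmdk_max_gb = 2048  # illustrative; real ceiling is 2TB minus 512 bytes

for target_tb in (6, 18):  # the 6TB and 18TB servers from the OP
    target_gb = target_tb * 1024
    disks = math.ceil(target_gb / vmdk_max_gb)
    print(f"{target_tb}TB volume -> {disks} x {vmdk_max_gb}GB VMDKs, "
          f"spanned in the guest OS")
```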
 
RDM - Raw Device Mapping - as in the VM guest OS will see the raw LUN rather than seeing a VMDK within a datastore. This should allow you to get above 2TB.

Or create 3x 2TB virtual disks (sorry, I refer to them as VMDKs) and then, within the guest OS, JBOD or RAID them at the OS level.

I personally always try to avoid JBOD/software RAID.

You may also want to consider whether you'll ever use vMotion and other features. You can easily paint yourself into a corner with this without proper planning.
 
As above, go with RDM - no way would I use the Win 2008 method unless it is desperate (like, very).

This limit isn't just confined to VMware either; it is the same for XenServer (VHD limits).

That's some nice phat storage!
 
I would agree RDM is the way to go, if the option is available to you.

As ecksmen said, JBOD/software RAID isn't really even an option in my eyes. If the RAID is done at the hardware level, there is almost no need that I can see to do it again in the VM, especially at the software level.

From what I can see, the Dell PowerVault NX3100 acts as the hypervisor host and NAS in one?

Looks like a nice bit of kit, especially if you expand and use the option of attaching other SAN disk arrays. Surely that would meet your requirements of 300TB? It would also give you flexibility in terms of using vMotion or similar HA/DR software later down the line?

Just interested and sparking conversation really :)
 
From experience, I would consider multiple 1.8TB RDMs presented to the virtual machine as the default approach, with one major consideration:

What is your backup and restore approach? In terms of product and technology, this will probably decide whether you can go RDM or need to stick with VMFS.

The other option, which might be worth considering, is to mount the SAN storage inside the VM using a software iSCSI initiator ;)
 