Does anyone know if you can use 2TB+ VMware disks on ESXi?

Hi all, I am trying to add a 2TB+ virtual disk to a RHEL 6.3 server running on an ESXi 5.1 host.

The host has 4 x 12TB volumes directly attached; I have already added them as datastores and formatted them with VMFS-5, which it said was required for 2TB+ disk sizes.

However, when I go to add an additional hard drive to my Linux VM, it only lets me choose 2TB max! Grrr....

Not sure if there are prerequisites I am missing, but I have been searching and nothing obvious stands out so far.

Wondering if anyone has an idea on this one?

Cheers,
Sam
 
As DiscoDave has said, a virtual hard disk (VMDK) is still limited to 2TB, so if you want to go above this in your VM you need to assign multiple virtual drives and either span them or software RAID 0 them to present your VM with a "single" large disk (see the sketch below). Or you can use an RDM, as DiscoDave mentioned.
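Something along these lines inside the guest would do the spanning with LVM (a minimal sketch; the device names /dev/sdb-/dev/sdd, the volume names and the mount point are all assumptions):

    # Three 2TB virtual disks appearing in the guest as /dev/sdb, /dev/sdc, /dev/sdd
    pvcreate /dev/sdb /dev/sdc /dev/sdd             # mark each disk as an LVM physical volume
    vgcreate vg_data /dev/sdb /dev/sdc /dev/sdd     # pool them into one volume group
    lvcreate -l 100%FREE -n lv_data vg_data         # one logical volume spanning the lot
    mkfs.ext4 /dev/vg_data/lv_data                  # format as a single ~6TB filesystem
    mount /dev/vg_data/lv_data /mnt/data

An mdadm RAID 0 would look much the same but stripes across the disks instead of concatenating them, which can help throughput; either way, losing one underlying disk loses the whole volume.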

One warning though: if you do decide to create a 2TB disk, don't make it exactly 2TB; make it slightly smaller, otherwise you'll never be able to snapshot the machine. Snapshots carry a small overhead, so while you can create a 2TB virtual disk, the snapshot file plus its overhead will exceed the 2TB limit, and that virtual disk won't be snapshottable.
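If you're creating the disk from the ESXi shell rather than the vSphere Client, something like this leaves that headroom (a sketch; the exact size and the datastore path are just examples):

    # ~2040GB instead of a full 2TB, leaving room for snapshot overhead
    vmkfstools -c 2040G -d zeroedthick /vmfs/volumes/datastore1/myvm/myvm_data.vmdk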
 
Thanks for your replies guys. I am currently running some I/O load-test tools to compare the performance of the method you both described above against Raw Device Mapping (RDM), where I have been able to add a single 12TB disk, but I know this is not a supported method.

Any feedback or opinions on using RDMs?

Ideally I wanted to stay away from using a logical volume manager if I could, but it seems I won't be able to avoid it if I want to remain in a supported set-up.
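For reference, creating the RDM mapping file from the ESXi shell looks roughly like this (a sketch; the naa identifier is a placeholder and the datastore path is an example):

    # Find the LUN's identifier among the raw devices
    ls -l /vmfs/devices/disks/
    # Create a physical-compatibility RDM pointer on a VMFS datastore;
    # physical mode (-z) is what lets the guest see the full 12TB on 5.1,
    # whereas virtual mode (-r) is still capped at 2TB
    vmkfstools -z /vmfs/devices/disks/naa.xxxxxxxxxxxx /vmfs/volumes/datastore1/myvm/rdm_12tb.vmdk

The mapping file then gets attached to the VM as an existing disk.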
 
I know this is not a supported method.

How is this not a supported method? Is this a constraint of a specific piece of software you're using?

Any feedback or opinions on using RDMs?

There are instances and scenarios that benefit from RDM. Personally, I've not come across one yet. May we ask what software you're using that requires this setup?

My opinion on RDM:

- Better for I/O intensive applications.
- Overcome some VMFS limitations (such as size, as you've found out).
- Better clustering support (e.g. Microsoft Cluster Service).
- Wanting to leverage SAN-based tools (snapshots, performance monitoring, management tasks, etc.).
 
Thanks for that DiscoDave. The application I will be running on the server is a backup application, which runs best with as few mount points as possible for the data.

Also, in my preliminary tests the write throughput has been faster using an RDM disk than with LVM grouping 6 x 2TB drives per mount point.
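For anyone wanting a like-for-like comparison, a sequential-write run along these lines on each layout works (a sketch; fio is just one example tool, and the target path is a placeholder):

    # Sequential 1MB writes, bypassing the page cache, against whichever layout is mounted
    fio --name=seqwrite --filename=/mnt/data/testfile --rw=write \
        --bs=1M --size=20G --direct=1 --ioengine=libaio --group_reporting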

Still keeping an open mind. I am not too concerned about whether I will be running an unsupported configuration, as this will be a replica of the production backup server anyway.
 