Help with VMware datastore issue

So first of all, this was a little home project that I set up a number of years ago, all of which was in perfect working order until life got a little busier.

The server, a basic wee HP ML10v2, was set up with VMware so that I could create VMs as and when I pleased. I used to dabble in computer networking and infrastructure support, but those days have long passed.

Initially, I had a single Server 2012 R2 VM which I used as the backend for Plex, and with that, I had assigned a single 4TB drive as the main media store. No RAID (I know, I know).

The server has been powered down for the last 3 years, and upon booting it up, everything was operationally sound, but unfortunately I couldn't log in to my Server 2012 VM as I'd forgotten my credentials (again, I know :rolleyes:).

I therefore created a new, simpler Windows 10 VM in the hope that I could simply migrate the datastore across, but this is where issues began to arise. No matter what way I configured the new VM in ESXi (one hard disk to act as the C: drive, and a second disk, call it D:, to act as the main storage for Plex files), I couldn't get access to the 3.5TB worth of media I had previously accumulated (:() due to 'disk capacity' errors.

After much deliberation, and probably because I'm a complete novice when it comes to working inside ESXi, I decided to bite the bullet: unmount the 4TB drive, reformat it, and create a new datastore. But the **** hit the fan last night. Everything had been working well, in that I appeared to have set up the new VM correctly and had worked tirelessly rebuilding my media store with some 450GB worth of media, but then the VM became unresponsive, and within ESXi I could no longer power it on because it had run out of space, even though there was plenty of room left on the basic 500GB C: drive and loads of room on the bulk 4TB drive.

I'll drop in some screenshots below for an overview of things. Even though there are two datastores, whenever I try to assign the bulk media store to hard disk 2, the vmdk file always defaults to the smaller datastore, and I have a feeling this has something to do with it, even though within Windows 10 I have pooled the drives in order to access the larger 4TB drive (a screenshot of which I'll also include).

I've therefore gone ahead, reformatted all my hard work again, set up a new VM, and pooled the drives in order to create a second E: drive, but I don't want to begin work until I know for sure that I have access to more than 400GB. NB: when creating the media store this time, I said the drive was larger than it actually is, in the mad notion that I would be able to increase capacity should I come up against the same problem. It doesn't sound right to me, but that's why I'm here looking for help before I begin work to rebuild my media library.

 
Hi Nade, thanks for taking the time to help.

No, when I created a new VM last week I created a second virtual hard disk and gave it its full capacity (4TB), but I then ran into issues, probably because it's linked to datastore1. I went with 16TB this time round on a whim, hoping I wouldn't run into the disk space issue again. Makes no sense, I know.

The biggest problem I think I have is what you have noticed: both disks are within the same datastore, "datastore1". For the life of me I don't know how to set up hard disk 2, the 4TB drive, to connect with the second datastore. Even when I select the second datastore when creating a new VM, it always defaults back to datastore1 and thus a maximum capacity of 408GB.

As for the thin provisioning: this is also where I'm unsure, and I simply went with the settings that were in place in the very beginning. Happy to change that if need be.
 
For the larger vmdisk, you will need to either migrate it to the larger datastore or, if there is nothing yet stored on the vmdisk itself, delete it and then create a new disk.

If this is full-blown ESXi (from the screenshots I'm guessing this is 6.x), then when you create/add the new disk you can expand the options and change the placement (it should say something like "store with VM") and select the larger datastore. ESXi will then just create a datastore folder to match the VM name and create the new vmdisk in that folder. You will also see an option to switch the disk to thick provisioned (lazy zeroed) if you wanted to change that at this point as well.
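For anyone more comfortable on the command line, the same provisioning choices can, I believe, also be made with vmkfstools over SSH on the host. This sketch only builds the commands as strings for review (a dry run); the paths and sizes here are assumptions for illustration, not taken from the actual setup:

```shell
# Build (but do not run) the vmkfstools command to create a thick
# (lazy-zeroed) disk directly on a chosen datastore:
create_disk_cmd() {
    size="$1"        # e.g. 3600G
    vmdk_path="$2"   # full path on the target datastore
    printf 'vmkfstools -c %s -d zeroedthick "%s"\n' "$size" "$vmdk_path"
}

# Build the command to inflate an existing thin disk to thick in place
# (vmkfstools -j / --inflatedisk, which as I understand it converts a
# thin vmdk to an eagerly zeroed thick one):
inflate_disk_cmd() {
    printf 'vmkfstools -j "%s"\n' "$1"
}

# Example paths below are hypothetical:
create_disk_cmd 3600G "/vmfs/volumes/datastore2/win10/plex-media.vmdk"
inflate_disk_cmd "/vmfs/volumes/datastore2/win10/plex-media.vmdk"
```

Review the printed commands, then paste them into an SSH session on the host once the paths match your own datastore layout.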
There should be loads of documentation online with comparisons between thin- and thick-provisioned vmdisks. There's nothing wrong with using thin, but you have clearly over-allocated compared to how much actual storage space you have available. So if you stick with thin at the current size, you'll run the risk of the physical datastore running out of space at the point where the VM writes out to the disks at ~3.6TB.
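The over-allocation risk described above is just arithmetic, sketched here as a quick check (the 3600GB and 16000GB figures are assumptions based on the sizes mentioned in this thread):

```shell
# Flag when the total provisioned size of thin disks exceeds the
# physical capacity of the datastore backing them. All figures in GB.
check_overcommit() {
    capacity_gb="$1"; shift
    total=0
    for disk_gb in "$@"; do
        total=$((total + disk_gb))
    done
    if [ "$total" -gt "$capacity_gb" ]; then
        echo "overcommitted: ${total}GB provisioned on ${capacity_gb}GB"
    else
        echo "ok: ${total}GB provisioned on ${capacity_gb}GB"
    fi
}

# Example with the sizes from this thread: ~3.6TB usable datastore,
# one 16TB thin-provisioned vmdk:
check_overcommit 3600 16000
```

The "overcommitted" case is exactly the situation where the guest believes it has free space but the datastore underneath can run dry.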

Evening Nade, sorry for only getting back to you now.

Ok, so I went ahead and deleted the new datastore as it was still empty. Just quickly, I did try going into the datastore browser beforehand and moving the vmdk file associated with the larger drive, but nothing happened.

As for setting up a new datastore: that was all relatively simple; however, it doesn't give me the option to choose thick or thin, and once created, it sets the disk up as thin by default.

After looking at various settings within ESXi, I cannot see how to change this. Would I need to change anything within the server BIOS regarding this, or am I getting far too ahead of myself?


 
I'm unfamiliar with that mate, but I'll take a look at it here now.

I should have noted that what I like about having the Windows 10 VM is that I:

A) Have plans on installing another Blu-ray RW drive so that I can rip my current and future Blu-ray collection to my server (some, not all)
B) Can download media straight to the media store.
 
Sorry guys, been away for a few days and only getting back to this.

Suffice to say I'm at a complete dead end. I cannot for the life of me get this to work. At one stage I decided to unmount the 4TB drive and then couldn't figure out how to get it back. Thankfully I did.

Anyway, I've even tried moving the Windows 10 ISO to the larger datastore and just making a single VM with a single 3.5 to 4TB drive, but for some reason, every time the setup gets to 71%, it crashes.

What's the simplest solution here, guys, without having to resort to purchasing a plain old NAS?

I was even thinking of breaking out an old PC, or even a half-decent thin client that's gathering dust, connecting up the 4TB drive and then remoting on to it in order to set up Plex, but that seems nonsensical knowing I have the capability of creating VMs in minutes, all of which was working 100% previously.

I know I'm doing something wrong, but by **** I can't see what :D
 
Hi mate, I've been working with ESXi for years so hopefully I can help you out a bit here. From what I can gather, you have a server with 2 hard disks installed, and each of these hard disks has one virtual disk stored on it. These virtual disks can be set to a larger size than the underlying physical disk if they are thin provisioned. This allows for over-allocating the storage, but it can cause problems if the actual data size gets near the underlying physical disk size, as that is a hard limit. Windows (or whatever OS you have installed) cannot see the physical disk and believes it has however much space you have said the virtual disk is, which appears to be what has happened here.

If the virtual disk files still exist, then potentially you can get the data out of them. First off, enable TSM-SSH on the ESXi host and then open an SSH session to it. Once connected, run ls /vmfs/volumes, which will show you the volumes you have on your machine. There should be 2 volumes that match your datastore names (from what I can tell these are Plex Media Storage and datastore1). Can you please run ls -l on each of these (ls -l /vmfs/volumes/datastore1 and ls -l "/vmfs/volumes/Plex Media Storage") and post the results here, so I can get an idea of what you currently have.

What may be the safest thing would be to manually create the Plex Media volume using the command line and get it to fill the physical disk; this way, when it is running out of space, this will be visible in the guest OS as well. Note: as this is a Linux-based OS, everything is case sensitive, so bear that in mind.
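The inspection steps above could be wrapped in a couple of small helpers like this (a sketch; the datastore names are assumptions from this thread and are case sensitive, and the root parameter exists only so the helpers can be exercised somewhere other than the host):

```shell
# List the datastores mounted on the host. On ESXi the root defaults to
# /vmfs/volumes, which holds one entry per datastore (plus UUID symlinks).
list_datastores() {
    root="${1:-/vmfs/volumes}"
    ls "$root"
}

# Long-list the files inside one datastore; the sizes of the .vmdk and
# .vmx files are what we want to see here.
list_datastore_files() {
    root="${1:-/vmfs/volumes}"
    ls -l "$root/$2"
}

# On the host you would run:
#   list_datastores
#   list_datastore_files /vmfs/volumes datastore1
#   list_datastore_files /vmfs/volumes "Plex Media Storage"
```

Paste the output of all three into the thread and it should be clear where each vmdk actually lives.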

Cheers

Hi Chris, thanks for replying.

So, a couple of updates since my last message. Unfortunately, the 4TB drive has since failed. I don't think this was the problem all along, just sheer bad luck.

Whilst I still have it up and running on the main 500GB drive, I'm planning on changing things up during/after Christmas, as 500GB simply isn't much to play with for what I want.

As easy as it would be to simply populate a NAS with a couple of large drives, it would be a shame to discard the ProLiant given that it's in good working order.

Therefore, a couple of plans I'm thinking of:

Plan A:

Clone the existing 500GB boot drive on to a new 500GB SSD

Purchase 2 x 4TB SSDs and go RAID 0 again on the basis that SSDs should provide a little more reliability. In time, I could then purchase a 3rd and 4th.


Plan B:

Clone the existing 500GB boot drive on to a new 500GB SSD

Purchase 1 x 4TB SSD and then a single 4TB external hard drive to use as a backup solution.

Plan C:

Clone the existing 500GB boot drive on to a new 500GB SSD

Purchase 2 x 8TB SATA drives and implement RAID 1, so that if a disk were to fail, I wouldn't have to start all over again. No backup.


Plan D:


Clone the existing 500GB boot drive on to a new 500GB SSD

Purchase 1 x 8TB SATA drive and 1 x 8TB external hard drive. Create a single W10 machine with 8TB capacity and then configure the backup to go to the 8TB external drive. In the event the internal 8TB were to fail, I could utilise VMware to create a new VM and then migrate my media back over.

All the while I'm typing this, I'm not really thinking of what could happen to the main 500GB SSD with VMware; therefore, I'm all ears as to additional "plans".

Basically, I want maximum storage for as little cost as possible. The only reason I'm thinking of SSDs is that they might, or should, be less prone to failure. Not sure if they would add to Plex performance, but happy to be corrected on that front.
 