Help! - WD Red 3TB showing 0.0b / -128

Please help, guys - I've spent two days on this and I'm just pulling my hair out now.

Setup:

HP MicroServer N54L
1 x 120GB SSD (OS)
2 x WD Red 3TB (Storage) - RDM pass-through
ESXi 5.1
Windows Server 2012 Essentials

So I've just purchased those two hard drives, and no matter what I do I cannot get WSE 2012 to pick them up properly.
When I mount the hardware RDMs in WSE, it sees them in Disk Management but won't let me initialise them or do anything else, with error messages such as "The specified disk is not convertible because the size is less than the minimum size required for GPT disks". DiskPart and every other tool under the sun come back with the same sort of responses.
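
For reference, this is roughly the sort of thing I was trying in DiskPart (the disk number here is just an example - yours will differ):

    DISKPART> list disk
    DISKPART> select disk 1
    DISKPART> clean
    DISKPART> convert gpt

    Virtual Disk Service error:
    The specified disk is not convertible because the size is less
    than the minimum size required for GPT disks.

Same story whichever way I come at it.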

Plugging them both into my separate Win7 64-bit machine, they work perfectly. Convert them to GPT, format to NTFS, and wonderful - absolutely fine straight away. So OK, plug them back into the server...

Now the server sees the two drives and the two partitions I made, and everything works fine - apart from the fact that Disk Management shows the disks as "-128", and when I try to delete those partitions to set up RAID (which is my end goal here), it throws another hissy fit like the one above.

I've literally given up; I've Googled this to death. There are only two slight mentions of anything like this on OcUK: one is a dead end with a passing reference to the -128; the other is a current thread in which the poster seems to be having a very similar sort of issue with the 0.0b. Someone there has given a link which describes the problem exactly, but not for my machine (I think).

Also, HP have this driver which I thought might help, but it refuses to install in a VM.

Please, please help. Thank you.
 
It's not an HP issue, it's an issue with ESXi and 3TB RDMs (and larger, I'd imagine). I don't know if you'll be able to do what you want to, but I'm able to use the drives as normal volumes by formatting them on another system first, before doing the RDM (as you've discovered). Make sure you use the vml address of the drive when you create the RDM, as that can sometimes help.
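
From the ESXi shell that looks something like the below - the vml ID, datastore and file names are just examples; list /vmfs/devices/disks/ to find your drive's actual vml address:

    # find the vml address of the physical disk
    ls -l /vmfs/devices/disks/
    # create a physical compatibility (pass-through) RDM pointer using the vml path
    vmkfstools -z /vmfs/devices/disks/vml.0100000000XXXXXXXX /vmfs/volumes/datastore1/WSE/wdred1-rdm.vmdk

Then attach the resulting .vmdk to the VM as an existing disk.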

Having been using the volume on the drives listed as -128 bytes for some time, I can confirm they work absolutely fine, and even the SMART data comes through OK.

If you absolutely need them in a Windows RAID configuration, you could try setting it up on a different system and then hooking them up to your VM. Windows should recognise the RAID volume.
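
Roughly, on the other system it would be something like this in DiskPart (disk numbers are illustrative, and Windows software RAID needs the disks converted to dynamic first):

    DISKPART> select disk 1
    DISKPART> convert dynamic
    DISKPART> select disk 2
    DISKPART> convert dynamic
    DISKPART> create volume mirror disk=1,2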
 
I believe you've hit up against the 2TB RDM limit. If you try creating one with a size of 2TB minus 512B and it works, then you'll have confirmed it.
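
(For reference, 2TB minus 512B works out to 4,294,967,295 sectors of 512 bytes - 2,199,023,255,040 bytes - which is the largest sector count that fits in a 32-bit field, and presumably where that odd-looking limit comes from.)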

I also believe that the newly released ESXi 5.5 has raised this limit massively (to 62TB, if I remember rightly), so you might also try it and see if that fixes the issue.
 
Having been using the volume on the drives listed as -128 bytes for some time, I can confirm they work absolutely fine, and even the SMART data comes through OK.

If you absolutely need them in a Windows RAID configuration, you could try setting it up on a different system and then hooking them up to your VM. Windows should recognise the RAID volume.

Nice to hear that they do work properly like this. And yes, the latter is the next thing on my list to try, though I probably won't get a chance until Sunday now.

I believe you've hit up against the 2TB RDM limit. If you try creating one with a size of 2TB minus 512B and it works, then you'll have confirmed it.

Some confusion there, I think.

Unless I'm mistaken... nope, just checked, I'm correct:

A single pass-through (physical) RDM has a maximum size of 60TB.

A single .vmdk file has a limit of 2TB.

In this instance I am using RDMs, so that limit isn't a problem. Splitting each disk in half using .vmdks would technically work, but the speed implications of that in WSE make it unfeasible.

Yes, try 5.5 and please report back as others have hit the same problem.

To be honest this is the first I've heard about 5.5.

Will do some reading over the weekend.

However, I'm still not quite sure whether to blame ESXi or WSE.
 
Yes, created with the -z switch, not -r.
In that case, you might have to give vSphere 5.5 a try. The only problem there is that to get those juicy new maximums you will need VM version 10 (the virtual hardware version of the VM), which can only be managed from the web client (not the GUI client), which makes deploying 5.5 a lot trickier.

VMware have really made a mess of where they're headed with the web client.
 
It's tricky, because there are plenty of places that refer to Pass-Through RDM up to 60 TB on 5.1. Hence my question about physical compatibility mode.

Ah yeah, I see what you're saying. I saw the reference on that link to virtual compatibility and presumed that was the limit that applied when passing through to a virtual machine.
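
For anyone else following along: the two compatibility modes correspond to different vmkfstools switches (mine were created with -z); the vml ID and file names below are just placeholders:

    vmkfstools -z /vmfs/devices/disks/vml.<id> rdm-physical.vmdk   # physical / pass-through mode
    vmkfstools -r /vmfs/devices/disks/vml.<id> rdm-virtual.vmdk    # virtual compatibility mode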
 
In that case, you might have to give vSphere 5.5 a try. The only problem there is that to get those juicy new maximums you will need VM version 10 (the virtual hardware version of the VM), which can only be managed from the web client (not the GUI client), which makes deploying 5.5 a lot trickier.

VMware have really made a mess of where they're headed with the web client.

OK, will try this one night this week...

Sounds like a headache in the making!
 
In that case, you might have to give vSphere 5.5 a try. The only problem there is that to get those juicy new maximums you will need VM version 10 (the virtual hardware version of the VM), which can only be managed from the web client (not the GUI client), which makes deploying 5.5 a lot trickier.

VMware have really made a mess of where they're headed with the web client.

Not true - I've updated a VM to level 10 using the GUI client.
 
Yes, you can upgrade to v10 hardware, but after that you can't manage the VM except through the web client, which is only available as part of vCenter, and that isn't free.
(Apparently you can bodge it and use VMware Workstation 10 to do it, though.)

So for a home user with no vCenter server, once a VM is upgraded to v10 you can no longer add disks, memory, CPUs etc. That'll learn you to use their free stuff for free, you swines!
By extension, anyone who does have a virtual vCenter server (as we do) can't upgrade it to hardware v10, as you'd then never be able to add CPUs or memory, or change anything else that requires the VM to be shut down.

Honestly, they've really lost the plot on this one. All the previous unpopular changes to licensing they've made over the years have at least been driven by making more money; this one is purely to force people into adopting the web client.
 
Honestly, they've really lost the plot on this one. All the previous unpopular changes to licensing they've made over the years have at least been driven by making more money; this one is purely to force people into adopting the web client.
Completely agree.
 
I'd like to know how one is supposed to deploy a new environment without the use of the GUI client?
 
I'd like to know how one is supposed to deploy a new environment without the use of the GUI client?

When you create the VM, you get to choose the hardware version. There's nothing to stop you creating a new VM at v8 or v9 for your vCenter, then upgrading it once you've got the web client sorted out. In fact, on my currently unpatched 5.1 environment I can't create a v9 VM off the bat; v8 is as high as I can go, followed by a hardware upgrade to v9.
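
And if you do later need to bump a VM to v10 without the web client, I believe you can also do it from the ESXi shell - untested by me, and the VM ID below is just an example (get yours from getallvms):

    # list VM IDs
    vim-cmd vmsvc/getallvms
    # power the VM off, then upgrade VM 12 to virtual hardware version 10
    vim-cmd vmsvc/upgrade 12 vmx-10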
 