The (un)Official VMware ESXi thread.

Has anyone tried connecting SAN storage to ESXi using RDM? Any good/bad points to consider?

I have a 20Gbps InfiniBand link between a Solaris SAN and my ESXi 5.1 (free) server. I would prefer to connect via RDM so the OS and files sit on the disks in their native format rather than inside a VMFS datastore with its 2TB file limit. I have tried this with SRP (SCSI RDMA Protocol) and used RDM to mount the target, and it works fine within Windows 2012, but I am getting very slow speeds (10MB/s) copying files from my workstation to the server. I am also seeing the same speeds with an SRP target mounted on the ESXi server (non-RDM) and formatted to VMFS. I half suspect it is an issue with my workstation, as boot times are now very slow for a machine based on an Intel 520 SSD, but I have not yet found the time to really track down the problem.

So, the question...

Are there any speed benefits / issues mounting a SAN share (FC / iSCSI / IB etc.) using RDM rather than just mounting it as a standard drive and formatting it to VMFS?

Thanks
RB
 
I'm going to knock up a small lab machine to test a few things out.

As ESXi is bare metal, how does it see a RAID controller? Are the drivers built in for things like onboard motherboard RAID controllers, or do I need to look for an ESXi driver?

Further to the above, I have 2 x 500GB disks that I'm going to use in RAID 1. Do I need a separate boot volume, and if so, is a 4GB USB 2.0 stick OK for this?

Last question: I've got a couple of VMware Workstation 9 VMs. Can I import them, and is there an easy way to bring them over? (Network or USB HD?)

Thanks! :)
 
1) You would need to check against the VMware Hardware Compatibility Guide.
I would also recommend checking your NIC against the list as well; that tends to be the thing many people forget and rue later (there's a quick script at the end of this post that lists each NIC and the driver ESXi has bound to it).

2) No, but a 4GB stick is perfect - I do exactly the same thing.

3) Yep, use VMware Converter.
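
If it helps, below is a minimal pyVmomi sketch (run from any machine with Python and the pyvmomi package installed, not on the host itself) that lists the physical NICs an ESXi host sees, the driver ESXi has bound to each, and the negotiated link speed, which makes cross-checking against the compatibility guide a bit quicker. The host name and credentials are placeholders for your own lab box, and certificate checking is skipped on the assumption of a self-signed cert.

```python
# Sketch: list each physical NIC on an ESXi host with its bound driver and link
# speed, via the vSphere API (pyVmomi). Placeholder host name and credentials.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab host with a self-signed cert
si = SmartConnect(host="esxi.lab.local", user="root",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    hosts = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in hosts.view:
        print(host.name)
        # config.network.pnic holds the physical NICs (vmnic0, vmnic1, ...).
        for pnic in host.config.network.pnic:
            link = ("%d Mbps" % pnic.linkSpeed.speedMb) if pnic.linkSpeed else "link down"
            print("  %-8s driver=%-10s mac=%s  %s"
                  % (pnic.device, pnic.driver, pnic.mac, link))
    hosts.Destroy()
finally:
    Disconnect(si)
```

If an onboard NIC doesn't appear in that list at all, ESXi almost certainly has no inbox driver for it, and you'll most likely need an add-on driver or a customised install image.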
 
So, the question...

Are there any speed benefits / issues mounting a SAN share (FC / iSCSI / IB etc.) using RDM rather than just mounting it as a standard drive and formatting it to VMFS?

Thanks
RB

RDMs *may* have a minute percentage reduction in latency for things like SQL, but it needs to be tested in your environment.

RDMs cannot be used in conjunction with FT.

If using vMotion/DRS/Storage vMotion, the RDM must be accessible to all hosts (much like shared storage).


VMFS-3 had a 2TB limit, VMFS-5 has a 64TB limit.
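
If anyone wants to see what they actually have, here's a rough pyVmomi sketch that walks the VMs and reports whether each virtual disk is an RDM (and in which compatibility mode) or an ordinary VMDK, then prints the VMFS version and capacity of each datastore. Host name and credentials are placeholders; treat it as a starting point, not a polished tool.

```python
# Sketch: classify each VM's virtual disks as RDM or VMDK, then show the VMFS
# version of each datastore. Placeholder host name and credentials.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab host with a self-signed cert
si = SmartConnect(host="esxi.lab.local", user="root",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()

    vms = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in vms.view:
        if vm.config is None:
            continue
        for dev in vm.config.hardware.device:
            if not isinstance(dev, vim.vm.device.VirtualDisk):
                continue
            if isinstance(dev.backing,
                          vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo):
                kind = "RDM (%s)" % dev.backing.compatibilityMode   # physicalMode / virtualMode
            else:
                kind = "VMDK"
            print("%s: %s  %d GB  %s" % (vm.name, dev.deviceInfo.label,
                                         dev.capacityInKB // (1024 * 1024), kind))
    vms.Destroy()

    dss = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in dss.view:
        if ds.summary.type == "VMFS":
            print("%s: VMFS %s, %.0f GB capacity"
                  % (ds.name, ds.info.vmfs.version, ds.summary.capacity / 1024.0 ** 3))
    dss.Destroy()
finally:
    Disconnect(si)
```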
 
RDMs *may* have a minute percentage reduction in latency for things like SQL, but it needs to be tested in your environment.

RDMs cannot be used in conjunction with FT.

If using vMotion/DRS/Storage vMotion, the RDM must be accessible to all hosts (much like shared storage).


VMFS-3 had a 2TB limit, VMFS-5 has a 64TB limit.

Thanks for the feedback. This is for a home setup, so I'm using ESXi Hypervisor (i.e. the free version).

I tracked the slow speeds down to some sort of network issue. Bypassing one of my routers corrected the problem, but I have not yet worked out what on the router / cables is causing a drop to 100Mbps.

RB
 
Thanks for the feedback. This is for a home setup, so I'm using ESXi Hypervisor (i.e. the free version).

RB

No worries. I went on the 5.1 ICM course last month and enquired about SQL performance; that's why I remember it!

I'm trying to study for the VCP5-DCV currently, but between work and the baby, I'm struggling to find time.

For anyone that's studying or needs a home test lab setup without a room full of servers, this is a great blog that can relatively easily be modified for 5.1:

http://boerlowie.wordpress.com/2011/11/30/building-the-ultimate-vsphere-lab-part-1-the-story/
 
I don't get who is buying SuperMicro servers. The most important aspect of a server for a business is the support. I guess if you are buying SuperMicro servers from some reseller that also provides 24x7 support, then sure, but again, why? HP/Dell/IBM have refined their support model down to a pretty fine art, and it's not like SuperMicro is cheap!? I just don't get it.

To answer the original question: if it is for work, and downtime matters, do not buy a white box; always go for a supported solution.

It's a little ironic that you argue so strongly that manufacturer hardware support and SLAs are critical in a virtualisation thread; sure, the kit has to be on the HCL to be supported by VMware, but otherwise the physical hardware is considered throwaway when virtualising, and hardware abstraction is half the point.

If uptime is key, don't run on a single host, and have enough capacity to run operations from the other machines. Other than the HCL, there are lots of reasons to use commodity hardware.
 
It's a little ironic that you argue so strongly that manufacturer hardware support and SLAs are critical in a virtualisation thread; sure, the kit has to be on the HCL to be supported by VMware, but otherwise the physical hardware is considered throwaway when virtualising, and hardware abstraction is half the point.

If uptime is key, don't run on a single host, and have enough capacity to run operations from the other machines. Other than the HCL, there are lots of reasons to use commodity hardware.
I don't follow the logic. A server is a server, and every time a part fails, the last thing I need in my day is to have to sort out a replacement. That's why support agreements exist.

It's not about the uptime (my clients run systems with multiple levels of redundancy built in because uptime is so critical); it's about the staff time. When you have hundreds or thousands of servers, like most places I've worked, there are parts failing literally every single day, and it's not unusual for multiple things to break in one day. Sometimes it can be almost a full-time job just dealing with the vendors to get them to come in and fix the problems, so anything more hands-on (i.e. sourcing your own spare parts) would be cost-prohibitive from a staffing perspective. And why bother? It's a fiercely competitive market, where the big players have squeezed the fat out of the entire chain down to a very fine level.

Also, even if you have multiple levels of redundancy (your example of running multiple ESXi hosts), a dead server means reduced redundancy while the hardware is sorted. You may not have downtime, but you are more exposed to the risk of a second or third failure, so the quicker you can get that server back up and running, the better. And don't forget that servers have all sorts of redundancy built in anyway, so it's rare for an entire server to just drop dead. It's all about risk levels and risk management.

IT these days is a pretty mature business, and hardware warranties are definitely not something that is ever up for debate, not in the types of industries that I work in, anyway.
 
Has anyone tried connecting SAN storage to ESXi using RDM? Any good/bad points to consider?

Yep, we have a few VM servers with RDMs, but only our business-critical boxes that are running MSCS.
I don't have any performance stats of my own, but everything I've read says that the performance benefits are absolutely minimal.

One seemingly little-known thing with RDMs, which may also be behind your slow boot, is that you have to set the perennially reserved flag for any RDM LUNs (there's a command sketch at the end of this post).
When we upgraded to ESXi 5.1 from 4.0, boot time went from under 10 minutes to 45 minutes before we put this fix in place.

Another thing to note if using RDMs and clustering is that multipathing is not supported, so you do lose some redundancy and potentially performance too.

TLDR: There really is little reason to use RDMs unless you are doing Microsoft clustering across multiple ESXi hosts.
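
For reference, the perennially reserved flag is set per device with esxcli, and it has to be done on every host that has visibility of the RDM LUNs. Below is a rough sketch, runnable from the ESXi shell (which ships a Python interpreter); the naa IDs are placeholders for your own MSCS RDM devices, and it's worth reading VMware's KB article on slow boot with MSCS RDMs for the exact steps for your ESXi version.

```python
# Sketch: mark a list of RDM LUNs as perennially reserved via esxcli, run from
# the ESXi shell. The naa IDs below are placeholders.
import subprocess

RDM_LUNS = [
    "naa.600508b1001c0123456789abcdef0000",   # placeholder device IDs
    "naa.600508b1001c0123456789abcdef0001",
]

for dev in RDM_LUNS:
    subprocess.check_call([
        "esxcli", "storage", "core", "device", "setconfig",
        "-d", dev, "--perennially-reserved=true",
    ])
    # Print the device details so "Is Perennially Reserved: true" can be confirmed.
    subprocess.check_call(["esxcli", "storage", "core", "device", "list", "-d", dev])
```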
 
Hi all,

I've been redirected to this thread after starting one requesting help with setting up a VMware rig at home, so I'm just going to copy/paste my original post here:

Hoping someone can advise me on the easiest option here. I want to set up a VMware test environment for learning and studying for my VCP5-DCV exam. I have a reasonable PC with plenty of RAM. I'd like to have both a home test environment and maybe a guest or two for actual home use.

Would I be best running the free ESXi 5.1 on the physical host and running a nested environment within it, or chucking on 2008 R2 and launching the VM environment via VMware Player? Is it possible to run a nested setup via VMware Player?

Advice please?
 
I'm new to all this ESXi stuff, but after a quick read through this thread, it may be the answer I need.

Curious about a few things:

How many NICs are needed?
Can two OSes be run simultaneously, for example Windows 8 and Server?
Can the HDDs be configured as shared drives independent of the OS? (SAN, maybe)

I'm thinking of changing my HP MicroServer from a desktop to a desktop/server, as I want to run a media server but be able to use the same hardware as a Windows 8 desktop...

Any help would be great:D
 
How many NICs are needed?
Just one, as long as it's on the compatibility list on VMware's website.

Can two OSes be run simultaneously, for example Windows 8 and Server?
Yep, that's exactly what virtualisation does: it allows you to use the same hardware for multiple machines simultaneously.

Can the HDDs be configured as shared drives independent of the OS? (SAN, maybe)
As in use a single volume (with the same files etc.) on two separate VMs? You can, but I can see it ending in a mess.
You'd have to explain your requirements more, but I suspect you would just want a server VM (probably Linux) with a file share that just stays on and is available to all other machines on your network.

And replying to both your questions and the previous poster's: ESXi is headless. If you connect a monitor to it, all you see is the server name and IP, plus some options for configuration; you cannot output the screen of a guest to a directly connected display.
To view a guest's console you install the vSphere Client on another machine and use that.

So if it's your only machine, then ESXi is not for you. You would need to look at something like VMware Workstation.
 
I think most enterprise customers would settle for SSO not being a total mess, especially when upgrading.
A few nice bits have been added: increased VMDK sizes, Round Robin on RDMs, and SRM improvements (there's a quick Round Robin example at the end of this post).

As ever, I'm waiting for vSphere 5.5 Update 1 before I go near it.
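
For anyone who wants to play with the Round Robin bit, the path selection policy is set per device with esxcli. A rough sketch below, again runnable from the ESXi shell's Python interpreter; the naa ID is a placeholder, and whether Round Robin is the right policy depends on your array vendor's guidance (and, for RDMs, on being on 5.5 as mentioned above).

```python
# Sketch: switch one device's path selection policy to Round Robin via esxcli,
# run from the ESXi shell. The naa ID below is a placeholder.
import subprocess

DEVICE = "naa.600508b1001c0123456789abcdef0002"   # placeholder device ID

subprocess.check_call([
    "esxcli", "storage", "nmp", "device", "set",
    "--device", DEVICE, "--psp", "VMW_PSP_RR",
])
# Show the device's current PSP and its paths for confirmation.
subprocess.check_call(["esxcli", "storage", "nmp", "device", "list", "-d", DEVICE])
```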
 
How many NICs are needed?
Just one, as long as it's on the compatibility list on VMware's website.

Can two OSes be run simultaneously, for example Windows 8 and Server?
Yep, that's exactly what virtualisation does: it allows you to use the same hardware for multiple machines simultaneously.

Can the HDDs be configured as shared drives independent of the OS? (SAN, maybe)
As in use a single volume (with the same files etc.) on two separate VMs? You can, but I can see it ending in a mess.
You'd have to explain your requirements more, but I suspect you would just want a server VM (probably Linux) with a file share that just stays on and is available to all other machines on your network.

And replying to both your questions and the previous poster's: ESXi is headless. If you connect a monitor to it, all you see is the server name and IP, plus some options for configuration; you cannot output the screen of a guest to a directly connected display.
To view a guest's console you install the vSphere Client on another machine and use that.

So if it's your only machine, then ESXi is not for you. You would need to look at something like VMware Workstation.
Thanks for that, that's a huge help.


The thing for me is that I want to run Windows 8 via a monitor, with the server in the background for media and storage, but I'll need to use the Windows 8 side of things as a regular PC.

I'll need to be able to turn on the MicroServer and have both start automatically, booting to Windows 8 as normal; anything else would be too much for my wife to understand... lol

Storage-wise, I was thinking of using a 250GB drive split 50/50 for the two OSes and 3TB+ for the storage and backups.
 
What other devices do you have that are actually consuming the media you want to share?

There isn't anything stopping you from just creating a fileshare on Windows 8 and having that available, or installing Plex Media Server and having that run as a service in the background.
In this case I don't think ESXi, or indeed any virtualisation, is what you need to best achieve your aim.
 