Can you list a few example resellers please.
Thanks
That may be breaking the forum rules. Might want to check with a mod first.
Correct, as it may conflict with the business interests of the shop.
So, the question...
Are there any speed benefits or issues when mounting a SAN share (FC / iSCSI / IB, etc.) using RDM rather than just mounting it as a standard drive and formatting it as VMFS?
Thanks
RB
RDMs *may* have a minute percentage reduction in latency for things like SQL, but it needs to be tested in your environment.
RDMs cannot be used in conjunction with FT.
If using vMotion/DRS/Storage vMotion, the RDM must be accessible to all hosts (much like shared storage).
VMFS-3 had a 2TB limit, VMFS-5 has a 64TB limit.
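For anyone who wants to try it, an RDM is just a mapping file created on a VMFS datastore that points at the raw LUN. As a rough sketch, it can be created from the ESXi shell with vmkfstools; the device identifier and datastore paths below are placeholders, so substitute values from your own environment (you can list devices with `esxcli storage core device list`):

```shell
# Virtual compatibility mode (-r): the RDM behaves like a normal virtual disk,
# so snapshots and most VM features still work.
vmkfstools -r /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx \
  /vmfs/volumes/datastore1/myvm/myvm_rdm.vmdk

# Physical compatibility mode (-z): SCSI commands pass through to the LUN,
# but VM snapshots are not supported (and, as noted above, RDMs can't be
# combined with FT either way).
vmkfstools -z /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx \
  /vmfs/volumes/datastore1/myvm/myvm_rdm_pt.vmdk
```

You then attach the resulting .vmdk to the VM like any other existing disk.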
Thanks for the feedback. This is for a home setup so using ESXi Hypervisor (i.e the free version).
RB
I don't get who is buying SuperMicro servers. The most important aspect of a server for a business is the support. I guess if you are buying SuperMicro servers from some reseller that also provides 24x7 support, then sure, but again, why? HP/Dell/IBM have refined their support model down to a pretty fine art, and it's not like SuperMicro is cheap!? I just don't get it.
To answer the original question: if it is for work, and downtime matters, do not buy a white box; always go for a supported solution.
I don't follow the logic. A server is a server, and every time a part fails, the last thing I need in my day is to have to sort out a replacement. That's why support agreements exist.
It's a little ironic that you argue so strongly that manufacturer hardware support and SLAs are critical in a virtualisation thread. Sure, the kit has to be on the HCL to be supported by VMware, but otherwise the physical hardware is considered throwaway when virtualising; hardware abstraction is half the point.
If uptime is key, don't run on a single host and have enough capacity to run operations from other machines. Other than the HCL there's lots of reasons to use commodity hardware.
Has anyone tried connecting SAN storage to ESXi using RDM? Any good / bad points to consider?
Thanks for that, that's a huge help. How many NICs are needed?
Just one, as long as it's on the compatibility list on VMware's website.
Can two OS's be run simultaneously, for example Windows 8 and Server?
Yep, that's exactly what virtualisation does: it lets you use the same hardware to run multiple machines simultaneously.
Can the HDD's be configured as shared drives independent of OS ? (SAN, maybe)
As in using a single volume (with the same files etc.) on two separate VMs? You can, but I can see it ending in a mess.
You'd have to explain your requirements more, but I suspect you would just want a server VM (Probably Linux) with a file share that just stays on and is available to all other machines on your network.
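If that's the goal, a minimal sketch of the file-server-VM approach would be a Linux guest running Samba. Package names, paths, and the share/user names below are all assumptions for illustration, so adjust for your distro:

```shell
# Sketch: turn a Linux VM into a simple file server with Samba.
# Add a share definition to /etc/samba/smb.conf, e.g.:
#
#   [shared]
#       path = /srv/share
#       read only = no
#       guest ok = no
#
# Then create the directory, enable a Samba user, and restart the service:
mkdir -p /srv/share
smbpasswd -a youruser        # 'youruser' is a placeholder account name
systemctl restart smbd
```

Every other machine (Windows or otherwise) then maps `\\<server>\shared` as a network drive, which avoids the mess of two VMs writing to one volume directly.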
And replying to both your and the previous poster's questions: ESXi is headless. If you connect a monitor to it, all you see is the server name and IP plus some configuration options; you cannot output the screen of a guest to a directly connected display.
To view the output you install the vSphere client on another machine and use that.
So if it's your only machine, then ESXi is not for you. You'd need to look at something like VMware Workstation instead.