The (un)Official VMWare ESXi thread.

Surely this is exactly what raw disk mapping (RDM) is for? OK, you need to do it over SSH, but it's not that tricky to set up. This guide should get you started, but use "vmkfstools -z" instead of "vmkfstools -r" (see the end of this thread).
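If it helps, a rough sketch of the commands from the ESXi shell (the device ID, datastore and file names below are just examples, substitute your own):

# list the raw disks to find the identifier (t10./naa./mpx.) of the drive to pass through
ls -l /vmfs/devices/disks/

# create a physical-compatibility RDM pointer file; note it has to live on a datastore,
# not on the raw drive you are mapping
vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK_ID /vmfs/volumes/datastore1/rdms/media-disk1-rdmp.vmdk

Then add the resulting .vmdk to the VM as an existing hard disk.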

I use this process for my media server, passing through numerous 2TB and 3TB drives. Works well for me :)

I set this up with a new Intel 520 120GB SSD for my NewzNab VM. After it initially not working at all, I worked out that you need to create the mapping file on another drive, not the drive you are mapping, and then it all worked. The drive was bare and unformatted so I could not verify that existing data would remain intact, but the only format I needed to do was within the VM's OS; ESXi did not ask me about formatting at all. Seems to be working fine.

[RXP]Andy;23422083 said:
VMware have released a 5.1 update. KB: 2035775

I wonder if this fixes the broken pass through problem.

EDIT: Nope, it's not fixed my USB PCIe passthrough.

The fix for passthrough was meant to be included in the 2nd update, not the first. I completely forgot about the PCIe passthrough issues, passed a NIC chipset to a VM and got the PSOD. I finally twigged what was going on and unmapped it, and the server was fine. Luckily this was the only VM I had running at the time and I had not yet installed the OS.

Strangely the following is listed as being fixed in this patch;
PR924167: An ESXi host stops responding and displays a purple diagnostic screen when you attempt to power on a virtual machine that uses a pass-through PCI device. For more information, see KB 2039030.

RB
 
Storage chipsets and NIC chipsets tend to be the biggest issues.

At worst you could get an Intel CT (PCIe) or GT (PCI) NIC for a few pounds to sort that side out.

RB
 
May be OK if I get another NIC.

http://t.co/663VH269

Assuming this is a dedicated ESXi box, I would get an Intel GT NIC and a PCI video card, and save the PCIe x16 slot for a storage controller just in case you wish to expand at some point.

I use a PNY nVIDIA GeForce 8400 GS 512MB DDR2 PCI video card with DVI out, P/N VCG84512SPPB (around 30 quid if you hunt around). There are also some PCIe x1 cards floating around that you may want to look at. The Intel GT you can pick up for around 10-20 quid second hand.

RB
 
It's not enterprise class. Go for HP. Get a nice Gen8 machine and you will be laughing.

Sorry, but would disagree with this.

Supermicro products are designed and built for enterprise environments and use enterprise components. What they do not do is provide the support services for 2h/4h/NBD etc. warranty replacements.

Dell have used Supermicro boards in their products in the past (rebadged). One off the top of my head: the Dell C6100 (initial build) used a Supermicro X7 board.

Having said that, if experience of building ESXi servers is low then a ready built solution (HP/Dell/IBM) would be an easier way to go.

For a whitebox, the Supermicro X9SCM-iiF is a great E3 board for ESXi, with the -iiF variant using NIC chipsets that are fully supported by ESXi (the original didn't).

I would suggest load testing some hardware before fully committing. Disk I/O and RAM are usually the sticking points for virtualisation.

RB
 
The bit in bold is why I don't consider them enterprise class. ;)

You would say the same of Intel then?

OK, I get your point, but I would just split it up and say the hardware is enterprise class but there are no backup support services above the standard RTB warranty :D.

RB
 
ha. :) Next business day is critical in my opinion. I guess it all depends how much your downtime is worth to you. Look at this experience with Supermicro servers and support (granted there are many GOOD experiences, but when someone has had a nightmare it's worth finding out what can go wrong).

Will read when home. I am sure it will be quite interesting.

For my part I sell both (also IBM and Intel). I also have an HP server (ML110 G7) and currently 5 Dell servers at home (C6100 4-node, dual-processor cloud servers) for a Hadoop / Lustre cluster I am building.

The only unit that has failed is the HP ML110, but HP came round the next business day, replaced the processor, and it was back up again. I also had a ProCurve fail and that was also replaced the next business day. I cannot fault HP's service.

The only Supermicro issue I have seen was where a customer incorrectly flashed and bricked their motherboard. The Supermicro distributor took the board back and replaced it without charge, although it did take a while. That board was a Q67 model, so not enterprise level and not a big seller, so they had no local stock and had to send it back to the manufacturer.

In an enterprise environment support counts for a lot, but you can get third-party support packages and link them with vendor hardware if needed. I would happily deploy Supermicro servers, but I would make sure there was an adequate support contract to back them up in a business environment.

Oh, I'm not sure I would class an E5-2420 on the same level as an E3-1290v2. Passmark has the E5-2420 closer to the E3-1230v1, and the E3-1290v2 closer to the E5-2640.

The initial pricing of the HP in that link is very good, but adding drives that are also covered by the same warranty and support is likely to double the initial cost, if not triple it. The pricing where I am is much higher, but then the DL360s are the 'p' variant and come with the 26XX processors. Still around twice the price though...

RB
 
I don't get who is buying SuperMicro servers. The most important aspect of a server for a business is the support. I guess if you are buying SuperMicro servers from some reseller that also provides 24x7 support, then sure, but again, why? HP/Dell/IBM have refined their support model down to a pretty fine art, and it's not like SuperMicro is cheap!? I just don't get it.

To answer the original question: if it is for work, and downtime matters, do not buy a white box; always go for a supported solution.

The UK and US are not the only markets. If I can build a Supermicro server for half the price of an HP offering, and money is important (small business / startup), and the business has access to an IT person and can live without a server by using a backup PC for a few days, then that 50% cost saving can be spent elsewhere.

There is a place for them, but where I am the distributors handle the RMA, so I don't have to deal with Supermicro directly.

Now just don't get me started on Norco :mad:.

RB
 
Is that true about HP hot-swap smart drives - does it have to be an HP HDD model? I can't use any other HDD like a WD?

HP (and the other vendors like IBM, Dell etc.) warranty and support products bought from them. You can buy and use third-party parts (non-HP compatible RAM, hard drives etc.), but they will not be warrantied or supported by HP and may even invalidate the warranty on your unit (i.e. opening the unit to install third-party RAM). In some countries there is more leeway from the companies, but it is best to check.

Put it this way... if I bought a car from you, changed the seats to racing seats from another company and one broke, would you expect me to come back to you for a replacement?

This is why items like hard drives from the big players are usually much more expensive. You are paying for the item, the warranty and the support.

RB
 
Has anyone tried connecting SAN storage to ESXi using RDM? Any good / bad points to consider?

I have a 20Gbps InfiniBand link between a Solaris SAN and my ESXi 5.1 (free) server. I would prefer to connect via RDM so the OS and files are on the disks in their native format rather than inside a VMDK on VMFS with its 2TB file limit. I have tried this with SRP (SCSI RDMA Protocol) and used RDM to mount the target, and it works fine within Windows 2012, but I am getting very slow speeds (10MB/s) copying files from my workstation to the server. I am also seeing the same speeds with an SRP target mounted on the ESXi server (non-RDM) and formatted to VMFS. I half suspect it is an issue with my workstation, as boot times are now very slow for a machine based on an Intel 520 SSD, but I have not yet found the time to really track down the problem.
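For what it's worth, a quick sanity check from the ESXi shell to confirm the SRP target is actually presented to the host before worrying about RDM (treat this as a rough guide, exact output varies by driver):

# list storage adapters - the IB HCA should show up here if the SRP driver is loaded
esxcli storage core adapter list

# list the devices/LUNs the host can see - the SAN LUN should appear with its naa. identifier
esxcli storage core device list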

So, the question...

Are there any speed benefits / issues with mounting a SAN share (FC / iSCSI / IB etc.) using RDM rather than just mounting it as a standard drive and formatting it to VMFS?

Thanks
RB
 
RDMs *may* have a minute percentage reduction in latency for things like SQL, but it needs to be tested in your environment.

RDMs cannot be used in conjunction with FT.

If using vMotion/DRS/Storage vMotion, the RDM must be accessible to all hosts (much like shared storage).


VMFS-3 had a 2TB limit, VMFS-5 has a 64TB limit.
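If you want to check which VMFS version a datastore is actually using (datastore name is just an example), something like this from the shell should tell you:

# print datastore details, including the VMFS version, block size and capacity
vmkfstools -Ph /vmfs/volumes/datastore1

A fresh 5.x install formats new datastores as VMFS-5; datastores carried over from upgrades can still be VMFS-3.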

Thanks for the feedback. This is for a home setup, so I'm using ESXi Hypervisor (i.e. the free version).

I tracked the slow speeds down to some sort of network issue. Bypassing one of my routers corrected the problem, but I have not yet worked out what on the router / cables is causing a drop to 100Mbps.
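In case it's useful to anyone chasing the same thing, the negotiated link speed on the ESXi side can be checked from the shell (vmnic names will vary, vmnic0 below is just an example):

# show all physical NICs with their link state, speed and duplex
esxcli network nic list

# more detail for a single NIC
esxcli network nic get -n vmnic0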

RB
 
I put ESXi 5.5 on 3 nodes of a Dell C6100 and all seemed fine. I then moved the boot drive (with ESXi installed on it) to another node and installed again on the original node with a fresh drive (one node's IPMI password is unknown, hence the drive shuffling).

As this is the free Hypervisor and not a paid version, I have to use the vSphere Client. I ran up 4 copies, which worked fine on two of the nodes, but the 2 nodes I swapped the drives on kept losing connection... quite possibly not related to ESXi 5.5 specifically, but quite odd, almost as if they both had the same IP address (they didn't, I set a static IP for each node).
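For anyone wanting to set the per-node static IPs from the shell rather than the DCUI, something along these lines should do it (addresses here are examples only):

# set a static IPv4 address on the management VMkernel interface
esxcli network ip interface ipv4 set -i vmk0 -t static -I 192.168.0.21 -N 255.255.255.0

# set the default gateway
esxcfg-route 192.168.0.1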

I created a CentOS VM, de-personalized it and then cloned it so there were 4 VMs on each of the two correctly working nodes and all started up fine.

Apart from the drive shuffling vSphere issues, no others seen so far. Not really given it a good test though.

Not sure about the comment above on 3TB drives and RDM not working, but I am mounting an 8TB InfiniBand share and mapping to it via RDM for my Windows Server 2012 R2 Essentials server and it is working fine.

RB
 
Does the ESXi installer generate a unique ID based on the MAC address or some other "unique" aspect of the original motherboard, and is that why you're having problems with those two hosts? I've never come across this (because I've never done it).

This was my thought as well. Shame but not a show stopper.
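If it is a duplicate system UUID from moving the installed drive between nodes, you can at least compare them on each host, and there's a workaround I've seen suggested (not tested here) of removing the stored value so it regenerates:

# show this host's system UUID - compare across the two problem nodes
esxcli system uuid get

# suggested workaround: back up /etc/vmware/esx.conf, remove the /system/uuid line,
# then reboot so the host generates a new one
vi /etc/vmware/esx.conf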

RB
 