Hello there! Thanks for sharing your build. In fact, I have a similar ESXi build to yours in my plans, with an AMD 1900X and the same ASRock X399 Taichi. If you don't mind me asking, could you give a quick summary of the major issues you had and the fixes for them? Did you manage to get SATA working in the end? I don't have a spare NAS to work with, so I'd consider it pretty high on my "required to be working" list, along with PCIe (GPU and other PCIe peripherals) and USB passthrough, which I see you have managed to get running.
Hey buddy, I can indeed. The build at this point in time isn't a straightforward plug-and-play job, as I expect it will be with future ESXi updates. There are certainly issues, as you point out, some of which I have yet to find a fix for. Here's a brief summary of the issues in the order in which you are likely to hit them. The caveat is that this applies to installing from VMware-VMvisor-Installer-6.5.0.update01-5969303.x86_64.iso; the rollup ISO cannot at this point be used to install ESXi on Threadripper, because its rollback scripts roll back to the drivers used in the first ISO, which, as we know, don't work at this point:
1) On install, the installer hangs on vmw_ahci. To get past this I rolled back the AHCI drivers (sketched below). With the rolled-back drivers I can see the AHCI controllers and even select them for passthrough, but I cannot see the disks in ESXi to use as datastores. M.2 disks do not have this issue and are working a treat in ESXi. Be warned that onboard SATA RAID is also probably not an option unless it's supported by ESXi; I have yet to test that, but I can certainly build a quick array from the spare 1TB drives destined for the Members Market to find out.
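For anyone else who hits the vmw_ahci hang, the rollback I'm describing is the usual Threadripper/Ryzen workaround: disable the native vmw_ahci module so the host falls back to the legacy vmklinux ahci driver on the next boot. Treat the following as a rough sketch from an SSH session rather than gospel; it gets past the hang for me but, as above, still doesn't let me use the SATA disks as datastores:

# disable the native AHCI driver; the legacy ahci driver takes over after a reboot
esxcli system module set --enabled=false --module=vmw_ahci
# sanity check which AHCI modules are enabled/loaded
esxcli system module list | grep -i ahci
reboot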
2) The installer hangs on xhci_xhc on install. I suspect this is down to the USB 3.1 Type-C support on the chipset. Again, I rolled back the drivers and am currently running the ports at USB 2 speeds; a sketch of that rollback is below as well.
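The USB rollback is the same trick, as far as I understand it: 6.5 ships the consolidated vmkusb driver, and disabling it lets the older legacy USB drivers claim the controllers instead, which is why I'm stuck at USB 2 speeds. Again a hedged sketch, run from the ESXi shell once the host is up:

# disable the native USB driver so the legacy USB drivers load on next boot
esxcli system module set --enabled=false --module=vmkusb
reboot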
3) GPU and PCI passthrough: all testing indicates that this just works, well, kinda, at least with AMD cards on the GPU side, and for other PCI devices I see no reason why the same would not apply. I have not tested NVIDIA GPUs, but could if required; I see no reason why they would be any different. Some things to be aware of here: passing through your only GPU will make the ESXi loader look like it has crashed on boot, which means no direct console to that host. It hasn't crashed! If you look at where it appears to have crashed, you will notice it hangs at exactly the point where the GPU is passed through. Luckily, by this point I had enabled remote management and SSH on my host, so I opened a shell and confirmed that the host was in fact up and loaded. Next up I used the web client (which, BTW, I absolutely hate compared to the old 5.5 vSphere client, which still works up to v6.0), logged into the host over the web, and everything was golden.

I fired up the VM with the GPU passed through and tried the web console, which wouldn't work for that VM. Again thinking something was up, and having already enabled RDP, I was able to log onto the machine that way and check that all the hardware was installed, and it was. At this point I needed to get a bit creative, as many GPU tasks simply don't work over RDP, so I installed VNC, connected that way, and installed the AMD drivers. A bit of a faff, but working. It looks to me like the VMware virtual adapter, which I believe is required for the console, hands its duties over to the new adapter. I am convinced I can fix this but haven't done more testing, simply because I haven't had the time.
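If you hit the "crashed" loader yourself, this is the sort of sanity check I mean over SSH, all standard ESXi shell tools (the VM ID below is just a placeholder, take the real one from the getallvms output):

# confirm the host actually finished booting
esxcli system version get
uptime
# list registered VMs, then check the power state of the one with the GPU passed through
vim-cmd vmsvc/getallvms
vim-cmd vmsvc/power.getstate <vmid>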
To summarise, everything I have works so far except using SATA devices as datastores, which is a pretty big deal and probably a deal breaker for most. Because I had the NAS, and a schedule with something to finish, I haven't invested the time to properly fix this, but I think I can, and at this point I have a couple of methods to try. Something I did find very interesting while poking around in the BIOS a few days ago: there is an option to run the SATA controller under a different device ID (I'll dig into this later). Secondly, I have some contacts at VMware who I promised to send info to but haven't yet, as BT have been consuming my days for all the wrong reasons. I expect, though, that I will be able to resolve this issue.
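In case it helps anyone debugging the same thing, these are the checks I've been using to see where the SATA disks vanish, i.e. which driver has claimed the controller and whether ESXi can see the devices at all (nothing clever, just the built-in tools):

# show each storage adapter and the driver that claimed it
esxcli storage core adapter list
# list the devices ESXi can actually see; the missing SATA disks should appear here if the driver is happy
esxcli storage core device list
esxcfg-scsidevs -l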
Hopefully some stuff in here will help. I mentioned a Threadripper ISO previously, which is still not off the cards. I also know that all of these issues are 100% correctable, but it might involve some deep dives into incorporating device drivers or community-supported VIB files into the build, which in reality is fine for a home lab, but in production you would have to be slightly mental, or perhaps slightly sadistic, to run.
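For reference, when I say community-supported VIBs, I mean something along these lines on a running host; the VIB path is just a placeholder for whichever community driver you end up with, and dropping the acceptance level is exactly the bit that makes it a no-go for production:

# allow community-supported packages to be installed
esxcli software acceptance set --level=CommunitySupported
# install the driver VIB (placeholder path, point it at the real file on a datastore)
esxcli software vib install -v /vmfs/volumes/datastore1/drivers/example-driver.vib
reboot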
I do intend to finish the Linux piece for Methanoid and will dig deeper into the SATA issue, but work and life have been getting in the way and I don't have as much spare time as I would like for these things. I do intend to do a few more updates, though, including some on the Forti, NAS and UPS, and I will of course keep updating this thread with progress when I get around to tackling the other issues.