NAS/ESXi consolidation server build - Intel or AMD?

Hi,

I'm just about to embark on building a new home server to consolidate a couple of devices, give me a bit more processing headroom and open up some more expansion options (10Gb etc.).

Current setup is an 8th Gen NUC running ESXi and 5-6 VMs (2x Docker, PiHole, a Windows test host and a few others), and a Synology DS1813+ fully populated with WD Reds.

I'm looking at a box that will bring all the above together, and for the most part I've been set on a B550/R5 3600 setup. However, after doing some more reading about the way PCIe lanes are set up and distributed, I'm wondering if an Intel 9th/10th gen Z390/Z490 setup might be a better shout.

I intend to run one, possibly two NVMe drives as well as a GPU, HBA and quad-port NIC. Eventually the latter may turn into a 10GbE NIC, and I'll use the GPU for passthrough to a VM - probably only a 1050 Ti/1060, that sort of thing.
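My rough lane budget so far, assuming an R5 3600 on X570 - the exact slot split varies by board, so treat this as a sketch rather than gospel:

Code:
# Ryzen 3000 on X570 - 24 usable PCIe 4.0 lanes from the CPU
x16  -> primary slot (GPU), or x8/x8 across two slots if the board supports it
x4   -> first M.2 NVMe
x4   -> chipset uplink (second NVMe, HBA and quad-port NIC would share this)

# Comet Lake on Z490 - 16 PCIe 3.0 CPU lanes plus DMI 3.0
x16  -> primary slot, or x8/x8 across two slots
DMI  -> everything else hangs off the chipset over a shared x4-equivalent link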

Thoughts - Intel or AMD for the chipset?

Thanks!
 
I would say Intel is still the top dog within Enterprise at the moment, but AMD will work just as well; it's all about the stock versus cost versus performance trade-off you want. Intel will have better stock, I suspect.
 
That's kinda where I'm leaning. The i5 10400 is decent when paired with a Z490, and the cost is a smidge under a decent B550/R5 3600 setup at current prices. Edit: I take that back - Z490 is needed to get the lanes, and a decent Z490 board with the PCIe slot config I need is silly money compared to the CPU. Meh.

I'm a massive fan of Ryzen for desktop and gaming, but Intel feels a little better suited to this application.
 
I don't think you'd have a problem with AMD, but it depends on your OS really. I would recommend sticking to the HCL if going for one of the BSDs.

If you're using something derived from a recent Debian, Ubuntu or Fedora then AMD is fine (and depending on the board you choose you could end up with more PCIe lanes too).

Edit: I don't think Ryzen is on the HCL for ESXi, but maybe someone else here has tried it...
 
Other than at launch, Ryzen has been fine on Linux/BSD for a while - no issues with ESXi 6.x on my first-gen X370/1700/64GB set-up. I've not tried the 3600 on it, but I wouldn't anticipate issues at this stage from anything I've seen/read. I personally haven't moved to 7; it drops HCL support for a number of older hardware items and that's problematic for me. It also gets quite picky about the stepping of some of the newer Intel NICs on one of my test servers, and I couldn't be bothered to argue with it at the time. Similar story with Proxmox on Ryzen if that's your thing.
 
Good feedback, thanks. I'm planning on using an LSI 9211-8i and an Intel i350-T4, both of which I suspect are happily on the 7 HCL.

The stepping thing is interesting, I shall have a read up. The board I'm using has 2.5Gb Realtek NICs, but I have low expectations for their support and stability, hence the i350.
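If it helps anyone else, my plan is to sanity-check the exact silicon against the VMware Compatibility Guide before buying - grabbing the numeric vendor:device IDs from a Linux live USB and searching those rather than the marketing name. Something along these lines (the SAS2008 ID shown is from memory, so double-check it):

Code:
# List NICs and storage controllers with their numeric vendor:device IDs
lspci -nn | grep -Ei 'ethernet|sas|raid'

# Example of the sort of line to look for - the [1000:0072] bit is what
# you search on the VMware Compatibility Guide:
# 02:00.0 Serial Attached SCSI controller [0107]: LSI SAS2008 [1000:0072]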
 
Before going any further, check the HCL. The HBA is a no from memory, and the i350 should be a yes, but Intel Pro stuff, for example, wasn't, and my i211 (while on the HCL) had a chipset revision that wasn't. You can of course build a custom image or use a workaround to install on unsupported hardware, but in my case the NICs, Fusion-io and HBA all needed attention and I was packing to move house, so the cloud was an easier option. Not sure I am in a hurry to move everything back now.
 
Not being on the HCL doesn't mean it can't be made to work, just that it's been through deprecation and is no longer supported. The older Intel NICs, for example, are quite easy to coax into life, but some of the new advanced queueing doesn't perform as it did under 6.7. Proxmox is another option - it's less stringent in its hardware support, but it's no use if you are trying to learn ESXi or replicate environments from somewhere that uses ESXi.
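For what it's worth, when I've coaxed one of the older NICs into life it was basically a case of dropping the host's acceptance level and installing the old/community driver as an offline bundle - roughly this, though the bundle path and filename here are just examples and the exact VIB depends on the NIC:

Code:
# Allow community-supported VIBs on the host
esxcli software acceptancelevel set --level=CommunitySupported

# Install the driver offline bundle (copy the zip to a datastore first)
esxcli software vib install -d /vmfs/volumes/datastore1/net-driver-bundle.zip

# Reboot for the new driver to take effect
reboot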
 
Managed to get some time to put into this today. The good news is it appears that native ESXi 7 support for the 9211-8i isn't going to be a problem, as it quite happily passes through to a VM. Happy days.

The problem I do seem to have is that I can only pass a device through if it's in the top PCIe slot - I'm assuming that's because it's plumbed directly into the CPU rather than going via the chipset. I've been through the BIOS and enabled IOMMU, ACS, AEP CS and a whole host of other settings I've found suggested, but to no avail. I get the same error messages in vmkwarning.log:

Code:
2020-12-27T14:59:35.171Z cpu8:2097579)WARNING: PCI: 945: 0000:04:00.0: Cannot change ownership to PASSTHRU (non-ACS capable switch/root in hierarchy or ACS not enabled  on multi-function device)
2020-12-27T14:59:38.483Z cpu0:2097591)WARNING: PCIPassthru: PCIPassthruAllowed:175: Device passthru not possible on this system (no PCI ACS support)
2020-12-27T14:59:38.769Z cpu7:2097646)WARNING: PCIPassthru: PCIPassthruAllowed:175: Device passthru not possible on this system (no PCI ACS support)
2020-12-27T14:59:39.198Z cpu8:2097646)WARNING: PCIPassthru: PCIPassthruAllowed:175: Device passthru not possible on this system (no PCI ACS support)
2020-12-27T14:59:39.265Z cpu8:2097646)WARNING: PCIPassthru: PCIPassthruAllowed:175: Device passthru not possible on this system (no PCI ACS support)
2020-12-27T14:59:39.332Z cpu8:2097646)WARNING: PCIPassthru: PCIPassthruAllowed:175: Device passthru not possible on this system (no PCI ACS support)

Swap the device into the top slot and it works fine. I can get by with this config, but I was hoping to add a GPU at some point for passthrough as well, so I will need an additional PCIe slot. :(
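For reference, the workaround that keeps coming up in my searching is the disableACSCheck VMkernel option, which tells ESXi to skip the ACS capability check. I haven't tried it yet and I gather it weakens the device isolation guarantees, so treat it as a hack rather than a fix:

Code:
# Check how the HBA is currently seen (look for the Passthru Capable /
# Current Owner fields against 0000:04:00.0)
esxcli hardware pci list

# Commonly suggested workaround for the "no PCI ACS support" warning -
# disables the VMkernel's ACS check (weakens isolation, use with care)
esxcli system settings kernel set -s disableACSCheck -v TRUE

# Verify the setting, then reboot the host
esxcli system settings kernel list -o disableACSCheck
reboot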
 
Now why the hell couldn't I find that yesterday, grr. I ended up sending the B550 back and getting an X570 board instead. Thanks for the Google-Fu, much appreciated @Liquidfox
 
21 for me in April, although I stray further from hardware each year.

One day you're racking Sun gear in a data centre, then you're virtualising Domain Controllers on hosted blade centres, and before you know it you're provisioning containers in a public cloud in a country you've never even been to lol.
 