This is pretty outdated advice. The correct answer is AM5. AM5 has two more generations of CPU upgrades coming, supports up to 256 GB of RAM, and will beat the Mac into next week for significantly less money.
I mean, I’m happy to be wrong if AM5 now easily supports 256 GB of RAM at moderate-to-good speeds.
My understanding is that it’s difficult to get a lot of AM5 systems stable with that much RAM, and with all four DIMM slots populated the memory will often only run at around 3600 MT/s.
I’ll do some digging. I don’t think the Mac Studio as an AI machine is a bad idea, but I’m happy to be wrong if there is a better solution.
Edit: I was wrong, the Mac Studio supports 512 GB of RAM, not 256.
Edit Edit: yeah, I'm going to stand by my statement about using the Mac Studio as a dedicated AI/LLM machine for intense use. You can allocate the unified RAM to either the system or the GPU, so it can hold massive AI/LLM models. That matters more than your AM5 rig having 256 GB of RAM that it can't allocate to the GPU, leaving you reliant on a dedicated card like the RTX 5090 with much less VRAM.
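For what it's worth, on recent macOS releases on Apple Silicon there's a sysctl for raising the cap on how much unified memory the GPU is allowed to wire (the default cap is roughly 75% of total RAM). A minimal sketch, assuming a 512 GB Mac Studio; the exact value you pick is up to you and the setting resets on reboot:

```shell
# Let the GPU wire up to ~460 GB of the 512 GB unified memory pool.
# Value is in megabytes; leave headroom for the OS and other processes.
sudo sysctl iogpu.wired_limit_mb=471040
```

That's what makes the "one big pool" argument work: the same physical RAM serves as VRAM without any PCIe copy.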
You can split the model across system RAM and VRAM as per this article, but performance isn't amazing.
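To be concrete about what that split looks like in practice, here's a sketch using llama.cpp (the model path is hypothetical): the `-ngl` flag controls how many transformer layers get offloaded to the GPU, with the remainder staying in system RAM on the CPU side.

```shell
# Offload 20 of the model's layers to VRAM; the rest run from system RAM.
# Throughput drops sharply the more layers are left on the CPU side.
./llama-cli -m ./models/model.gguf -ngl 20 -p "Hello"
```

So a 256 GB AM5 box can technically *hold* a huge model, but anything that doesn't fit in the 5090's VRAM runs at CPU/RAM speed.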
Here's another piece on Apple Silicon for AI/LLM use. There was a video from Level1Techs on an AM5 prosumer rig with 256 GB of RAM, and I'll try to find that as a source as I think he mentioned AI/LLM usage with it.
If all you want to do is play around with small models that fit in 16 GB of VRAM, then fine, any AMD or Intel system will do, since you'll be using a dedicated GPU anyway.