I think the driving force behind the current RAM shortage is that OpenAI alone is reportedly buying up something like 40% of unprocessed wafer output (probably for their Sora service), and I suspect a good chunk of the remaining 60% is being bought up by other AI companies. They're essentially hoarding unprocessed RAM with the intention of using it in future datacentres.
AI has some good uses, for example in medical and scientific areas, but I think it's just wrong that companies are buying up RAM so that people can generate cat-themed slop content, when that RAM would be much better used in personal systems for things such as gaming.

I believe AI video generation needs significantly more RAM and GPU power than chatbots such as ChatGPT. You can run a basic open source chatbot model, such as the 12 billion parameter version of Google's Gemma 3, offline on just 16GB of RAM, and it performs about as well as the original version of ChatGPT did. The current ChatGPT model (which is proprietary, so the exact details aren't known) likely needs around 1-2TB of RAM, but since it runs on servers with several powerful GPUs, it can probably process lots of requests from different users at once. AI video generation, meanwhile, is far more resource intensive: I had a look at one of the most basic AI video models capable of running offline on 16GB of RAM, and it looked like it would take days to generate a shortish clip on the base Apple M3 processor. By comparison, the 12 billion parameter Gemma 3 model can output a reply on the base Apple M3 within a few seconds.
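To give a rough sense of where those RAM figures come from, here's a back-of-envelope sketch of the memory needed just to hold a model's weights. The 4-bit quantization figure for Gemma 3 and the 1-trillion-parameter size for a frontier model are my own illustrative assumptions (the real ChatGPT model's size isn't public), and this ignores KV cache and runtime overhead, so actual requirements are somewhat higher.

```python
# Back-of-envelope estimate of RAM needed to hold a model's weights.
# Assumption: weights dominate memory use; KV cache, activations, and
# runtime overhead are ignored, so real requirements are higher.

def weight_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    """Memory in GB to hold the raw weights of a model."""
    return params_billions * 1e9 * bytes_per_param / 1e9

# Gemma 3 12B at 4-bit quantization (~0.5 bytes/param): about 6 GB,
# which fits in a 16 GB machine alongside the OS and other apps.
print(weight_memory_gb(12, 0.5))    # 6.0 GB

# A hypothetical 1-trillion-parameter frontier model at 16-bit
# precision (2 bytes/param): about 2000 GB, i.e. roughly 2 TB.
print(weight_memory_gb(1000, 2.0))  # 2000.0 GB
```

This is why a quantized 12B model is comfortably a laptop-class workload, while a frontier-scale model lands in the terabyte range and has to be sharded across server hardware.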
Personally, I feel that legislation needs to be passed to prevent AI companies from preemptively hoarding wafers they don't plan to use immediately. This would help ease the current RAM shortage and protect consumers from short supply and price gouging. Unfortunately, given the current state of things, that seems unlikely.