AMD GPUs for AI (LLMs, Stable Diffusion, etc.)

Does anyone have experience of using AMD GPUs for offline AI?
I'm currently running a 10GB RTX 3080 in my SFF living room PC connected to a 4K LG OLED TV. I use the PC for a mix of media consumption, gaming (recently discovered Helldivers 2), and some AI (mostly text generation in LM Studio). The whole thing runs seamlessly, but I'm ready to take things up a notch. I'm willing to consider the upcoming 9070 XT, given its 16GB of VRAM and (potentially) 7900 XT/4080 levels of raster performance (as well as the improved RT, which would look great on this TV).
My one concern is the number of conflicting reports I have read online about people struggling to get LLMs running on AMD.
Anyone here who can share their experience of AI on AMD?
 
ROCm is something to look into; I do know it has some AI capabilities (not something I've ever looked into, tbh).
Thanks for the reply. I have looked into it but it's hard to get a consensus answer. Was hoping people here might have some real-world experience they can share. It's going to be a decisive factor in my next purchase but I don't want to pay over the odds for more VRAM!
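
For anyone wanting a quick real-world data point of their own: the usual first test is whether a ROCm build of PyTorch actually sees the card. A minimal sanity check, assuming the ROCm wheels from the PyTorch download index are installed (ROCm is Linux-first; Windows support is much patchier):

```python
# Minimal sanity check for a ROCm build of PyTorch.
# Assumes the ROCm wheels are installed (the version string then
# carries a "+rocm" suffix); on a CUDA build, torch.version.hip is None.
import torch

print(torch.__version__)           # e.g. "2.x.x+rocm6.x" on a ROCm build
print(torch.version.hip)           # HIP version string on ROCm, None on CUDA builds
print(torch.cuda.is_available())   # ROCm reuses the torch.cuda API via HIP
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. "AMD Radeon RX 7900 XTX"
```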
 
My exp is with SD w/ an RX 6800. I can tell you that it's an absolute pain to deal with on Windows, and you're way behind Nvidia not just in perf but also in simply getting things to work. I would absolutely avoid AMD like the plague for ML. For me it's another important factor in why I'm jumping to Nvidia with the 5000 series.
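
For context on what "simply getting things to work" involves: the path most people end up on for Windows + AMD is DirectML rather than CUDA. A hedged sketch of that route with diffusers (assuming `pip install torch-directml diffusers transformers`; the checkpoint ID is just the standard SD 1.5 example, and memory-saving tweaks are often needed to keep it stable):

```python
# Hedged sketch: Stable Diffusion on Windows + AMD via DirectML.
# Assumes: pip install torch-directml diffusers transformers
import torch_directml
from diffusers import StableDiffusionPipeline

device = torch_directml.device()   # exposes the DirectML adapter as a torch device

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5"   # the standard SD 1.5 example checkpoint
).to(device)
pipe.enable_attention_slicing()    # trims VRAM use; DirectML is memory-hungry

image = pipe("a cottage in a snowy forest, oil painting").images[0]
image.save("out.png")
```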
 
Poneros said:
My exp is with SD w/ an RX 6800. I can tell you that it's an absolute pain to deal with on Windows, and you're way behind Nvidia not just in perf but also in simply getting things to work. I would absolutely avoid AMD like the plague for ML. For me it's another important factor in why I'm jumping to Nvidia with the 5000 series.
Thanks, Poneros. This confirms my fears about going AMD. Now if only I could get a decent amount of VRAM in an SFF! Probably going to have to go for a used 4070 Ti Super if the 5070 Ti is more than about 700 quid.
 
Poneros said:
My exp is with SD w/ an RX 6800. I can tell you that it's an absolute pain to deal with on Windows, and you're way behind Nvidia not just in perf but also in simply getting things to work. I would absolutely avoid AMD like the plague for ML. For me it's another important factor in why I'm jumping to Nvidia with the 5000 series.

My experience is the complete opposite. Visit https://www.amuse-ai.com/. Offline image generation works out of the box with a one-click install, no problem at all. No worrying about dependencies from GitHub etc.; all your Stable Diffusion models are there working fine. Even better, if you have an integrated AMD APU you get to utilise both the NPU and the GPU and, more importantly, your system RAM.

I've had zero issues with the APU regarding speed or compatibility with AI apps (there are plenty of legacy CUDA apps that will likely cause problems, though).

 
I've found LM Studio runs fine on a 5700G system, utilising the onboard GPU (although it's probably no faster than CPU-only, given the slow system memory).
Haven't dared try to run SD on it. :D

I know prompt processing used to be a bit slower on AMD, but no idea if that's still the case. Worth looking on the LocalLLaMA subreddit. There are a few people using 7900 XTXs there. Ultimately, higher-end Nvidia cards are still the ones to have.
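
For anyone who wants to check the prompt-processing vs generation split on their own card: LM Studio is built on llama.cpp, and the llama-cpp-python bindings expose the same engine along with its timing output. A rough sketch (the model path is a placeholder for whatever GGUF you run):

```python
# Hedged sketch: see prompt-eval vs generation speed with
# llama-cpp-python (pip install llama-cpp-python). "model.gguf" is a
# placeholder; n_gpu_layers=-1 offloads every layer to the GPU.
from llama_cpp import Llama

llm = Llama(model_path="model.gguf", n_gpu_layers=-1, n_ctx=4096, verbose=True)

out = llm("Summarise the plot of Hamlet in one paragraph.", max_tokens=128)
print(out["choices"][0]["text"])
# With verbose=True, llama.cpp prints its own timing breakdown,
# reporting prompt-eval and eval (generation) tokens/sec separately.
```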
 
I've used SHARK (Stable Diffusion) on a 7900 XTX, and my experience is that it mostly* works, and is fast.
* Some releases have been broken on AMD cards, and I had to revert to a previous version.
 

I was using a different version from the same programmer last year. Just installed this one and it feels familiar but faster. This is Stable Diffusion; I don't know what an LLM is.

"* Some releases have been broken on AMD cards, and I had to revert to a previous version."

This is what put me off using the version from last year. I should think it will be more stable now, since ZLUDA uses ROCm.
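
Since ZLUDA's whole trick is presenting the ROCm stack through CUDA's interface, one hedged way to confirm a ZLUDA setup is working is to ask a CUDA build of PyTorch what it sees (the ZLUDA installation steps themselves vary by app and aren't shown here):

```python
# Hedged check: with ZLUDA in place, a CUDA build of PyTorch should
# report the Radeon card as a "CUDA" device, because ZLUDA translates
# the calls to ROCm/HIP underneath. The exact name string varies by setup.
import torch

print(torch.cuda.is_available())       # True if ZLUDA is intercepting correctly
print(torch.cuda.get_device_name(0))   # typically the Radeon name, often tagged "[ZLUDA]"
```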
 