
AMD ROCm comes to desktop (AMD using CUDA, sort of)

Soldato · Joined 17 Jun 2004 · Posts: 7,701 · Location: Eastbourne, East Sussex
I guess in the same spirit as this thread: with the release of Blender 3.6 earlier this week, AMD has finally got hardware RT working in Cycles, with some caveats.

From my early testing, the speedup varied from 9% to 35% depending on the materials present in the scene.
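For a rough sense of what those percentages mean for render times, here's the arithmetic. The 9% and 35% figures are the ones from my testing above; the 100-second base frame is just a made-up example:

```python
# What an N% throughput speedup means for the render time of one frame.
# The 9% and 35% figures are quoted from the post above; the 100 s base is hypothetical.

def render_time_with_speedup(base_seconds: float, speedup_pct: float) -> float:
    """Render time for the same frame after an N% throughput improvement."""
    return base_seconds / (1 + speedup_pct / 100)

base = 100.0  # hypothetical 100-second frame
print(round(render_time_with_speedup(base, 9), 1))   # 91.7
print(round(render_time_with_speedup(base, 35), 1))  # 74.1
```

So even the low end of that range shaves a noticeable chunk off a long render.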
 
Moore's Law Is Dead said it best:


Oh they care, of course they care. CUDA was the thing Nvidia had which made Nvidia the only choice; now there is another choice.
 
Erm...

Oh, they're bringing it to Windows!

Wondered when they were finally going to do that. Seemed odd to me that they hadn't done it earlier.

It's been there since version 5.4.3, so it's not as if it's new.
 
The AMD instructions on the main repo and the video linked above recommend using this fork: https://github.com/lshqqytiger/stable-diffusion-webui-directml which uses ONNX / DirectML (Microsoft), so nothing to do with ROCm. It's pretty broken actually; larger images will generally bomb out, as there's no support for basic stuff like not running out of memory. It's not worth the hassle currently if you have an AMD card; you're better off renting a cloud instance or paying for a subscription to something for now.
 
21 Watchers · 66 Deviations · 591 Pageviews

My stats on a website concerning generated images and the like seem to be at odds with your view.

It's not using ROCm, but running out of memory doesn't happen to me unless I use a VAE. No one said Stable Diffusion was ROCm.

I hear that ROCm is a CUDA clone and that the next AMD consumer cards will have it this fall.

Actually, all I said was that ROCm will enable Stable Diffusion model training on AMD cards.
 

Just pointing out that the SD build for Windows is not using ROCm, given that this thread is about ROCm. I tried SD on Linux when I had a 5700 XT using ROCm and it was not amazing; training certainly wasn't going to work, as you had to take the CUDA version and apply a bunch of shims to ROCmify it.

I don't know what others' experience of the DirectML / ONNX fork is, but it blows up with OOM errors on images larger than 768x768 (and occasionally on smaller ones), or when trying to use Hires upscaling. That's a problem with DirectML, and there are a bunch of GitHub issues on the repo about it. I have 20GB of VRAM and it still happens for me.
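Until DirectML's memory handling improves, the only real workaround is to catch the failure and retry smaller. Here's a hypothetical sketch of that pattern: `render()` is a stand-in that just mimics the over-768x768 behaviour described above, not anything from the fork's actual code:

```python
# Hypothetical workaround sketch: retry generation at smaller sizes on OOM.
# render() is a stand-in, NOT a real function from the webui fork; it just
# mimics the "anything over 768x768 bombs out" behaviour described above.

def render(width: int, height: int) -> str:
    if width > 768 or height > 768:
        raise MemoryError("DML allocation failed")
    return f"image {width}x{height}"

def render_with_fallback(width: int, height: int) -> str:
    """Halve the resolution on OOM until it fits, or give up at 256px."""
    while True:
        try:
            return render(width, height)
        except MemoryError:
            if width <= 256 or height <= 256:
                raise  # give up rather than loop forever
            width, height = width // 2, height // 2

print(render_with_fallback(1024, 1024))  # image 512x512
```

Obviously that just trades resolution for reliability, which is exactly why the upscaling path being broken hurts so much.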
 

For now it only really works properly with RDNA3 and CDNA3.
 