AMD Casually Announces Machine Learning FSR

AMD have very quietly announced that ML FSR is coming to Call of Duty: Black Ops 6, via their "Meet the AMD Ryzen 7 9800X3D Processor" video.

 
It says nothing about this coming to PC, as it could just be a feature for the PS5 Pro and/or eventually RDNA4 :confused:
 
If they don't bring it to RDNA 3 then I will be sorely disappointed, as I think RDNA 3 could definitely run it; anything else would just be a marketing gimmick to sell the new GPUs.
 
RDNA3 doesn't have deep learning cores.

All it has is a WMMA instruction set, which is only dubiously AI. It's AI in the sense that the GTX 1080 Ti could do ray tracing without RT cores.

There is a reason that Sony had to create their own chip: RDNA3 lacks real AI cores.

The video in the OP sounds more like they are hoping to use CPUs for ML FSR. I guess at least that would be universal.
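
For anyone unfamiliar with WMMA: it's a Wave Matrix Multiply-Accumulate instruction, i.e. a matrix multiply-accumulate (D = A × B + C) on small tiles at reduced precision, run on the existing shader SIMDs rather than on separate tensor units. A rough NumPy sketch of the operation one such instruction performs (the 16×16 tile size and FP16-in/FP32-accumulate precisions here are illustrative, not a hardware spec):

```python
import numpy as np

TILE = 16  # illustrative tile size for one matrix instruction

# Inputs at low precision (FP16), accumulator at higher precision (FP32):
# the usual mixed-precision multiply-accumulate pattern.
A = np.random.rand(TILE, TILE).astype(np.float16)
B = np.random.rand(TILE, TILE).astype(np.float16)
C = np.zeros((TILE, TILE), dtype=np.float32)

# One multiply-accumulate step: D = A @ B + C.
D = A.astype(np.float32) @ B.astype(np.float32) + C
print(D.shape)  # (16, 16)
```

The point of doing this in hardware is that one instruction covers the whole tile; without it the same maths still happens, just spread over many ordinary FMA instructions.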
 
Well, as long as it's good and I can use it on my 7800 XT, I'm happy.
 

Sony didn't design the AI chip.

The SoC used in the PS5 Pro is most likely a heavily modified version of AMD's AI 300 processors.

You're right that the cores aren't deep learning cores, but they are what AMD are most likely to use, as that instruction set can be used across GPU brands (Nvidia at least), which would help with the open source nature of FSR.

At least that's the presumption, as they are discussing this well before the RDNA4 launch, unless they plan on implementing AI FSR on the processor.

The crux of it is that no one really knows other than AMD, and it's all just speculation.
 
"AI" cores don't exist, "deep learning cores" don't exist.
You have instructions and data types. Whatever silicon can fetch, execute and store the most instructions on the most data will get the best performance. The data types are heavily dependent on the model in question. You also have software considerations, i.e. how well can you turn your code into some executable and feed that silicon.

The point is AMD doesn't lack "AI cores", RDNA3 has less low precision compute than Ada. That's it. How NV go and implement these instructions is up to them, AMD can make their choices as well, but marketing terms are exactly that.
AMD does have some staggering low precision compute in their CDNA MI300X and MI325X and the upcoming MI350X.
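
To make the data-type point concrete, here's a vendor-neutral NumPy illustration of why low-precision support matters: the same tensor in FP16 or INT8 takes half or a quarter of the bytes of FP32, so silicon that can execute one instruction over more low-precision elements per cycle gets a corresponding throughput win, traded against accuracy:

```python
import numpy as np

x = np.random.rand(1024, 1024).astype(np.float32)

# Same values at lower precision: half (FP16) and a quarter (INT8) of the bytes.
x16 = x.astype(np.float16)
x8 = (x * 127).astype(np.int8)           # naive scale-to-INT8 quantisation

print(x.nbytes, x16.nbytes, x8.nbytes)   # 4194304 2097152 1048576

# The accuracy cost of dropping precision:
print(np.abs(x - x16.astype(np.float32)).max())       # small FP16 rounding error
print(np.abs(x - x8.astype(np.float32) / 127).max())  # larger INT8 quantisation error
```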
 
A CPU can do machine learning by your definition. It genuinely can, for the record, because CPUs are general purpose (my abacus probably can too; I need to get an abacus for this comment to work).

AMD have not built purpose-designed deep learning accelerators into RDNA3 that perform at the level required to make them useful. In XDNA they do have dedicated silicon.
 
Yes, a CPU can do ML. I do linear regression all the time using only a CPU.
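
For illustration, an ordinary least-squares fit on a CPU is a handful of NumPy lines (toy data, just to show how computationally light it is next to a neural net):

```python
import numpy as np

# Toy data: y = 2x + 1 plus noise.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 1000)
y = 2 * x + 1 + rng.normal(0, 0.5, 1000)

# Closed-form least squares for slope and intercept.
X = np.column_stack([x, np.ones_like(x)])
slope, intercept = np.linalg.lstsq(X, y, rcond=None)[0]
print(slope, intercept)  # ~2.0, ~1.0
```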

AI, and deep learning in particular, needs huge compute and bandwidth to be viable though, hence the parallelism that comes from GPUs.

AMD's CDNA doesn't have a whole separate block of silicon with AI cores either. CDNA has compute units very similar to RDNA's, with extra transistors spent on supporting AI-focused data types and instructions.

AMD are now recombining RDNA and CDNA, presumably because local inference workloads are becoming a big thing and gaming is introducing more AI workloads: https://www.techpowerup.com/326442/...ular-gpu-architecture-similar-to-nvidias-cuda

What AMD do not have today is the software stack to support deep learning, especially training, because PyTorch (and other AI libraries) are heavily dependent on CUDA. ROCm is coming along though, and you can now run most inference workloads on AMD GPUs trivially.
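
As a concrete example of that last point: on a ROCm build of PyTorch, AMD GPUs are exposed through the same torch.cuda device API, so stock inference code runs unchanged (assuming a supported Radeon card and a ROCm PyTorch install):

```python
import torch

# On ROCm builds of PyTorch, AMD GPUs appear through the regular
# torch.cuda API, so "cuda" below selects the Radeon card.
device = "cuda" if torch.cuda.is_available() else "cpu"

# A tiny inference pass.
model = torch.nn.Linear(128, 10).to(device).eval()
with torch.no_grad():
    out = model(torch.randn(1, 128, device=device))
print(device, out.shape)  # e.g. cuda torch.Size([1, 10])
```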
 
The ML you do on your CPU is computationally simple, and it is nowhere near on par with scoring, in milliseconds, the neural nets required to do what DLSS and XeSS do. Linear regression has little to no link to FSR.

So let's keep the discussion in context.
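
To put rough numbers on that (all figures below are illustrative assumptions, not measured DLSS/XeSS costs): even a very small convolutional upscaler at ~100 FLOPs per output pixel, given a ~2 ms slice of the frame to finish in, needs on the order of 0.4 TFLOP/s sustained at 4K, and real upscaling networks are considerably heavier than that:

```python
# Back-of-envelope cost of scoring a small upscaling net per frame.
# All numbers are illustrative assumptions, not measured DLSS/XeSS figures.
pixels_4k = 3840 * 2160           # output pixels
flops_per_pixel = 100             # assumed cost of a very small conv net
budget_s = 0.002                  # assumed 2 ms slice of the frame budget

flops_per_frame = pixels_4k * flops_per_pixel   # ~0.83 GFLOP per frame
required_rate = flops_per_frame / budget_s      # FLOP/s needed to hit the budget

print(f"{flops_per_frame / 1e9:.2f} GFLOP per frame")   # 0.83
print(f"{required_rate / 1e12:.2f} TFLOP/s sustained")  # 0.41
```

That is a workload linear regression never gets near, which is the point above.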
 