
Speculation - AMD isn't going to be able to offer really competitive PC (4K) or console performance until they have GPUs with >100 Compute Units

It'll be interesting to see if we'll reach the physical limits of silicon any time soon and will have to move onto some other material.
I suspect we'll see changes to the design of transistors before the use of other materials, GAA and MBC (gate all around and multi-bridge channel) being the likely successors to FinFET.
 
The RX 7600 desktop card will be based on Navi 33, won't it? That's made on TSMC's 6nm fabrication process. Set expectations accordingly :)
 
I agree with this. I don't think them stopping at 96 CUs is any sort of design limitation; it's a choice, and it just so happens to be pretty much exactly 300mm².

No, AMD's whole thing is doing things efficiently; they are obsessed by it, in a good way. It's how they were first to MCM x86 CPUs, 3D stacking, and now GPUs too.
Breaking what would be a large monolithic die into multiple smaller chunks means you get better wafer yields: past a certain die size, more and more of the dies come out defective. Once you know that, you realise why AMD chose to make the logic die 300mm² and no larger.
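To illustrate the yield point, here's a minimal sketch using a textbook Poisson defect-density model (the defect density and die sizes are assumed round numbers for illustration, not AMD or TSMC figures):

```python
import math

def die_yield(die_area_mm2: float, defects_per_cm2: float) -> float:
    """Poisson yield model: fraction of dies expected to be defect-free."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)

D0 = 0.1  # assumed defect density (defects per cm^2), purely illustrative

for area_mm2 in (300, 600):  # a ~300mm2 chiplet vs a ~600mm2 monolithic die
    print(f"{area_mm2}mm2 die: ~{die_yield(area_mm2, D0):.0%} defect-free")
```

With those assumed numbers a 300mm² die comes out roughly 74% defect-free against roughly 55% for a 600mm² monolithic one, which is the whole argument for keeping the logic die small.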

Right now Nvidia's philosophy is more about piling on more and more shaders, making the dies larger and larger. AMD try not to grow die sizes; instead they grow performance gen on gen by increasing IPC and/or clock speeds efficiently.
I'll give you an example.

RX 5700 XT: 2560 shaders @ 1.9GHz, 251mm² on 7nm, 225W ≈ RTX 2070
RX 6700 XT: 2560 shaders @ 2.6GHz, 335mm² on 7nm, 230W ≈ RTX 2080 Ti

It's grown in size because of the 96MB of L3 cache, but critically they are both on the same 7nm, and despite the die actually growing a bit and the card being about 40% faster, the power consumption is near identical.
That's purely architectural engineering.
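As a quick sanity check of those figures (just arithmetic on the numbers quoted above; the "about 40% faster" is the claim from the post, not a measurement):

```python
# Back-of-the-envelope comparison of the two cards quoted above.
rx5700xt = {"clock_ghz": 1.9, "watts": 225}
rx6700xt = {"clock_ghz": 2.6, "watts": 230}

clock_uplift = rx6700xt["clock_ghz"] / rx5700xt["clock_ghz"]  # ~1.37x
power_uplift = rx6700xt["watts"] / rx5700xt["watts"]          # ~1.02x

perf_uplift = 1.40  # "about 40% faster", taken from the post
perf_per_watt = perf_uplift / power_uplift                    # ~1.37x

print(f"Clock uplift:       {clock_uplift:.2f}x")
print(f"Power uplift:       {power_uplift:.2f}x")
print(f"Perf/W improvement: {perf_per_watt:.2f}x")
```

Almost all of the extra performance comes from clock speed at essentially the same power, on the same node, which is the "purely architectural engineering" point.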

They did the same thing with Zen 2 vs Zen 3: very different per-core performance between those two CPUs, yet the power consumption is near identical and they are again on the same 7nm. Again, purely architectural engineering.

So will the next GPU get more CUs? Not unless AMD can squeeze them into the same space; what they will do is find other ways of increasing performance.
There is reportedly an RDNA 3.5 or 3+ in the oven clocking way over 3GHz efficiently, something that IMO RDNA3 was meant to do, but something went wrong and they couldn't get it fixed in time before it had to launch. Whatever it was is now reportedly fixed.

Well, you had Navi 23 beating Navi 10 with fewer shaders and a slightly smaller die on the same TSMC 7nm process node, and much lower power consumption. Navi 33, made on the slightly denser 6nm process, is much smaller and probably going to get close to an RTX 3060 Ti (TPU has the RTX 3060 Ti around 16% faster at QHD). But at 207mm² it is only around 31% larger than the 158mm² Navi 14 (RX 5500 XT) whilst being over twice as fast. So AMD is making decent improvements, but it is quite clear that:
1.) They don't want to make large dies like Nvidia does
2.) They don't seem to want to put the best nodes towards dGPUs, and would rather make CPUs on them

Basically they are running their dGPU division on a budget IMHO, and not wanting to take as much risk. They are far more interested in making CPUs and APUs. Just look at the Radeon Instinct MI300 APU.
 

It's not making them any money, and they don't think it ever will, not without billions in investment, and even then the dGPU market is so fickle that just not being the right brand could torpedo literally anything they try.

I've said it before: they have tried, it didn't work, they have given up...
 

Even Nvidia's dGPU gaming revenue was barely higher than AMD's console revenue a few months ago. Sure, it's higher margin, but the fact is that consoles are much lower risk. During the pandemic it was quite clear AMD was more interested in selling CPUs and console SoCs than in pushing wafers towards dGPUs.
 
AMD don't get to choose where their revenue comes from; they follow the money.

If there was more money in it for them in dGPUs, they would invest more in dGPUs. :)
 
It's like... what is more important to us: adequate VRAM or DLSS?

AMD can't put Tensor cores on their GPUs, it's proprietary Nvidia tech, so they do what they can: they put more VRAM on and say "hey, we don't have DLSS, but our GPU will display your textures correctly, you have a choice".
And we made it...
 

Other companies integrate similar functionality (the Tensor cores just process certain workloads), so there is nothing to stop AMD doing the same.

Edit!!


The company could be paving the way to support advanced artificial intelligence algorithms, such as modern super resolution technologies, with its upcoming RDNA3 architecture. AMDGPU is the backend for AMD GPUs in the LLVM compiler library, updated by AMD employees themselves. Some users follow these patches very closely, as they often reveal what the new generation of GPUs might bring to the table.

In this case, Wave Matrix Multiply-Accumulate was added to the GFX11 architecture, which is the codename of the upcoming RDNA3 consumer gaming GPUs. This instruction will, as the name suggests, operate on matrices: rectangular arrays of numbers. This type of data is used heavily by AI/ML algorithms to multiply large sets of numbers.


Our latest RDNA™ 3 GPUs provide the ability to accelerate Generalized Matrix Multiplication (GEMM) operations. This means that you can now get hardware-accelerated matrix multiplications that take maximum advantage of our new RDNA 3 architecture. This new feature is called Wave Matrix Multiply Accumulate (WMMA).
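For anyone wondering what a matrix multiply-accumulate actually does, here is a minimal NumPy sketch of the D = A×B + C operation that WMMA-style instructions accelerate in hardware (just the maths on a small tile, not the actual GPU intrinsic; the 16x16 tile size and FP16 inputs are illustrative assumptions):

```python
import numpy as np

# WMMA-style tile operation: D = A @ B + C
# Hardware typically works on small fixed-size tiles in low precision
# (e.g. FP16/BF16 inputs) while accumulating in higher precision (FP32).
tile = 16
A = np.random.rand(tile, tile).astype(np.float16)
B = np.random.rand(tile, tile).astype(np.float16)
C = np.zeros((tile, tile), dtype=np.float32)

D = A.astype(np.float32) @ B.astype(np.float32) + C
print(D.shape)  # one (16, 16) tile of the much larger GEMMs AI models chew through
```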

I told you both Nvidia and AMD will use DLSS/FSR to push people to newer generations through specific hardware support. An example is Frame Generation only being supported on the RTX 4000 series.
 

I didn't know that; looks like they have their own version. But if it doesn't match or better DLSS it's pointless. FSR is not far off DLSS, but the mere fact that DLSS is still better is the emphasis, and that's what does for it.
 
Nothing short of tech tubers saying "they are both exactly the same" will do, and that will be a high bar to clear given their default is "DLSS is better". They will scrutinise it hard with a view to reaching that conclusion, as it's the safe conclusion to make, so it literally needs to be impossible for them to find any fault at all in comparison.
 
The choice.

It's an easier choice if you know you are generally going to be playing at 1440p anyway (with 4K upscaling maybe being an option in some games).

DLSS Quality (display resolution 4K) works well on cards that can do native 1440p, but upscaling is still an option anyway through FSR2 or Radeon Super Resolution on AMD cards.

I am curious though, does DLSS Quality mode (or FSR2 Quality) always look better than 1440p native resolution?

EDIT - looks like Ultra Quality mode on FSR uses an internal resolution over 1440p, so maybe best to avoid that if performance is a problem.
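For reference, the internal render resolution falls out of the published FSR scale factors (Ultra Quality 1.3x is an FSR 1 mode; FSR 2 starts at Quality 1.5x). A quick sketch for a 4K output:

```python
# Internal render resolution for a 3840x2160 output at each per-axis
# FSR scale factor. Ultra Quality (1.3x) is FSR 1; FSR 2 offers
# Quality (1.5x), Balanced (1.7x) and Performance (2.0x).
output_w, output_h = 3840, 2160
modes = {"Ultra Quality": 1.3, "Quality": 1.5, "Balanced": 1.7, "Performance": 2.0}

for name, factor in modes.items():
    w, h = round(output_w / factor), round(output_h / factor)
    print(f"{name:14s} -> {w}x{h} internal")
```

Ultra Quality works out to roughly 2954x1662, which is indeed above 1440p; Quality at 4K output is exactly 2560x1440.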
 
RDNA2 maxed out at 80 CUs.
RDNA3 maxed out at 96 CUs.

On consoles, the Series X GPU has 52 Compute Units.

Generally speaking, AMD has been improving performance as the number of CUs is scaled up on RDNA3, but they kind of hit a wall at 96 CUs due to power constraints (already 355W for the RX 7900 XTX).

It makes sense (assuming the scaling is decent) that a GPU with 2x the compute units of the Series X console GPU would provide very good 4K performance on both consoles and desktop graphics cards, particularly because the RX 7900 XTX already performs well (94 FPS 1% lows at 4K in most games, according to Techspot's review: https://static.techspot.com/articles-info/2601/bench/4K-p.webp).
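As a very rough sanity check of the "2x the Series X CUs" idea (naive scaling with an assumed efficiency figure and equal clocks, so treat it as illustration only):

```python
series_x_cus = 52
target_cus = 2 * series_x_cus  # 104 CUs, just past the >100 CU mark

# Assume only ~75% of the extra CUs translate into extra performance once
# bandwidth and power limits bite (assumed figure, not measured data).
scaling_efficiency = 0.75
ideal_speedup = target_cus / series_x_cus                        # 2.0x
realised_speedup = 1 + scaling_efficiency * (ideal_speedup - 1)  # ~1.75x

print(f"~{realised_speedup:.2f}x the Series X GPU at equal clocks (rough estimate)")
```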

It seems likely that they will only be able to accomplish this with RDNA4, which will offer a die shrink to either a 3 or 4nm TSMC fabrication process, allowing AMD to reduce the power consumption a considerable amount. The main problem with RDNA3 is that on 5nm, it doesn't appear to scale all that much higher than RDNA2.

It's a good bet that 100 CU or greater GPUs will be a thing, even at the upper mid end (e.g. same tier as the 6800 XT), but I think it's much more likely to happen if they are able to use TSMC's future 3nm fabrication process.

It is true that the clock rate can also be scaled up on desktop GPUs (compared to the Series X GPU, which is already running at around 200W clocked at 1825MHz), but based on AMD's analysis it increases power consumption more than might be considered desirable.
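To show why clocking up costs so much power, here's a minimal sketch of the usual dynamic-power relation P ∝ C·V²·f (the voltage figures are made-up illustrative values, not AMD data):

```python
# Dynamic power scales roughly with capacitance * voltage^2 * frequency,
# and higher clocks usually need higher voltage, so power grows much
# faster than linearly with clock speed.
def relative_power(freq_ghz: float, volts: float,
                   base_freq: float = 1.825, base_volts: float = 0.9) -> float:
    """Dynamic power relative to a baseline operating point (P ~ V^2 * f)."""
    return (volts / base_volts) ** 2 * (freq_ghz / base_freq)

# Illustrative points: console-like clocks vs desktop-like clocks.
print(f"1.825 GHz @ 0.90 V: {relative_power(1.825, 0.90):.2f}x")
print(f"2.600 GHz @ 1.05 V: {relative_power(2.600, 1.05):.2f}x")
```

With those assumed voltages a ~42% clock increase roughly doubles dynamic power, which is why desktop cards pull so much more than the consoles.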
Eh, console games are way more optimised than PC ports, so you can't compare CU to CU.

Too many people here think the consoles perform the same as the equivalent silicon on PC, and also think more people own GPUs above a 3060 than own consoles.

We aren't far off from consoles actually being able to do this even without upscaling, which is going to make the PCMR look like clowns.
 
I thought that lots of titles like Cyberpunk 2077 only ran at ~30 FPS (4K resolution) on consoles?

EDIT - According to Digital Foundry, "For both consoles, ray-tracing mode is locked 2560x1440 at 30fps".

I suppose the answer is that it depends. The Witcher 3 next-gen update on Series X can do 4K + 60 FPS in performance mode (with dips to 50 FPS in some areas), but 30 FPS when ray tracing is enabled (with dips below 30). So I imagine this is something that Sony and Microsoft would want improved for refreshed consoles.

In my opinion, RT is not worth the hit you take to performance on current-generation consoles.
 
Why use Cyberpunk as a benchmark?

Remember, this is a game that got pulled from the PSN store for a while.
 
What AMD needs (for gaming) is dedicated RT cores rather than the hybrid approach of RDNA 2 & 3. Otherwise they are already competitive in raster, 4K, light hybrid RT, cost, etc., and as for consoles they're more than competitive - they're the only game in town!
 