
AMD Navi 23 ‘NVIDIA Killer’ GPU Rumored to Support Hardware Ray Tracing, Coming Next Year

Surprisingly no, AMD has spent more $

[image: chart comparing AMD and Nvidia spending]

Where is that data from?

This site begs to differ:
https://www.macrotrends.net/stocks/charts/AMD/amd/research-development-expenses

Nvidia:
https://www.macrotrends.net/stocks/charts/NVDA/nvidia/research-development-expenses
 
This is pretty funny, when Ampere is at Turing levels of RT performance.

In games that make heavier use of RT, the uplift over Turing is considerable. It's when heavy use of older non-RT rendering is in play, with only a small number of RT effects, that the gains are less significant.
 
Why are people so obsessed with RDNA 2 being 80 CUs?

That probably wouldn't double the performance of, say, a GPU like the 40 CU 5700 XT. We know that because doubling the shader count alone did not double the performance of the RTX 3080 vs the RTX 2080 Ti.

Instead, there is around a 1/3 increase in performance over the RTX 2080 Ti. That's because increasing the core count only boosts the TFLOP count, not other areas like texture rate and pixel rate.
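As a rough sanity check on that arithmetic, here is a quick sketch (using Nvidia's published shader counts and boost clocks) of the paper TFLOP gap versus the measured gaming uplift:

Code:
# Peak FP32 TFLOPs = 2 ops/clock (FMA) * shader count * boost clock (GHz) / 1000
def tflops(shaders, boost_ghz):
    return 2 * shaders * boost_ghz / 1000

rtx_2080_ti = tflops(4352, 1.545)  # ~13.4 TFLOPs
rtx_3080    = tflops(8704, 1.71)   # ~29.8 TFLOPs

print(rtx_3080 / rtx_2080_ti)      # ~2.2x on paper

# Yet the measured gaming uplift is closer to ~1.3x, because texture
# rate, pixel rate and bandwidth didn't scale along with the shaders.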

Assuming the Xbox Series X GPU portion of the die is 170-200mm² (not the whole APU), we could see an RDNA 2 GPU with double the die size, so around 340-400mm².

The question is, would increasing the die size of a Series X-like GPU by 100% give a similar boost in performance? We know the Series X GPU is much more powerful than the Series S, and the whole APU is about 89.4% larger (both have practically the same CPU).
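For reference, a quick sketch of the compute side of that comparison, using Microsoft's published console specs (52 CUs at 1.825 GHz for Series X, 20 CUs at 1.565 GHz for Series S):

Code:
# RDNA 2 has 64 shaders per CU; peak TFLOPs = 2 * shaders * GHz / 1000
def rdna2_tflops(cus, ghz):
    return 2 * cus * 64 * ghz / 1000

series_x = rdna2_tflops(52, 1.825)  # ~12.15 TFLOPs
series_s = rdna2_tflops(20, 1.565)  # ~4.0 TFLOPs

print(series_x / series_s)  # ~3x the compute for roughly 1.9x the APU area

Compute scales faster than the APU area figures alone would suggest, partly because the CPU and IO take up a similar, fixed chunk of both dies and the Series S also runs at a lower clock.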


The same guy who called Ampere on Samsung said biggest Navi has 80 CUs. 80 CUs would be 5,120 shaders (64 per CU), so very similar to the 3090 in that respect. Personally I'd like them to go higher; MI100 is rumored to have 128.
 
In games that make heavier use of RT, the uplift over Turing is considerable. It's when heavy use of older non-RT rendering is in play, with only a small number of RT effects, that the gains are less significant.

which games?
 
which games?
Quake 2 RTX, which is fully path traced.

Ampere has lots more RT performance than Turing, but the issue is that the pipeline is shared with everything else, so when RT is mixed with traditional rendering techniques things get congested and the GPU can't fully utilise all the extra cores.
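A toy way to see this congestion argument is Amdahl's law applied to frame time: faster RT cores only speed up the share of the frame actually spent on RT work. This is purely an illustrative model, not anything measured:

Code:
# Toy Amdahl's-law model: faster RT cores only speed up the fraction
# of frame time actually spent on RT work.
def frame_speedup(rt_fraction, rt_speedup):
    return 1 / ((1 - rt_fraction) + rt_fraction / rt_speedup)

print(frame_speedup(0.2, 2.0))  # light RT effects: ~1.11x overall
print(frame_speedup(0.9, 2.0))  # near-full path tracing: ~1.82x overall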
 
which games?

I'm buying in, as I feel RT is usable this gen, at least at 1440p.

https://www.eurogamer.net/articles/digitalfoundry-2020-nvidia-geforce-rtx-3080-review?page=6

Code:
Control: 1440p, DX12, High, High RT, TAA (avg fps)

   2080     35
   2080Ti   46
   3080     70

Metro Exodus: 1440p, DX12, Ultra, Ultra RT, TAA (avg fps)

   2080     51
   2080Ti   66
   3080     90

Battlefield 5: 1440p, DX12, Ultra, Ultra RT, TAA (avg fps)

   2080     70
   2080Ti   92
   3080    115

Quake 2 RTX: 1440p, Vulkan, Max Settings (avg fps)

   2080     31
   2080Ti   41
   3080     62


Battlefield 5 is an odd one, as they drastically cut the amount of RT in the game after launch.
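Working out the relative uplifts from those quoted numbers makes the pattern clearer (the grouping into "RT-heavy" vs "lighter RT" is my reading, not Eurogamer's):

Code:
# 3080 uplift over the 2080 Ti, from the Eurogamer figures above
results = {
    "Control":       (46, 70),
    "Metro Exodus":  (66, 90),
    "Battlefield 5": (92, 115),
    "Quake 2 RTX":   (41, 62),
}
for game, (fps_2080_ti, fps_3080) in results.items():
    print(f"{game}: {fps_3080 / fps_2080_ti:.2f}x")

# Control ~1.52x and Quake 2 RTX ~1.51x (RT-heavy) versus
# Metro Exodus ~1.36x and Battlefield 5 ~1.25x (lighter RT use).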
 
The biggest issue with this discussion is we don't know what we don't know. I will bet that whatever the "leakers" are saying is wrong, and there will be a surprise or two.

As for RT, it's definitely the future, but it still looks a generation or two away to me. Not 5 minutes ago the big arguments in here were over a few FPS difference when we were already well over 100 FPS. Now, all of a sudden, less than 60 FPS, which was previously "unplayable", is acceptable because "shiny".

The same goes for vendor-specific technology. I'm not buying on the basis that something might be implemented if developers decide to, and could be of variable success. That has pretty much failed every time: SLI, Crossfire, PhysX... currently ray tracing and DLSS seem very sparse in their support. Hopefully this will improve over time.
 
The biggest issue with this discussion is we don't know what we don't know. [...]

Also, I'm not fussed with RT or DLSS-style gimmicks. I just want solid 1440p+ performance. :) (Which, at the moment, the 3080 is king of.)
 
We know that doubling the shader count alone did not double the performance of the RTX 3080 vs the RTX 2080 Ti. [...]

So Ampere has shown that current games don't scale well past ~5,000 cores; I wonder how that will affect Big Navi.

Different architectures. Just because something doesn't scale well on Ampere doesn't mean the same will be true for Navi.

I've been watching NerdTechGasm's video on the Ampere architecture. It seems that Nvidia is going the way of GCN with a more compute-focused card. That is why you see a greater speed-up on compute tasks with Ampere than on gaming tasks.

It is interesting how their paths cross: Nvidia going for a more compute-focused card and AMD going for a more gaming-focused card.
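The compute-vs-gaming split follows from how the Ampere SM is laid out: per partition, one datapath is FP32-only while the other issues either FP32 or INT32. A loose sketch of what that means for effective FP32 throughput (the 0.6 INT share is an illustrative number, not a measurement):

Code:
# Loose model of an Ampere SM partition: datapath A is FP32-only,
# datapath B issues either FP32 or INT32 each clock.
def effective_fp32_per_clock(int_share_of_b):
    # int_share_of_b: fraction of datapath B's cycles spent on INT32
    return 1.0 + (1.0 - int_share_of_b)

print(effective_fp32_per_clock(0.0))  # 2.0 -> pure-FP32 compute workloads
print(effective_fp32_per_clock(0.6))  # 1.4 -> game-like shader mix (illustrative)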
 
Different architectures. Just because something doesn't scale well on Ampere doesn't mean the same will be true for Navi. [...]

I saw this video where a guy compares his XC3 3080 to his 2080 Ti.

He found that Horizon Zero Dawn doesn't run well on Ampere, so his 2080 Ti is 20% faster in that game. It seems to support the theory that games need to be optimised for the architecture.

 