AMD Navi 23 ‘NVIDIA Killer’ GPU Rumored to Support Hardware Ray Tracing, Coming Next Year

Soldato
Joined
26 Sep 2010
Posts
7,157
Location
Stoke-on-Trent
That would make Navi X2 48% faster than a 2080TI
With only 40 CUs, I might add, since you're using the 5700 XT as the reference point. And since other posts here have illustrated that RDNA 1 scaled pretty much linearly, 80 RDNA CUs is potentially eye-watering. No wonder Nvidia are rumoured to be a bit worried.

Can you imagine a full-on, balls-out, e-peen halo card? 80 CUs, HBM2 (32 GB, because why not) and let the board eat 400 W if it needs to. It's 2 grand, because why not, but it's the performance-crown competitor, not a product for the masses.

Ouch.
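
As a rough illustration of why 80 CUs raises eyebrows, here's a quick Python sketch. The 2080 Ti baseline (~25% faster than the 40 CU 5700 XT) and the perfectly linear scaling are assumptions for illustration, not benchmarks:

```python
# Linear CU-scaling guesstimate. Assumptions (not benchmarks): a 2080 Ti
# is ~25% faster than the 40 CU 5700 XT, and RDNA performance scales
# linearly with CU count, as earlier posts suggest for RDNA 1.
BASELINE_TI_VS_5700XT = 1.25

def speedup_vs_2080ti(cus, ref_cus=40):
    """Hypothetical fractional speedup over a 2080 Ti at `cus` CUs."""
    return cus / ref_cus / BASELINE_TI_VS_5700XT - 1

for cus in (40, 72, 80):
    print(f"{cus} CUs -> {speedup_vs_2080ti(cus):+.0%} vs 2080 Ti")
# 40 CUs -> -20%, 72 CUs -> +44%, 80 CUs -> +60%: the same ballpark
# as the figures being thrown around in this thread.
```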
 
Caporegime
Joined
17 Mar 2012
Posts
47,668
Location
ARC-L1, Stanton System
With only 40 CUs, I might add, since you're using the 5700 XT as the reference point. And since other posts here have illustrated that RDNA 1 scaled pretty much linearly, 80 RDNA CUs is potentially eye-watering. No wonder Nvidia are rumoured to be a bit worried.

Can you imagine a full-on, balls-out, e-peen halo card? 80 CUs, HBM2 (32 GB, because why not) and let the board eat 400 W if it needs to. It's 2 grand, because why not, but it's the performance-crown competitor, not a product for the masses.

Ouch.

80 CUs would make it nearly +60% over a 2080 Ti.

Ignoring 80 CUs, a 72 CU part at +48% might not seem like a lot to some people, or even enough, but that's what Nvidia also need to gain just to match this rumoured Navi X2, and it's getting to the point where that's not easy. The 2080 Ti has 4352 shaders; add 50% to that and you have 6528 shaders. To beat this rumoured Navi X2 by as little as 10%, Nvidia will need around 7200 shaders, and that's assuming the whole thing scales 1:1, which shaders never do.

I've seen people throw "+80% over 2080 Ti" suggestions out there. No..... no ####'### way, you're looking at a 9000+ shader GPU to do that.
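
For anyone who wants to sanity-check that arithmetic, here's a quick Python sketch. The only hard number in it is the 2080 Ti's 4352 shaders; the scaling-efficiency figure is an illustrative assumption, not a measurement:

```python
# Shader-count sanity check. Known: the 2080 Ti has 4352 shaders.
TI_SHADERS = 4352

def shaders_needed(target_speedup, efficiency=1.0):
    """Shaders needed for a given fractional speedup over the 2080 Ti,
    assuming each extra shader contributes at `efficiency` (1.0 = perfect
    1:1 scaling, which shaders never achieve in practice)."""
    return TI_SHADERS * (1 + target_speedup / efficiency)

print(round(shaders_needed(0.50)))                   # 6528, the +50% figure
# Beating a +48% Navi X2 by 10% (~+63% overall) at ~95% efficiency:
print(round(shaders_needed(1.48 * 1.10 - 1, 0.95)))  # ~7200
```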
 

TNA

Caporegime
Joined
13 Mar 2008
Posts
27,588
Location
Greater London
I really hope AMD take the crown; it would be surprising and refreshing. It has been a long time. Jensen will then start to bang on about how they have DLSS, which means they have much better performance... And of course most will buy it and parrot this... :p:D


80 CUs would make it nearly +60% over a 2080 Ti.

Ignoring 80 CUs, a 72 CU part at +48% might not seem like a lot to some people, or even enough, but that's what Nvidia also need to gain just to match this rumoured Navi X2, and it's getting to the point where that's not easy. The 2080 Ti has 4352 shaders; add 50% to that and you have 6528 shaders. To beat this rumoured Navi X2 by as little as 10%, Nvidia will need around 7200 shaders, and that's assuming the whole thing scales 1:1, which shaders never do.

I've seen people throw "+80% over 2080 Ti" suggestions out there. No..... no ####'### way, you're looking at a 9000+ shader GPU to do that.

Calm down humbug, every time you seem to get excited and involved we end up with a dud! :p:D
 
Caporegime
Joined
17 Mar 2012
Posts
47,668
Location
ARC-L1, Stanton System
I really hope AMD take the crown; it would be surprising and refreshing. It has been a long time. Jensen will then start to bang on about how they have DLSS, which means they have much better performance... And of course most will buy it and parrot this... :p:D

AMD have their own DLSS and it doesn't require developers to enable it. ;)
 
Soldato
Joined
8 Jun 2018
Posts
2,827
AMD have their own DLSS and it doesn't require developers to enable it. ;)
Now don't you go talking bad about DLSS, with its Artificial Intelligence being able to add pixels to assets that the developer forgot to detail. We simply need to ignore the fact that it has to be added per game. :p

If I just played the game without zooming in on objects that are 30 miles down the road, I wouldn't have known that FidelityFX CAS just didn't do it. /s
:D
 
Caporegime
Joined
17 Mar 2012
Posts
47,668
Location
ARC-L1, Stanton System
Now don't you go talking bad about DLSS, with its Artificial Intelligence being able to add pixels to assets that the developer forgot to detail. We simply need to ignore the fact that it has to be added per game. :p

If I just played the game without zooming in on objects that are 30 miles down the road, I wouldn't have known that FidelityFX CAS just didn't do it. /s
:D

:D

It's so typical of Nvidia to come up with an expensive, over-the-top solution to something simple. $$$$$$$$$$$
 
Soldato
Joined
14 Aug 2009
Posts
2,793
In current games the bottleneck is actually the other way around: the RT cores are waiting on the rasterization hardware.

Then why does performance drop like a rock when RT is on, if the RT hardware isn't too weak? If raster were the slower part, then enabling RT GI, RT shadows and RT reflections should make performance go up, since all of those would be handled by the (allegedly) faster RT cores. Game developers would use them for sure, as it would give much better performance.
 
Associate
Joined
26 Mar 2016
Posts
150
Then why does performance drop like a rock when RT is on, if the RT hardware isn't too weak? If raster were the slower part, then enabling RT GI, RT shadows and RT reflections should make performance go up, since all of those would be handled by the (allegedly) faster RT cores. Game developers would use them for sure, as it would give much better performance.

Raytracing is only one part of raytracing (this sentence sounds strange, but it's really that way, so I'll use the term "tracing rays"). Raytracing splits into three parts: 1. tracing rays, 2. shading, 3. denoising. Turing accelerates part 1, but raytracing is also extremely shading-intensive. It's actually the shading that is the slow part and bottlenecks Turing the most. Shading is done on the CUDA cores, and you can't accelerate it separately. That's why you can forget all dreams of magic 3x or 4x leaps in RT performance. It won't happen, because it's not possible unless your shading speed increases by 3x or 4x too. Separate RT cards, multiplying the RT cores, etc. all make no sense, because shading is the limit.

So what helps most? It's actually software. Engines at the moment are not optimized for RT; with consoles getting hardware RT, this will change. Engines optimized for RT will help much more than any number of extra RT cores.
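
To put "shading is the limit" in numbers, here's a toy Amdahl's-law sketch. The 40/50/10 frame-time split between tracing, shading and denoising is purely illustrative, not measured from any real game:

```python
# Toy Amdahl's-law model of a hybrid ray-traced frame. The 40/50/10
# split between tracing rays, shading and denoising is illustrative only.
def overall_speedup(trace_speedup, trace=0.40, shade=0.50, denoise=0.10):
    """Whole-frame speedup when only the tracing stage gets faster."""
    return 1 / (trace / trace_speedup + shade + denoise)

for s in (2, 4, 8):
    print(f"{s}x faster tracing -> {overall_speedup(s):.2f}x overall")
# 2x -> 1.25x, 4x -> 1.43x, 8x -> 1.54x: even huge RT-core gains are
# capped while shading and denoising still run on the ordinary shaders.
```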
 
Associate
Joined
13 Jul 2020
Posts
500
I'm not sure how FidelityFX equals DLSS. Can you guys explain it to me, please? I'm serious, I'm not seeing it. Maybe you could point it out.

Yes, it's 1080p, but hey. Also don't forget these are screenshots; in motion the shimmering is even worse. Be sure to include some snide reply making fun of "AI" as well :)

FidelityFX vs DLSS.

[two comparison screenshots]
 
Associate
Joined
13 Jul 2020
Posts
500
More like 4K 30 fps with ray tracing enabled. Or 1080p 120 fps.

Why would they even bother with 8K? How many people even have 8K atm? Let's be real. Don't forget you also need 4 times the power to render 8K compared to 4K.
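
For reference, the 4x figure comes straight from the pixel counts of the standard UHD resolutions:

```python
# 8K UHD has exactly four times the pixels of 4K UHD.
pixels_4k = 3840 * 2160       #  8,294,400
pixels_8k = 7680 * 4320       # 33,177,600
print(pixels_8k / pixels_4k)  # 4.0
```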
 