AMD Navi 23 ‘NVIDIA Killer’ GPU Rumored to Support Hardware Ray Tracing, Coming Next Year

That's about the weirdest reasoning you could come up with.
Why? Because it makes sense? Because it's logical? Because it is an attempt to actually discuss matters empirically?

Tell me then, if an upcoming architecture is billed as being twice as performant as its predecessor (which is a known quantity), how would YOU attempt to determine its capabilities?
 
Why such denial about performance?
It's not that unrealistic.

For its ludicrous price, the 2080 Ti is only 35-40% faster than the 5700 XT.
And the next Xbox's GPU has 30% more compute units than the 5700 XT.
Add an architectural overhaul and Scarlett could well be in the same ballpark in general performance (see the rough sketch after this post).
Also, Turing's RT performance isn't exactly stellar, with a notable-to-huge performance penalty depending on how much RT load the game carries.
Hence high-fps 4K RT being a pipe dream in heavy games...

Now why should RDNA2's RT performance automatically be worse?
And if Nvidia can improve it as rumoured, why couldn't AMD do the same?
If anything, Turing's RT penalty should tell us it's time to consider different implementations!
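As a rough back-of-the-envelope sketch of that ballpark claim (the 35-40% and 30% figures are the ones quoted above; the architectural uplift factor is purely an assumption for illustration):

```python
# Back-of-the-envelope relative performance, normalised to the 5700 XT = 1.0.
# The 2080 Ti and Xbox figures come from the post above; the architectural
# uplift factor is a hypothetical assumption, not a measured number.
rx_5700_xt = 1.00
rtx_2080_ti = 1.375          # ~35-40% faster than the 5700 XT
xbox_cu_scaling = 1.30       # ~30% more compute units than Navi 10
rdna2_arch_uplift = 1.10     # assumed gain from the architectural overhaul

scarlett_estimate = rx_5700_xt * xbox_cu_scaling * rdna2_arch_uplift
print(f"Scarlett estimate vs 5700 XT: {scarlett_estimate:.2f}x")  # ~1.43x
print(f"2080 Ti vs 5700 XT:           {rtx_2080_ti:.2f}x")        # ~1.38x
# Same ballpark, assuming compute units scale roughly linearly and clocks hold.
```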


Do you really think a console which is less powerful than the 2080 Ti will be faster than said card in Ray Tracing?

I also never said RDNA 2 would be worse. In fact, if Big Navi isn't faster than the 2080 Ti in Ray Tracing, then you have to ask yourself what AMD have been doing for the last 2 years.

Why do you think the Ray Tracing penalty for AMD's method will be less than Turing's? AMD's solution is a hybrid approach that involves using GPU resources normally used for Rasterization for Ray Tracing as well, whereas Nvidia has dedicated Ray Tracing hardware.

Do you also forget that AMD is already on 7nm? There is no node shrink for them this time around; you are comparing a 12nm card to a 7nm card. Nvidia has the advantage this time as they have both a new architecture and a die shrink. That's why I think their Ray Tracing method will be ahead of AMD's.

The penalty isn't because of Nvidia's solution; the penalty is because Ray Tracing is a heavy computational workload. Didn't you see that console game developer who has already stated that in their game you are going to have to decide between high frame rates and Ray Tracing?

I am not in denial about performance, I am just trying to be realistic in my expectations. If a console manages to be as fast as the 2080 Super in RT performance, I will be impressed.

If the performance is way better and I am wrong then I will be delighted.
 
The console hardware should come out of the gate strong. It was, what, 2013 for the last ones? And like you mention @melmac, it's just under two years since that card came out. It will then be another year until the devs squeeze the best out of the consoles, but even if they were the same as a 2080 Ti on paper, they should be clearing it in games thanks to the ringfenced environment they get to work in.
 
Once developers really wring the performance out of these new consoles, it's going to be really hard to notice the difference in image quality between console and PC.

They would have to abandon their magnifying glasses for 100x microscopes and settle for RT differences. :D
 
Once developers really wring the performance out of these new consoles, it's going to be really hard to notice the difference in image quality between console and PC.

They would have to abandon their magnifying glasses for 100x microscopes and settle for RT differences. :D

Graphics isn't everything; frame rate for me is far more important than looks, and PC will still be the platform of choice for that.

Next gen consoles will still be pushing 30fps in highly demanding games, just with greater resolution and graphics settings than last gen.

Half of console gamers don't know the difference anyway.

I've been playing The Last of Us 2 and you can see 30fps from a mile away, and I'm afraid next gen will just look nicer graphics-wise than last gen.

There is no chance I can see a high end game running at 4K 60fps with settings that match the PC version.

I wait to be proven wrong.
 
Graphics isn't everything; frame rate for me is far more important than looks, and PC will still be the platform of choice for that.
I will save this here and remind you of this quote the next time you mention that AMD's image quality is better than Nvidia's and that it is one of the reasons you prefer AMD cards :p:D;)
 
Graphics isn't everything; frame rate for me is far more important than looks, and PC will still be the platform of choice for that.

Next gen consoles will still be pushing 30fps in highly demanding games, just with greater resolution and graphics settings than last gen.

Half of console gamers don't know the difference anyway.

I've been playing The Last of Us 2 and you can see 30fps from a mile away, and I'm afraid next gen will just look nicer graphics-wise than last gen.

There is no chance I can see a high end game running at 4K 60fps with settings that match the PC version.

I wait to be proven wrong.
The 64-page RTX/DLSS thread would disagree with you. But yes, I do get it, let's move the goalposts. :D

Anyone can use 1080p if it is a concern... even consoles :eek:. For the elusive 4K gaming it's not much of an issue, as that's not the norm on PCs yet.

I too believe that the IQ battle will have been lost by then. All that is left are the frames. May the best OC win, amirite... Oh wait, I almost forgot: developers create these games targeting a certain fps, usually 60fps below 4K, so anything beyond that doesn't make the game any smoother. Yikes... :(
 
Isn't the sweet spot most are after:

1440p
144 FPS+
Circa £600
and, in my case, 'Quiet!'

I couldn't give a monkey's how much energy it uses, and it could be ugly as hell and I'd still love it.
Mine is 4K, 80fps, circa £600 and hopefully quiet, as I can barely hear myself think when my PC is on. Obviously the sound of gunfire drowning out the racket means it all makes sense. I would also like it not to cost an extra Netflix monthly subscription per week in power, tbh, but anything is better than 295x2 quadfire was!
 
I will save this here and remind you of this quote the next time you mention that AMD's image quality is better than Nvidia's and that it is one of the reasons you prefer AMD cards :p:D;)

Ay? I have never said this. Sure, I have posted videos of other people claiming this, but I have always stood by the view that AMD vs Nvidia comes down to the default output settings of the drivers rather than an actual difference.
 
Ay? I have never said this. Sure, I have posted videos of other people claiming this, but I have always stood by the view that AMD vs Nvidia comes down to the default output settings of the drivers rather than an actual difference.
Fair enough, must have you confused with someone else then ;):D
 
In fact, if Big Navi isn't faster than the 2080 Ti in Ray Tracing, then you have to ask yourself what AMD have been doing for the last 2 years.

I hope they have been working on rasterization performance. Nvidia did next to nothing on that front at the various price points last time; they offered 1080 Ti performance for 1080 Ti money.
 
AMD's solution is a hybrid approach that involves using GPU resources normally used for Rasterization for Ray Tracing as well, whereas Nvidia has dedicated Ray Tracing hardware.

Do you also forget that AMD is already on 7nm? There is no node shrink for them this time around; you are comparing a 12nm card to a 7nm card.
Except that "dedicated hardware" isn't doing such great performance with performance penalty from notable to half.
Pretty sure in many cases lots of those traditional job doing transistors are sitting on their fingers doing nothing usefull.
So why couldn't hybrid solution of only some extra hardware be overall more optimal for getting work out from as much of resources as possible?
(though of course we won't be seeing any high refresh rate 4K RT gaming even on PC side)

And you're forgetting that 2080 Ti's chip is tripple the size of Navi 10 and with toward twice the transistors.
No manufacturing node step can compensate for such lack of hardware resources in comparison.
With architecture probably also being interim solution developed with resources left over from Zen uarch and "console project".
Navi 10 was mostly developed at time when AMD was really tight on resources.
Designs are basically finalized like year before we see product in shops.
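For scale, a quick sanity check on that size claim using the commonly quoted public figures (TU102 around 754 mm² and ~18.6 billion transistors; Navi 10 around 251 mm² and ~10.3 billion); treat the exact numbers as approximate:

```python
# Rough die-size and transistor-count comparison between the 2080 Ti's TU102
# and Navi 10 (5700 XT); the figures are the widely quoted public numbers.
tu102_area_mm2, tu102_transistors_bn = 754, 18.6
navi10_area_mm2, navi10_transistors_bn = 251, 10.3

print(f"Die area ratio:   {tu102_area_mm2 / navi10_area_mm2:.1f}x")              # ~3.0x
print(f"Transistor ratio: {tu102_transistors_bn / navi10_transistors_bn:.1f}x")  # ~1.8x
```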
 
Except that "dedicated hardware" isn't doing such great performance with performance penalty from notable to half.
Pretty sure in many cases lots of those traditional job doing transistors are sitting on their fingers doing nothing usefull.
So why couldn't hybrid solution of only some extra hardware be overall more optimal for getting work out from as much of resources as possible?
(though of course we won't be seeing any high refresh rate 4K RT gaming even on PC side)

And you're forgetting that 2080 Ti's chip is tripple the size of Navi 10 and with toward twice the transistors.
No manufacturing node step can compensate for such lack of hardware resources in comparison.
With architecture probably also being interim solution developed with resources left over from Zen uarch and "console project".
Navi 10 was mostly developed at time when AMD was really tight on resources.
Designs are basically finalized like year before we see product in shops.
I remember after Vega they talked about leapfrogging design teams and that Navi 2x was started way back then, so even 3x is being worked on and has been since Vega, and even more so since Navi 1 came out.
 
Once developers really wring the performance out of these new consoles, it's going to be really hard to notice the difference in image quality between console and PC.

They would have to abandon their magnifying glasses for 100x microscopes and settle for RT differences. :D

You're very optimistic, but at least even you agree that RT will be their "doom".
 
Except that "dedicated hardware" isn't doing such great performance with performance penalty from notable to half.
Pretty sure in many cases lots of those traditional job doing transistors are sitting on their fingers doing nothing usefull.
So why couldn't hybrid solution of only some extra hardware be overall more optimal for getting work out from as much of resources as possible?
(though of course we won't be seeing any high refresh rate 4K RT gaming even on PC side)
Just to add, I think going with a hybrid solution would help with power consumption and heat management. Idle transistors are still sipping power and generating heat. I'm not entirely sure how ramping down power works, but it may not even be possible to ramp them down to idle power consumption while other cores are working, only to a slightly lower state than normal.

From what I've seen with Ryzen, if one core is working the other cores are taken out of their sleep state even if they are just idle.
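For anyone curious, a minimal sketch along these lines (assuming a Linux box exposing the standard cpuidle sysfs interface) shows how much time each core has actually spent in its idle states, which is one way to check whether supposedly idle cores really are sleeping:

```python
# Minimal sketch: print per-CPU cumulative idle-state residency from the Linux
# cpuidle sysfs interface (/sys/devices/system/cpu/cpuN/cpuidle/stateM/).
import glob
import os

for cpu_dir in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*")):
    cpu = os.path.basename(cpu_dir)
    for state_dir in sorted(glob.glob(os.path.join(cpu_dir, "cpuidle", "state*"))):
        with open(os.path.join(state_dir, "name")) as f:
            name = f.read().strip()
        with open(os.path.join(state_dir, "time")) as f:
            usec = int(f.read().strip())  # cumulative time in this state, in microseconds
        print(f"{cpu} {name}: {usec / 1e6:.1f} s")
```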
 
The 64-page RTX/DLSS thread would disagree with you. But yes, I do get it, let's move the goalposts. :D

Anyone can use 1080p if it is a concern... even consoles :eek:. For the elusive 4K gaming it's not much of an issue, as that's not the norm on PCs yet.

I too believe that the IQ battle will have been lost by then. All that is left are the frames. May the best OC win, amirite... Oh wait, I almost forgot: developers create these games targeting a certain fps, usually 60fps below 4K, so anything beyond that doesn't make the game any smoother. Yikes...

Not everyone is like Bethesda and not every engine is like the Creation Engine :P
 