AMD Navi 23 ‘NVIDIA Killer’ GPU Rumored to Support Hardware Ray Tracing, Coming Next Year

What that means is they're not going to allow DLSS on a 2000-series Turing card to be on par with Ampere. They would never sell Ampere, lol.
You can take this further to a more general point: why would Nvidia make a technology that lets card X perform as if it were a tier or two higher? DLSS isn't really a feature that ramps up frame rates as if you'd spent twice as much money; it's a technology that lets Nvidia skimp on the silicon to begin with. Would you get the same performance if Nvidia replaced the Tensor cores with more stream processors?

And also don't forget that the silicon DLSS requires is allocated entirely in reverse to need: it's the lower-tier cards that benefit most from DLSS, yet they're the ones without the Tensor cores to support it.

I fully agree with the sentiments above that it doesn't matter how the image is produced, as long as the image is good. To that end, the concept of AI-based image construction and upscaling is a good one, but right here and now it's ludicrous how much marketing gumph people are chowing down on, almost to the point of being apologists for it.
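
To put rough numbers on where the upscaling gain actually comes from (a back-of-the-envelope sketch; the internal render resolutions below are the commonly cited ones for DLSS's Quality and Performance modes, not official figures):

```python
# Shading cost scales roughly with the number of pixels rendered, so
# rendering internally at a lower resolution and upscaling to the
# target resolution is where most of the extra FPS comes from.

def megapixels(w, h):
    return w * h / 1e6

native_4k   = megapixels(3840, 2160)   # 4K target
quality     = megapixels(2560, 1440)   # ~67% per axis (assumed Quality mode)
performance = megapixels(1920, 1080)   # 50% per axis (assumed Performance mode)

print(f"4K native:   {native_4k:.1f} MPix")
print(f"Quality:     {quality:.1f} MPix  ({native_4k / quality:.2f}x fewer shaded)")
print(f"Performance: {performance:.1f} MPix  ({native_4k / performance:.2f}x fewer shaded)")
```

In other words, the headline FPS gain is mostly a function of shading fewer pixels - which is exactly the "skimping on silicon" argument above.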
 
It truly is a double standard. And to that point, the same will apply when consoles use ray tracing. It doesn't matter how ray tracing is used to produce the final image, as long as the image looks good. But we know the hypocrites will renege, microscopes in hand, lol.
 
Yep, have to agree. I don't care a jot how the final image is created; all I care about is the nearness of its approximation to the reality I see with my own eyes IRL. Criticising how an image is made seems pretty facile and pointless. The threat posed by DLSS to AMD is real whether we want to acknowledge it or not, and hopefully AMD are already on the case with a similar iteration of their own.

The problem is when it's sold as "better than native", not "we get 90% of the image quality of the best native implementation, and 30% extra FPS". It's the way that it is marketed which is the problem here.

They already did. When the PS4 Pro used checkerboard rendering, PC fans mocked consoles for needing upscaling. I even commented once that it would suddenly be fine when PC got it.

Lo and behold, PC got it, and upscaling is the bestest technology ever.
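
For anyone who never looked at what checkerboard rendering actually does, here's a minimal sketch of the core idea (illustrative only - the PS4 Pro's real pipeline also reconstructs the missing half from the previous frame and neighbouring samples, which isn't shown here):

```python
import numpy as np

# Checkerboard rendering in a nutshell: each frame shades only half
# the pixels, in a 2x2-quad checkerboard pattern that alternates
# every frame; the missing half is reconstructed afterwards.

def checkerboard_mask(h, w, frame):
    ys, xs = np.indices((h, w))
    # Group pixels into 2x2 quads; shade quads whose parity matches
    # this frame's phase.
    return ((ys // 2 + xs // 2) + frame) % 2 == 0

h, w = 8, 8
m0 = checkerboard_mask(h, w, frame=0)
m1 = checkerboard_mask(h, w, frame=1)

print(f"pixels shaded per frame: {m0.mean():.0%}")        # 50%
print(f"union over two frames:   {(m0 | m1).mean():.0%}")  # 100%
```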

It almost reminds me of whenever Apple releases anything - it's marketed as the greatest thing since sliced bread. Yet games don't seem to be actually getting any better as a whole.

Exactly this - DLSS lets Nvidia skimp on silicon, and that's also why they want to overhype it. It means people will accept lower IQ, and Nvidia (and AMD will jump on board too) can sell smaller and smaller chips for more money.

ATI and Nvidia tried fiddling with IQ in hard-to-detect ways 10+ years ago, and review sites called them out for it, so now, over 10 years later, they are trying again.
 
+1. It's like if you say anything which isn't championing it, you're identified as an imbecile.
 
"Better than native" is clearly nonsense, and Nvidia simply can't be trusted to be honest or straightforward in anything they do. I can perceive a genuine benefit to the technology, although whether it gains widespread traction/adoption remains to be seen. I certainly wouldn't believe any vague Nvidia promises about future adoption or implementation, as they are incapable of anything approaching honest transparency.
 
I'm still inclined to wait for actual official figures before I believe anything about the upcoming releases from both sides.

My head is spinning from all the conflicting rumours that have been appearing on an almost daily basis.
You need to get your head together. There's potentially months of this :D
 
My main concern is not whether AMD can break the 64 CU "limit", but whether the scaling looks any good, and whether their drivers will again become a bottleneck.

The scaling is something I'm concerned about as well.

I think 80 CUs is very credible and probably correct.

I am sceptical about the +50% performance-per-watt increase of RDNA 2 over RDNA that's mentioned on here a lot, which is taken to mean twice (or more) the performance of the 5700 XT.

I've also seen the potential increase referred to as "up to +50% performance per watt" and "aiming for +50% performance per watt", which could end up vastly different from a flat 50%.
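
It's worth separating those two readings, because +50% performance per watt does not by itself mean double the 5700 XT. Performance = perf-per-watt x power, so a quick sanity check (assuming the 5700 XT's 225 W board power as the baseline):

```python
# "+50% perf/W" only doubles performance if power also rises ~33%.

baseline_power = 225.0     # 5700 XT board power, watts
perf_per_watt_gain = 1.5   # the claimed RDNA 2 uplift

def power_needed(perf_target):
    """Power required to hit a given performance multiple of the 5700 XT."""
    return baseline_power * perf_target / perf_per_watt_gain

for target in (1.5, 1.75, 2.0):
    print(f"{target:.2f}x 5700 XT -> ~{power_needed(target):.0f} W")
```

So a true 2x card at the claimed efficiency lands around 300 W - plausible, but not automatic; and if the real figure is only "up to +50%", the required power just climbs further.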

Has AMD officially confirmed that they've achieved "+50% performance per watt"?

I only got one response :(

No, I think it's still "up to".

Thanks Th0nt :D
 
They already did. When the PS4 Pro used checkerboard rendering, PC fans mocked consoles for needing upscaling. I even commented once that it would suddenly be fine when PC got it.

Lo and behold, PC got it, and upscaling is the bestest technology ever.
The tragic comedy of this is that the games they're referencing are all console games ported to PC. Oh the irony...
 
The scaling is something I'm concerned about as well.

We will need to wait and see - the proof of the pudding is in the eating!
The irony is not lost on me!

Yeah, like the bombshell Coreteks just dropped saying Big Navi is only 15% faster than the 2080 Ti and might only match the 3080 :mad:

https://coreteks.tech/articles/index.php/2020/07/29/big-navi-not-so-big/

Well, AMD is very conservative on die sizes; Nvidia isn't. Nvidia, through sheer grunt, was probably always going to win it - just look at the GTX 480. So what we need to see is how big the die is and, more importantly, how the cards are priced.


Well, at least Nvidia is launching before AMD, so AMD can at least revise their pricing and naming down a bit!! :P
 
This rumor is sponsored by "pinch of salt". Make sure you use your pinch today!!!

Current rumours are that RDNA 2 won't beat a 3090/3080 Ti, but that it will beat a 3080. No real confirmation of this as yet, and it's not clear which SKU was used for the comparison. It's assumed to be a 6900 XT, but there's no solid info on the identity of the card.
Yup, already posted the same thing as the Coreteks article.
The problem I have with the Coreteks article is that he is claiming just 15% over the 2080 Ti while calling it a competitor to the 3080, which doesn't make sense. The 3080 is rumoured to be 20% plus. If it's just 15%, he should have called it a competitor to a 3060/3070. So something's not adding up.

I have to question his sources among the AIBs. They've been known to give out false information to keep from being identified. And if I didn't know any better, I'd say his source sounds like someone from MSI.

It might be true, but for a lower-tier card. In the end, we will see once it's released.
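
For what it's worth, the arithmetic behind that objection (both inputs are rumours, with the 2080 Ti as the 1.0 baseline):

```python
# If Big Navi is 2080 Ti +15% and the 3080 is 2080 Ti +20% or more,
# the same leak can't also have it matching the 3080.
big_navi = 1.15   # rumoured: 2080 Ti +15%
rtx_3080 = 1.20   # rumoured: 2080 Ti +20% (low end of "20% plus")

print(f"Big Navi relative to 3080: {big_navi / rtx_3080:.1%}")  # ~95.8%
```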
 
Yeah, like the bombshell Coreteks just dropped saying Big Navi is only 15% faster than the 2080 Ti and might only match the 3080 :mad:

https://coreteks.tech/articles/index.php/2020/07/29/big-navi-not-so-big/

Navi 2X?

More like Navi 1.5X, lol.

I'm surprised we're getting leaks at this stage. Nvidia's Ampere cards are sampling and are on track to be in stores in September, while Big Navi doesn't look like it will be in stores till November/December. So now is not a good time for AMD to be leaking disappointing performance numbers: it could cost them sales when their competitor has a full two months of unfiltered access to the market before they launch, and if people decide from the leaks that Big Navi isn't worth it, they might not even wait for it to launch.
 
This wouldn't surprise me; AMD are the masters of snatching defeat from the jaws of victory.

Victory?
There are too many requirements for that: they'd need to be super aggressive with the die size, it would need to scale linearly, and it would need to deliver the promised 50% performance-per-watt increase over RDNA 1. That means Navi 21 would have to be 100% larger, 50% more power efficient, and clocked very high.

I don't believe that a Navi 21 with a 505 mm² die in its top configuration is only 15% faster than the RTX 2080 Ti.

It's either an engineering sample with much lower specs or clocks, or simply not Navi 21 but another chip.
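
Rough numbers on why that combination is hard to believe (using Navi 10's known 251 mm² / 40 CUs, and naively ignoring that the uncore doesn't scale with CU count):

```python
# Naive area scaling from Navi 10 (5700 XT) to the rumoured Navi 21.
navi10_area_mm2 = 251.0   # known Navi 10 die size
navi10_cus = 40
navi21_cus = 80

naive_area = navi10_area_mm2 * navi21_cus / navi10_cus
print(f"naive 80 CU die: ~{naive_area:.0f} mm^2 (rumour says 505 mm^2)")
# ~502 mm^2: the rumoured die really does look like a doubled Navi 10,
# which makes "+15% over a 2080 Ti" an odd result for the full chip.
```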
 
We first need to find out what die sizes are being used for the top chips from both companies. Will it be like the Fury X and GTX 980 Ti, when both had similar die sizes and the GTX 980 Ti was faster? Or is it going to be like the HD 5870 and GTX 480, when the latter had a much larger die and consumed far more power to get the extra performance?
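
For reference, the commonly quoted die sizes behind those two matchups (public figures, so treat as approximate):

```python
# Die sizes of the historical comparisons mentioned above.
die_mm2 = {
    "Fury X (Fiji)":      596,
    "GTX 980 Ti (GM200)": 601,   # similar size, GM200 was faster
    "HD 5870 (Cypress)":  334,
    "GTX 480 (GF100)":    529,   # much bigger die for the extra perf
}
for gpu, area in die_mm2.items():
    print(f"{gpu:>22}: {area} mm^2")
```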
 
Well, it's the second disappointing leak in a couple of days.

First, MI100 is only 13% faster than Ampere despite clocking much higher and having more cores, and then the 6900 XT, which has fewer cores than MI100, is only slightly faster than a two-year-old Nvidia gaming GPU.

When you get several leaks saying similar things, it's time to worry.
 