AMD Navi 23 ‘NVIDIA Killer’ GPU Rumored to Support Hardware Ray Tracing, Coming Next Year

Associate
Joined
21 Apr 2007
Posts
2,485
I think in fairness to Nvidia they've figured out that for graphically intensive resolutions and features, AI upscaling is a good way to keep costs practical. The problem is that they are charging for it as a premium feature as opposed to it simply being part of the low-to-mid range offerings, i.e. where it's needed the most, because continuously making bigger dies for higher IPC and using more/faster RAM starts to run into problems and you can't carry on like that.

I definitely agree though that Nvidia are penny-pinching in order to realise higher margins; they could and should give consumers a lot more hardware for the prices they are charging. Pleased to see more people waking up to this, even if they enjoy Nvidia products. There are still too few reviewers who point this out.
 
Soldato
Joined
26 Sep 2010
Posts
7,154
Location
Stoke-on-Trent
None of what you said actually makes sense.
It makes perfect sense. We're talking about a gaming product line here.
DLSS application goes far beyond RT...
But it doesn't on a gaming card. AI-interpolation of low-resolution images to boost frame rates? Or how about just boosting the performance of the cores so the target resolution and frame rate are native? DLSS would be good for cloud-streaming services, but it has no place on a discrete gaming GPU.
None of the AI-related applications are purely for gaming (or in-game monster AI) - but just because games don't require it now doesn't mean it couldn't be used to encourage more sophisticated use of AI in games.
None of the AI-related applications are for gaming, full stop.
...depending on context compression techniques can achieve ratios of throughput and storage unrealistic via hardware useful for things like streaming from large resource sets, etc.
Fully agree, but that's not what Nvidia is doing with it on the gaming cards, at least this time around. "It can make a 10GB card operate like a 16GB card" is a common thing I hear when Tensor-accelerated compression is discussed in a gaming context. Or maybe just put 16GB on the card in the first place? AMD still get slammed for the 4GB HBM on Fury because many said "oh it'll feel like 8GB" and it just didn't. Tensor-compressing 10GB to act like 16GB is identical ********.

So like I said, none of the Turing/Ampere "extras" have any place on a gaming product line. All of Turing's features were solutions looking to justify the obscene price jump, and now Nvidia have painted themselves into a corner with it.
 
Caporegime
Joined
17 Mar 2012
Posts
47,577
Location
ARC-L1, Stanton System
Big Navi will be fast, faster than a 2080 Ti, about 40% faster.

But Ampere will be even faster. Nvidia know RDNA2 is good, very good. But Nvidia never lose... they will pull out all the stops and just go mahoosive to beat AMD.


I think he's right. When you look at the performance of the new Xbox at 1.8GHz with 52 CUs, it's between a 2080 Super and a 2080 Ti, so we know the performance is there even on a relatively small, cut-down GPU, and when you look at the PS5 clocking its 36 CU GPU to over 2.2GHz we know the thing clocks... there is a benchmark in the wild where an unknown AMD engineering sample crushed a very highly clocked 2080 Ti by 30%.

But Nvidia will stop at nothing just to beat AMD, and they will with Ampere.

It doesn't matter; the important thing is AMD will bring truly great GPUs back to the fight.
 
Man of Honour
Joined
13 Oct 2006
Posts
91,053
It makes perfect sense. We're talking about a gaming product line here.

But it doesn't on a gaming card. AI-interpolation of low-resolution images to boost frame rates? Or how about just boosting the performance of the cores so the target resolution and frame rate are native? DLSS would be good for cloud-streaming services, but it has no place on a discrete gaming GPU.

None of the AI-related applications are for gaming, full stop.

Fully agree, but that's not what Nvidia is doing with it on the gaming cards, at least this time around. "It can make a 10GB card operate like a 16GB card" is a common thing I hear when Tensor-accelerated compression is discussed in a gaming context. Or maybe just put 16GB on the card in the first place? AMD still get slammed for the 4GB HBM on Fury because many said "oh it'll feel like 8GB" and it just didn't. Tensor-compressing 10GB to act like 16GB is identical ********.

So like I said, none of the Turing/Ampere "extras" have any place on a gaming product line. All of Turing's features were solutions looking to justify the obscene price jump, and now Nvidia have painted themselves into a corner with it.

None of what you are saying makes sense still - you are even confusing points you've made, seemingly thinking I made them. And the stuff you are accusing them of over compression is based on what other people are claiming rather than what nVidia are doing - amongst other features, the new compression systems will enable up to a 4x improvement in bandwidth, increased L2 bandwidth and the ability to fit more data into L2 and other caches, which can produce a much bigger performance boost.
 
Caporegime
Joined
17 Mar 2012
Posts
47,577
Location
ARC-L1, Stanton System
Have you ever driven down a road and noticed a clearing where a line of trees indicates a forest behind it?
Those few trees don't actually hide the forest unless you concentrate solely on them. So don't become distracted.

RDNA2 and RDNA3 (rumored to be around DDR5's debut in 2022) should provide some insight into AMD's direction. That Navi 23 chip is a peculiarity I've not heard more about as of yet. But I digress.
I don't believe, so far, that RDNA2 performance is the whole picture. What will also be of interest is the number of dies per wafer, and margins, vs Nvidia.



Die size: 251 mm² (5700 XT) vs 445 mm² (RTX 2070)

It takes Nvidia having a larger chip (the 2070) and specific game optimizations to actually be competitive. When they lost that competitive edge, they went with an even larger die size!


Die size: 545 mm² (RTX 2070 Super)

The 2070 Super was Nvidia's plan to counter the 5700 XT. Think about that for a minute. They simply throw money at it. This is, and has always been, their strategy... brute force.


However, that's not a winning strategy in the long term, as it's very similar to Intel's market strategy (because they too dominate the market). Cheerleading "the cause" won't change nor improve that market strategy. Turing has shown us that Nvidia reached an equilibrium between cost and price, and the consumer market isn't bearing it at all when compared to Pascal.




Some were not aware of the similarities in uarch between the 5700 XT and the 2070. The biggest difference, pardon the pun, is the die size. So all we needed to do was clock both at the same speeds and see the results.

But hold your horses there. Let's not forget that the 2070 is a much bigger chip: a whopping 194 mm² bigger, which you paid a premium for. As you can see, the 5700 XT is still more efficient even when averaging titles completely optimized for Nvidia. So in order for Nvidia to clearly beat AMD they had to use a chip that is 294 mm² larger!

Are the red flags flapping about yet? They should be. Now granted, this isn't on the same node, but that's not the point I'm making. The point is what "you" bought in the past 2 years. You paid more for Nvidia's old uarch/node to get a competitive card. That's the correlation. Furthermore, rumors have it that Nvidia is using their Titan Amperes to compete against Navi 2x. We shall see.

It also gives you a slight glimpse of what you can expect out of Navi 2x.

All good, just one thing: 5700, not 5700 XT. The comparison they used was the 5700 vs the RTX 2070; they both have 2304 shaders, while the 5700 XT has 2560. :)
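To put that perf-per-area argument in concrete terms, here's a back-of-envelope sketch in Python. The "rough tie" at matched clocks is the post's own claim, not a measured benchmark, and the card labels are just for readability:

```python
# Back-of-envelope perf-per-area using the die sizes quoted above.
# rel_perf = 1.0 for both encodes the clock-for-clock "rough tie"
# claimed in the post; it is an assumption, not a benchmark result.
cards = {
    "RX 5700 (Navi 10)": {"die_mm2": 251, "rel_perf": 1.0},
    "RTX 2070":          {"die_mm2": 445, "rel_perf": 1.0},
}

base = cards["RX 5700 (Navi 10)"]["rel_perf"] / cards["RX 5700 (Navi 10)"]["die_mm2"]
for name, c in cards.items():
    # performance per mm^2, normalised so the 5700 = 1.00x
    ratio = (c["rel_perf"] / c["die_mm2"]) / base
    print(f"{name}: {ratio:.2f}x the 5700's performance per mm^2")
```

On those assumptions the 2070 delivers only ~0.56x the performance per mm² of silicon, which is the whole "they had to go bigger to compete" point in one number.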
 
Caporegime
Joined
17 Mar 2012
Posts
47,577
Location
ARC-L1, Stanton System
@LePhuronn

If Nvidia can't beat AMD on IPC and a reasonable die size they will throw all caution to the wind. If it takes a 900 mm² 7nm die to beat AMD's 500 mm² 7nm die, that is exactly what they will do.

AMD are just not willing to go that big; they would only get 25 dies out of a $12,000 wafer... they would never do that. Nvidia absolutely will, just to win.
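For anyone who wants to sanity-check that wafer claim, here's a minimal sketch in Python using the standard gross-die approximation and a simple Poisson yield model. The 0.09 defects/cm² figure is my own assumed ballpark for a 7nm-class process, not something from the post, but with it a 900 mm² die does land at roughly 25 good dies per $12,000 wafer:

```python
import math

def gross_dies(die_area_mm2, wafer_diameter_mm=300):
    """Standard die-per-wafer approximation: wafer area over die area,
    minus an edge-loss term for partial dies at the rim."""
    d = wafer_diameter_mm
    return (math.pi * (d / 2) ** 2 / die_area_mm2
            - math.pi * d / math.sqrt(2 * die_area_mm2))

def good_dies(die_area_mm2, defect_density_per_cm2=0.09):
    """Apply a simple Poisson yield model: yield = exp(-D * A).
    D = 0.09 defects/cm^2 is an assumed figure, not an official one."""
    yield_frac = math.exp(-defect_density_per_cm2 * die_area_mm2 / 100)
    return gross_dies(die_area_mm2) * yield_frac

for area in (500, 900):
    cost_per_die = 12_000 / good_dies(area)  # $12,000 wafer, as above
    print(f"{area} mm^2 die: ~{gross_dies(area):.0f} gross, "
          f"~{good_dies(area):.0f} good, ~${cost_per_die:.0f} per good die")
```

Under those assumptions a 500 mm² die yields ~71 good dies (~$169 each) while a 900 mm² die yields ~25 (~$479 each), which is why going that big is such an expensive way to win.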
 
Associate
Joined
17 Sep 2018
Posts
1,431
Big Navi will be fast, faster than a 2080 Ti, about 40% faster.

But Ampere will be even faster. Nvidia know RDNA2 is good, very good. But Nvidia never lose... they will pull out all the stops and just go mahoosive to beat AMD.


I think he's right. When you look at the performance of the new Xbox at 1.8GHz with 52 CUs, it's between a 2080 Super and a 2080 Ti, so we know the performance is there even on a relatively small, cut-down GPU, and when you look at the PS5 clocking its 36 CU GPU to over 2.2GHz we know the thing clocks... there is a benchmark in the wild where an unknown AMD engineering sample crushed a very highly clocked 2080 Ti by 30%.

But Nvidia will stop at nothing just to beat AMD, and they will with Ampere.

It doesn't matter; the important thing is AMD will bring truly great GPUs back to the fight.

That's a very conservative 40% faster. This would make it a borderline draw based on this leak, which states the 3080 Ti is 40-50% faster, and 70% in some titles.


@LePhuronn

If Nvidia can't beat AMD on IPC and a reasonable die size they will throw all caution to the wind. If it takes a 900 mm² 7nm die to beat AMD's 500 mm² 7nm die, that is exactly what they will do.

AMD are just not willing to go that big; they would only get 25 dies out of a $12,000 wafer... they would never do that. Nvidia absolutely will, just to win.

Nvidia have a minimum profit margin of 39%, is it? They have a much higher minimum required profit margin than AMD. It's a promise to their shareholders.
 
Caporegime
Joined
17 Mar 2012
Posts
47,577
Location
ARC-L1, Stanton System
That's a very conservative 40% faster. This would make it a borderline draw based on this leak, which states the 3080 Ti is 40-50% faster, and 70% in some titles.




Nvidia have a minimum profit margin of 39%, is it? They have a much higher minimum required profit margin than AMD. It's a promise to their shareholders.

AMD's last quarter profit margin was 43%.

Gaming GPUs are not the only thing these people sell.
 
Caporegime
Joined
17 Mar 2012
Posts
47,577
Location
ARC-L1, Stanton System
Also, don't get too hooked on the fantastical things Nvidia are going to pull out of their behinds just because AMD might be competitive again.

The same sort of crap was doing the rounds just before AMD released its Zen architecture. 3+ years later AMD are pulling away from Intel and people just keep repeating "any minute now Intel will pull something fantastical out of their arse". In the next 6 months AMD are looking to be at around 30% higher IPC than Intel; they are already at 13% higher.
 
Soldato
Joined
26 Sep 2010
Posts
7,154
Location
Stoke-on-Trent
None of what you are saying makes sense still - you are even confusing points you've made, seemingly thinking I made them.
So I make a point, you counter the point, I counter your counter, and now suddenly I don't make sense? One last time: RTX is Turing's enterprise and research technologies forced onto the gaming sector, as nothing more than an excuse to raise prices astronomically and to nickel-and-dime the user with lacklustre specifications. Ampere is doubling down on the fallacy because Nvidia have painted themselves into a corner.

If you disagree then fine, but I fail to see how you do not understand.
 
Man of Honour
Joined
13 Oct 2006
Posts
91,053
So I make a point, you counter the point, I counter your counter, and now suddenly I don't make sense? One last time: RTX is Turing's enterprise and research technologies forced onto the gaming sector, as nothing more than an excuse to raise prices astronomically and to nickel-and-dime the user with lacklustre specifications. Ampere is doubling down on the fallacy because Nvidia have painted themselves into a corner.

If you disagree then fine, but I fail to see how you do not understand.

As I said in the first reply, most of your points don't make sense - the interpretation of the features you are using is incorrect. For instance:

"Tensor-powered DLSS would not be required if Nvidia didn't cheap out on the RT capabilities in the first place"

DLSS's usage and origins go far beyond RT applications, and if anything its application to RT was almost more of a happy accident than by design. Even with an abundance of hardware RT performance, DLSS still has application in reproducing a higher-quality image from a lower-quality input without the performance impact. (As before, I'm not a huge fan of DLSS, before anyone mistakes me for championing/defending it here.)

Same as the hyped-up compression on the new architecture - it isn't just, like AMD did, trying to eke out more VRAM storage from a lower amount - it is intended for a much wider application, to try to increase the hardware capabilities beyond what can easily be done by bulking up hardware, i.e. theoretically increasing the bandwidth to/from caches by more than double, which makes a huge difference to the performance impact related to residency.
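As a toy illustration of that bandwidth point (not Nvidia's actual scheme, which isn't detailed here): if data crosses a bus or cache boundary in compressed form, the effective bandwidth scales with the compression ratio. All the figures in this sketch are arbitrary assumptions:

```python
# Minimal sketch: moving compressed data transfers fewer bytes for the
# same payload, so effective bandwidth = raw bandwidth * ratio.
# The 760 GB/s figure is an assumed raw VRAM bandwidth, not a spec.
raw_bandwidth_gbs = 760.0

for ratio in (1.0, 2.0, 4.0):  # 4x is the "up to" claim quoted above
    effective = raw_bandwidth_gbs * ratio
    print(f"{ratio:.0f}:1 compression -> {effective:.0f} GB/s effective "
          f"({raw_bandwidth_gbs:.0f} GB/s physically moved)")
```

The same multiplier applies to cache capacity: data held compressed in L2 behaves like a proportionally larger cache for compressible workloads, which is where the residency benefit comes from.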
 
Soldato
Joined
10 Oct 2012
Posts
4,421
Location
Denmark
My thoughts are that RDNA 2 will be as fast as or faster than a 3080 Ti in raw rasterized hardware capability; however, I think AMD will still lose in a lot of cases due to things like Nvidia-sponsored titles. I think Cyberpunk will most likely be a big win for Nvidia, but I hope I'm wrong (unless Nvidia wins fair and square, then I don't care). I think we will see a lot of weird reviews with Nvidia requiring DLSS to be active vs whatever GPU from the red team. Again, I hope I'm wrong.
 
Man of Honour
Joined
13 Oct 2006
Posts
91,053
I think we will see a lot of weird reviews with Nvidia requiring DLSS to be active vs whatever GPU from the red team. Again, I hope I'm wrong.

I'm not a fan of this unless the image quality can be proven to be 1:1 100% of the time, and/or objectively superior, not just subjectively better.
 
Man of Honour
Joined
21 May 2012
Posts
31,940
Location
Dalek flagship
That's a very conservative 40% faster. This would make it a borderline draw based on this leak, which states the 3080 Ti is 40-50% faster, and 70% in some titles.




Nvidia have a minimum profit margin of 39%, is it? They have a much higher minimum required profit margin than AMD. It's a promise to their shareholders.


The video is all assumptions and contradictions, and no, the guy does not know everything about Ampere like he claims.
 
Soldato
Joined
26 Sep 2010
Posts
7,154
Location
Stoke-on-Trent
DLSS's usage and origins goes far beyond RT application and if anything its application to RT was almost more of a happy accident than design.
And I said in my reply "AI-interpolation of low-resolution images to boost frame rates", so although I may have said RT specifically to begin with, I expanded as part of my counterpoint. I know what DLSS is, I know Tensor cores are used for denoising of RT, and I am saying that none of these things would be necessary if Nvidia didn't cheap out to start with and royally take the ****.

And I know that there are significant benefits to be had with compression technology, and the major uplift Tensor-accelerated compression can give to data access and throughput, but none of these would be needed on a gaming card if Nvidia had bothered to give proper specs for the money they're charging to begin with.

I know there is a lot to be gained by what Turing and Ampere can do, but I am talking about gaming cards. I am not talking about broad-scope implementations of Turing and Ampere's architecture.
 
Man of Honour
Joined
13 Oct 2006
Posts
91,053
And I said in my reply "AI-interpolation of low-resolution images to boost frame rates", so although I may have said RT specifically to begin with, I expanded as part of my counterpoint. I know what DLSS is, I know Tensor cores are used for denoising of RT, and I am saying that none of these things would be necessary if Nvidia didn't cheap out to start with and royally take the ****.

And I know that there are significant benefits to be had with compression technology, and the major uplift Tensor-accelerated compression can give to data access and throughput, but none of these would be needed on a gaming card if Nvidia had bothered to give proper specs for the money they're charging to begin with.

I know there is a lot to be gained by what Turing and Ampere can do, but I am talking about gaming cards. I am not talking about broad-scope implementations of Turing and Ampere's architecture.

There is more application to gaming than you are implying, and in some cases it would be prohibitively expensive, either in cost or in other factors like heat, to bulk the hardware up to the same level as the benefits of these techniques.
 
Caporegime
Joined
17 Mar 2012
Posts
47,577
Location
ARC-L1, Stanton System
I can see the arguments already... "But but DLSS, you should be running it at 1080p and upscaling to 1440p because bar charts!!!!!"

DLSS requires the game developer to put it in the game, and it's not as if AMD don't have their own compression technologies to boost performance. They do, and no one is going to complain that the bar charts are fake because X or Y reviewer didn't use them.
 
Soldato
Joined
15 Oct 2019
Posts
11,687
Location
Uk
That's a very conservative 40% faster. This would make it a borderline draw based on this leak, which states the 3080 Ti is 40-50% faster, and 70% in some titles.

Probably means 40-70% more performance over Turing with ray tracing enabled, which was pretty poor on the 2000 series, so that would explain such a large jump.

Still, it would be better than the rubbish we got served up with Turing, which might as well have been Pascal re-released with ray tracing tacked on.
 
Caporegime
Joined
30 Jul 2013
Posts
28,887
Will AMD actually move away from their ridiculous old stock cooler with Big Navi?

I know board partners have good cooling, but it's so weird that AMD are still using the same reference design from years ago.
 