AMD Navi 23 ‘NVIDIA Killer’ GPU Rumored to Support Hardware Ray Tracing, Coming Next Year

Status
Not open for further replies.
Soldato
Joined
26 Sep 2010
Posts
7,157
Location
Stoke-on-Trent
@melmac
So you're suggesting that AMD actively chose to have 2 cards exist as their sole offering for an entire year? All that R&D money poured into RDNA 1 just for 2 measly products, and then chop it down to produce almost meaningless cards a year later? Ludicrous. I'm sorry, that's more illogical than suggesting RDNA 1 was planned for a full stack launch which was postponed/cancelled because of significant engineering issues.

And if you're so insistent on saying "there was no big card on RDNA 1 planned" then show me your proof, as you're so keen on me presenting support for something I have always claimed was a theory.
 
Soldato
Joined
26 Sep 2010
Posts
7,157
Location
Stoke-on-Trent
You want AMD to produce a GPU that's 80% faster than the 2080Ti? You really think that's possible?
Why do you think it's not possible?

We can debate until we're blue in the face whether RDNA 1 was ever going to exist beyond 40 CUs, but so much rumour and information about RDNA 2 points to an 80 CU model. The 2080 Ti is, what, 30% stronger than a 5700 XT with only 40 CUs? Double the CUs with RDNA 2's improvements and maybe slap some HBM on it and the 2080 Ti would get crushed, possibly even to the tune of 80%. So yes, it is possible, and if Ampere turns out to be as good as leaks suggest, AMD are going to need this level of jump to hang at the very top end.
 
Associate
Joined
16 Jan 2010
Posts
1,415
Location
Earth
This is the status quo, and you only have to view these forums to see it's not just the masses that think this way. Brand loyalty is a thing; tarnish that stuck years ago gets repeated by the faithful as justification ever after.

You can see it in parents/friends/family with cars: if someone has a bad experience with a model, it's stigmatised forever more by that story. There is no point in thinking that this demographic will be swayed - you just have to let them have a bad experience with the brand they are championing and it will reset.
Yep, they have to learn (if they still have the ability to learn) for themselves.
 
Soldato
Joined
26 Oct 2013
Posts
4,012
Location
Scotland
It has been stated that RDNA2 is around 50% better per watt than RDNA1 so that would mean a 5700XT replacement would be around the 140W region even on the same process.

If that is the case then a 300W part could potentially double the performance even with nothing but a linear increase in shaders. Add faster memory and potentially a better process node and I don't see why a large jump over a 2080Ti wouldn't be possible. Not going to assume and hype it up because loads of things could go wrong but it's conceivable.
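For what it's worth, the argument above can be sketched as a quick script. The 1.5x perf-per-watt figure is AMD's own marketing claim and the 225 W baseline is a rough 5700 XT board-power figure (the post's ~140 W number implies a slightly lower ~210 W baseline), so this is a back-of-envelope check, not a prediction:

```python
# Back-of-envelope check of the perf-per-watt argument above.
# Assumptions (thread figures, not official specs):
#   - 5700 XT board power ~225 W, normalised performance = 1.0
#   - RDNA2 delivers 1.5x performance per watt (AMD's claimed uplift)

RDNA1_POWER_W = 225.0
PERF_PER_WATT_UPLIFT = 1.5

# Same performance as a 5700 XT at 1.5x perf/watt -> lower power draw
iso_perf_power = RDNA1_POWER_W / PERF_PER_WATT_UPLIFT
print(f"5700 XT-class RDNA2 part: ~{iso_perf_power:.0f} W")  # ~150 W

# A 300 W RDNA2 part, assuming (optimistically) that performance scales
# linearly with power via added shaders rather than higher clocks
big_navi_perf = (300.0 / RDNA1_POWER_W) * PERF_PER_WATT_UPLIFT
print(f"300 W RDNA2 part: ~{big_navi_perf:.1f}x a 5700 XT")  # ~2.0x
```

As the reply below it notes, the linear-scaling assumption is the weak link: real GPUs lose efficiency as shader counts grow, so 2x is a ceiling, not an estimate.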
 
Soldato
Joined
6 Feb 2019
Posts
17,589
It has been stated that RDNA2 is around 50% better per watt than RDNA1 so that would mean a 5700XT replacement would be around the 140W region even on the same process.

If that is the case then a 300W part could potentially double the performance even with nothing but a linear increase in shaders. Add faster memory and potentially a better process node and I don't see why a large jump over a 2080Ti wouldn't be possible. Not going to assume and hype it up because loads of things could go wrong but it's conceivable.

Unfortunately, performance does not scale linearly with clocks or shaders. You can't just assume a 300 W GPU is twice as fast as a 150 W GPU.
 
Soldato
Joined
10 Oct 2012
Posts
4,421
Location
Denmark
This is the status quo, and you only have to view these forums to see it's not just the masses that think this way. Brand loyalty is a thing; tarnish that stuck years ago gets repeated by the faithful as justification ever after.

You can see it in parents/friends/family with cars: if someone has a bad experience with a model, it's stigmatised forever more by that story. There is no point in thinking that this demographic will be swayed - you just have to let them have a bad experience with the brand they are championing and it will reset.

Yeah, I experienced this when I bought my current car. My father-in-law at the time thought he knew everything because he had been driving for the last 32 years. According to him, I didn't have to do any research at all, just buy what he told me to buy. I ended up with a car he hated :p, not because he hated it (that was just a solid bonus) but because the pros outweighed the cons after a lot of research. Silly, silly man.
 
Caporegime
Joined
13 May 2003
Posts
33,962
Location
Warwickshire
Yeah, I experienced this when I bought my current car. My father-in-law at the time thought he knew everything because he had been driving for the last 32 years. According to him, I didn't have to do any research at all, just buy what he told me to buy. I ended up with a car he hated :p, not because he hated it (that was just a solid bonus) but because the pros outweighed the cons after a lot of research. Silly, silly man.
Sounds like you have an awful lot of admiration and respect for your father in law :p.

It must be fun navigating that with your other half :D.
 
Soldato
Joined
6 Aug 2009
Posts
7,071
Sounds like you have an awful lot of admiration and respect for your father in law :p.

It must be fun navigating that with your other half :D.

Sounds like it's his ex father-in-law, might not be an issue now ;) Glad I've got decent in-laws, although they live about 8,000 miles away so we don't see them that often. Always the way: if they were a pain, they'd be living next door :p
 
Soldato
Joined
10 Oct 2012
Posts
4,421
Location
Denmark
Sounds like you have an awful lot of admiration and respect for your father in law :p.

It must be fun navigating that with your other half :D.

It's not her biological father and she hates his guts so it's rather easy :p.

Sounds like it's his ex father-in-law, might not be an issue now ;) Glad I've got decent in-laws although they live about 8,000 miles away so we don't see them that often. Always the way, if they were a pain they'd be living nextdoor :p
Bingo. Funny how that almost always seems to be the way with in-laws.
 
Soldato
Joined
14 Aug 2009
Posts
2,780
Yeah.... even when AIBs made them good, Nvidia still sold more.


Damage was already done, more so as many were going full throttle about power consumption and heat, making it feel like the AMD card was something like a 1 kW card. To be honest, there isn't much difference, if any, between my current RTX 2080 and the R9 290 I had previously (subjectively measured, as in how much heat I feel it puts out, like any other average Joe would experience it). Even the old Fermi GTX 470 got really hot only in stupid torture tests. That's the reason why AMD has to be consistent in what they're doing: small, carefully placed, always-forward steps. Going all over the place (good product - bad cooling/presentation *** decent product - overhype *** good product - price too high) will take them nowhere. Their success with Ryzen came simply because Intel offered so little over the years and AMD could offer more performance in certain areas where Intel lacked. Conversely, in the fight against Nvidia, they would need to have CrossFire working by default with every game. And that's not happening. Nor are they offering some extra technology, as Nvidia does through RTX/DLSS. :)

Why do you think it's not possible?

Drivers. According to them, the last part of GCN was taken out with RDNA 2. Let's not forget drivers weren't the best with the HD 7xxx series (the first GCN cards), with gains of 15-20% in some games coming about... a year later? By the time they fixed the drivers and got the performance where it should be, it would be too late - damage done, people would have moved on to the next ones.
 
Soldato
Joined
19 Dec 2010
Posts
12,031
@melmac
So you're suggesting that AMD actively chose to have 2 cards exist as their sole offering for an entire year? All that R&D money poured into RDNA 1 just for 2 measly products, and then chop it down to produce almost meaningless cards a year later? Ludicrous. I'm sorry, that's more illogical than suggesting RDNA 1 was planned for a full stack launch which was postponed/cancelled because of significant engineering issues.

And if you're so insistent on saying "there was no big card on RDNA 1 planned" then show me your proof, as you're so keen on me presenting support for something I have always claimed was a theory.

Really man?

Ok, where to start? RDNA 1 isn't just 2 cards. I even listed 4 of them in my previous post. There are 4 mobile parts and 7 desktop parts, albeit one of those is a limited edition, so let's say 6 desktop parts. Why is it wasted R&D money? RDNA 2 isn't a completely new architecture, it's just an evolution of RDNA 1 on a more refined 7nm process.

The 5700 cards won't be meaningless when the RDNA 2 cards come along, just like Polaris cards weren't meaningless when Vega came along. My guess is that the first-generation Navi cards will drop down one tier, the 5700 cards dropping to where the 5600 cards are now, etc. The new RDNA 2 cards will cover all the tiers above that.

Why is it ludicrous? AMD did the same with Polaris: released the Polaris cards first, then over a year later released the higher-end cards with Vega.

You are basing your whole theory on one thing: that AMD did a respin of Navi back in September 2018. You don't understand what a respin is. A respin means that the products have already been taped out. If there were significant engineering issues, Navi cards would never have reached the tape-out stage. A respin doesn't fix serious problems.

A theory has to have some basis in reality. There is no evidence of AMD cancelling one card never mind cancelling several. That's why the burden of proof is on you, it's your theory.

My theory is that AMD planned it this way. 7nm is expensive, so rather than waste a ton of money on developing high-end cards with little return, they focused on where the money is for their first-generation RDNA cards.
 
Caporegime
Joined
17 Mar 2012
Posts
47,635
Location
ARC-L1, Stanton System
You want AMD to produce a GPU that's 80% faster than the 2080Ti? You really think that's possible?

Also, your die size calculations are way off. The 2080Ti is 754mm2, not 800mm2, for a start. The 2070 Super is 545mm2. Going from a 12nm to a 7nm process means roughly a 40% reduction in the amount of space needed for each transistor. If you work it out, that means the 2080Ti would actually be only 454mm2 on 7nm. The 2070 Super would be 327mm2.

But let's go a little further. A guy called Fritzchens Fritz was able to work out the space used by RTX hardware on the die. It goes something like this:

2080Ti - 754mm2 with RTX, 684 without.
2070 Super - 545mm2 with RTX, 498 without.

That would mean the 2080Ti on 7nm without RTX cores would be only 410mm2.
The 2070 Super would be 299mm2.

So, in raster performance, we have a GPU from AMD on 251mm2 not quite as fast as a 299mm2 GPU from Nvidia.

They both use roughly the same amount of power currently, so if both were on the same 7nm process, the Nvidia GPU would be using less power.

Now factor in that the RDNA2 GPU in the Xbox series X is around 300mm2 and supposedly a little faster than the 2080. You can see that it's going to need a big jump in performance for AMD to be really competitive at the high end.

I would like to see AMD doing well with RDNA2 but I am not going to get hyped up just yet.

No... that's not what I said. I said Big Navi might be 80% faster than the 5700XT, putting it about 40% ahead of the 2080Ti.

The 2080Ti is 31 x 25, that's 775mm^2. 775 x 0.7 = 542, which on 7nm is about 8% larger than Big Navi. Do that the other way round to confirm: 542 + 43% = 775.06.

And removing RTX from the equation is asinine, as if Nvidia are going to do that...
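The area arithmetic above can be checked with a quick script. The 31 x 25 mm measurement and the flat 0.7 scale factor are the poster's own assumptions (Nvidia's published TU102 die size is 754 mm², and the 0.7 factor is disputed later in the thread):

```python
# Check of the die-shrink arithmetic above. The 31 x 25 mm measurement
# and the 0.7 area factor for 12nm -> 7nm are the poster's assumptions;
# Nvidia's published TU102 (2080 Ti) die size is 754 mm^2.

area_12nm = 31.0 * 25.0                 # 775 mm^2
SHRINK_FACTOR = 0.7                     # assumed area scaling to 7nm

area_7nm = area_12nm * SHRINK_FACTOR
print(f"2080 Ti-class die on 7nm: ~{area_7nm:.1f} mm^2")  # ~542.5

# Reverse check, as in the post: 542 grown by 43% recovers ~775
reverse = 542.0 * 1.43
print(f"Reverse check: {reverse:.2f} mm^2")  # 775.06
```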
 
Soldato
Joined
19 Dec 2010
Posts
12,031
Why do you think it's not possible?

We can debate until we're blue in the face whether RDNA 1 was ever going to exist beyond 40 CUs, but so much rumour and information about RDNA 2 points to an 80 CU model. The 2080 Ti is, what, 30% stronger than a 5700 XT with only 40 CUs? Double the CUs with RDNA 2's improvements and maybe slap some HBM on it and the 2080 Ti would get crushed, possibly even to the tune of 80%. So yes, it is possible, and if Ampere turns out to be as good as leaks suggest, AMD are going to need this level of jump to hang at the very top end.

Because 80% faster than the 2080Ti is a massive jump in performance. This kind of jump in performance has only been seen a few times in the history of GPUs and only once without a die shrink. The two times I can remember in the last decade or so are Maxwell to Pascal and 7900GTX to 8800GTX.

Maxwell to Pascal was such a large jump in performance because it was a double die shrink. Remember both AMD and Nvidia had to release two generations of cards on 28nm because 20nm was scrapped. So Maxwell cards were 28nm but Pascal were 16nm. That meant an almost 80% performance improvement.

Then there was the 7900GTX to the 8800GTX. No die shrink here, but a massive increase in die size and the number of transistors. I think the 7900GTX was something like 200mm2 with 280 million transistors, while the 8800GTX was around 484mm2 with 681 million transistors.

I am happy to be corrected on this, maybe you can point out other performance jumps that big without a die shrink.

Just want to say something here. Do I think that Big Navi or whatever can be competitive with the 3080Ti? Yes, I really do. But I don't think it will be 80% faster than the 2080Ti, because I don't think the 3080Ti will be 80% faster than the 2080Ti. If the 3080Ti is some absolute monster that is 80% faster than the 2080Ti, then I don't think Big Navi will be competitive at all.

Second, I also don't think that AMD will be competitive without Big Navi being, well, big. Big means expensive. Lisa Su has stated this several times: they aren't going to be the budget brand anymore. If Big Navi is competitive with the 3080Ti, it's going to cost you.
 
Caporegime
Joined
17 Mar 2012
Posts
47,635
Location
ARC-L1, Stanton System
Because 80% faster than the 2080Ti is a massive jump in performance. This kind of jump in performance has only been seen a few times in the history of GPUs and only once without a die shrink. The two times I can remember in the last decade or so are Maxwell to Pascal and 7900GTX to 8800GTX.

Maxwell to Pascal was such a large jump in performance because it was a double die shrink. Remember both AMD and Nvidia had to release two generations of cards on 28nm because 20nm was scrapped. So Maxwell cards were 28nm but Pascal were 16nm. That meant an almost 80% performance improvement.

Then there was the 7900GTX to the 8800GTX. No die shrink here, but a massive increase in die size and the number of transistors. I think the 7900GTX was something like 200mm2 with 280 million transistors, while the 8800GTX was around 484mm2 with 681 million transistors.

I am happy to be corrected on this, maybe you can point out other performance jumps that big without a die shrink.

Just want to say something here. Do I think that Big Navi or whatever can be competitive with the 3080Ti? Yes, I really do. But I don't think it will be 80% faster than the 2080Ti, because I don't think the 3080Ti will be 80% faster than the 2080Ti. If the 3080Ti is some absolute monster that is 80% faster than the 2080Ti, then I don't think Big Navi will be competitive at all.

Second, I also don't think that AMD will be competitive without Big Navi being, well, big. Big means expensive. Lisa Su has stated this several times: they aren't going to be the budget brand anymore. If Big Navi is competitive with the 3080Ti, it's going to cost you.


Where the #### are you getting 80% from? On 7nm the 2080Ti is still way over 500mm^2. To get 80% more performance out of it you would have to double the shader count, at least. Nvidia are known for making super-large GPUs, but good grief man, how big do you think this thing is going to be? Not even Nvidia are going to make GPUs so huge they can only get 15 out of a 300mm, $12,000 wafer...

Be realistic.
 
Soldato
Joined
26 Sep 2010
Posts
7,157
Location
Stoke-on-Trent
A theory has to have some basis in reality. There is no evidence of AMD cancelling one card never mind cancelling several. That's why the burden of proof is on you, it's your theory.
So where are the 56 and 64 CU models? Why was Vega allowed to (potentially) undermine 5700 series sales for so long? Why was Radeon 7 EOLed without a replacement, completely removing AMD's presence at the top end? Why did Navi 14 take so long to show up? What about the full SKU list that was leaked that just never materialised? Why the needless name change to 5700?

You seem to not understand the concept of a theory. If I had empirical evidence to support my theory, it wouldn't be a damn theory, now would it? So, as is the nature of discussing rumour, I present arguments to support a theory which I've never claimed to be fact.

My theory, is that AMD planned it this way. 7nm is expensive. So rather than waste a ton of money on developing high end cards with little return they focused on where the money is for their first generation RDNA cards.
So cite your sources and provide evidence to support your theory. You can't harp on at me for baseless speculation yet do exactly the same thing.

How would it be a waste of money developing a high-end card? AMD were perfectly content to lose money repurposing Instinct MI50 dies for Radeon 7 so they could claim the victory of the first 7nm gaming GPU and offer a competitor to Nvidia's top end for the first time in many years. A PR stunt? Yes. Boost mindshare? Of course. But suddenly there was no intention for Navi to punch at the high end if it could?
 
Soldato
Joined
19 Dec 2010
Posts
12,031
No... that's not what I said. I said Big Navi might be 80% faster than the 5700XT, putting it about 40% ahead of the 2080Ti.

The 2080Ti is 31 x 25, that's 775mm^2. 775 x 0.7 = 542, which on 7nm is about 8% larger than Big Navi. Do that the other way round to confirm: 542 + 43% = 775.06.

And removing RTX from the equation is asinine, as if Nvidia are going to do that...

Apologies, I misread your post. Big Navi being 80% faster than the 5700XT. Hmmm, yes, it's an improved architecture but no die shrink, just a process improvement, and a process that's geared more towards power saving than performance. I guess it's doable, but it would take a massive chip from AMD, over 500mm2.

Again, your calculations are not right. The 2080Ti is 754mm2 with 18.6 billion transistors. Your 0.7 is wrong too, because you skipped a node in your calculations, 10nm. So it would be 0.7 to 10nm and then 0.7 to 7nm. I think that's where you are getting your 0.7 from?

We don't know what size big Navi is, only rumours.

I wasn't suggesting that Nvidia are going to remove RTX from their chips; I was just comparing like with like for pure raster performance. Whatever size chip Nvidia needs to be faster than Big Navi, it won't need to be 1000mm2+ at all.
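The two-step scaling argument above can be made concrete with a quick script; the 0.7-per-node area factor is an assumption carried over from the thread, not a foundry-published figure:

```python
# Illustration of the compounded node-shrink argument above. The 0.7
# area factor per full node is a thread assumption, not a foundry figure.

PER_NODE_FACTOR = 0.7
TU102_AREA_MM2 = 754.0   # Nvidia's published 2080 Ti die size

one_step = TU102_AREA_MM2 * PER_NODE_FACTOR    # 12nm -> 10nm
two_step = one_step * PER_NODE_FACTOR          # 10nm -> 7nm

print(f"One shrink:  ~{one_step:.0f} mm^2")    # ~528
print(f"Two shrinks: ~{two_step:.0f} mm^2")    # ~369
```

Compounding two shrinks gives roughly half the area, which is why the single 0.7 factor used earlier in the thread overestimates the 7nm die size.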
 