process shrink after a year or so, I guess. A bit like the PS4 going from 28nm to 16nm in the PS4 Slim and Pro. What I find odd are rumors that those APUs in consoles are getting revisions. Hmm, I wonder what that means? Halo parts?
Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.
Why do you think it's not possible?

You want AMD to produce a GPU that's 80% faster than the 2080Ti? You really think that's possible?
Yep, they have to learn (if they still have the ability to learn) for themselves. This is the status quo, and you only have to view these forums to see it's not just the masses that think this way. Brand loyalty is a thing; tarnish that stuck years ago still gets repeated by the sheeple as a tool of justification.
You can see it in parents/friends/family with cars: if someone has a bad experience with a model, it's stigmatised forever more by that fable. There is no point in thinking that this demographic will be swayed - you just have to let them have a bad experience with the brand they are championing and it will reset.
Yeah, I get that. However, this appears to be sooner rather than later - as if they stopped production at 7nm. I'm looking for more info though.
It has been stated that RDNA2 is around 50% better performance per watt than RDNA1, so a 5700XT replacement would be around the 140W region even on the same process.
If that is the case then a 300W part could potentially double the performance even with nothing but a linear increase in shaders. Add faster memory and potentially a better process node and I don't see why a large jump over a 2080Ti wouldn't be possible. Not going to assume and hype it up because loads of things could go wrong but it's conceivable.
Unfortunately, performance does not scale linearly with clocks or shaders. You can't just assume a 300W GPU is twice as fast as a 150W GPU.
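For what it's worth, here's a rough Python sketch of both sides of that exchange. The board power, the 50% perf/W figure, and especially the sub-linear exponent are assumptions (the exponent is simply made up for illustration), not AMD specs.

```python
# Back-of-the-envelope numbers for the perf/W argument above.
# All inputs are rumours or assumptions, not official figures.

RX_5700XT_POWER_W = 225    # assumed 5700 XT board power
RDNA2_PERF_PER_WATT = 1.5  # the claimed "+50% perf/W" over RDNA1

# Power needed for 5700 XT-level performance at RDNA2 efficiency:
same_perf_power = RX_5700XT_POWER_W / RDNA2_PERF_PER_WATT
print(f"5700 XT-class RDNA2 part: ~{same_perf_power:.0f} W")  # ~150 W

# Naive linear assumption: 2x the power budget => 2x the shaders => 2x perf.
linear = 300 / same_perf_power
print(f"Linear estimate at 300 W: {linear:.1f}x a 5700 XT")  # 2.0x

# The objection: scaling is sub-linear. Model it crudely with an
# exponent below 1.0 (the 0.8 here is invented purely for illustration).
sublinear = (300 / same_perf_power) ** 0.8
print(f"Sub-linear estimate at 300 W: {sublinear:.2f}x a 5700 XT")  # ~1.74x
```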
This is the status quo, and you only have to view these forums to see it's not just the masses that think this way. Brand loyalty is a thing; tarnish that stuck years ago still gets repeated by the sheeple as a tool of justification.
You can see it in parents/friends/family with cars: if someone has a bad experience with a model, it's stigmatised forever more by that fable. There is no point in thinking that this demographic will be swayed - you just have to let them have a bad experience with the brand they are championing and it will reset.
Yeah, I experienced this when I bought my current car. My father-in-law at the time thought he knew everything because he had been driving for the last 32 years. According to him, I didn't have to do any research at all, just buy what he would tell me to buy. I ended up with a car he hated, not because he hated it (that was just a solid bonus) but because the pros outweighed the cons after a lot of research. Silly, silly man.
Sounds like you have an awful lot of admiration and respect for your father-in-law.
It must be fun navigating that with your other half.
Bingo. Funny how that almost always seems to be the way with in-laws.
Sounds like it's his ex-father-in-law, so it might not be an issue now. Glad I've got decent in-laws, although they live about 8,000 miles away so we don't see them that often. Always the way - if they were a pain, they'd be living next door.
Why do you think it's not possible?

Fair. Drivers.
@melmac
So you're suggesting that AMD actively chose to have two cards exist as their sole offering for an entire year? All that R&D money poured into RDNA 1 just for two measly products, and then chopped down to produce almost meaningless cards a year later? Ludicrous. I'm sorry, that's more illogical than suggesting RDNA 1 was planned for a full-stack launch which was postponed/cancelled because of significant engineering issues.
And if you're so insistent on saying "there was no big card on RDNA 1 planned" then show me your proof, as you're so keen on me presenting support for something I have always claimed was a theory.
You want AMD to produce a GPU that's 80% faster than the 2080Ti? You really think that's possible?
Also, your die size calculations are way off. The 2080Ti is 754mm2, not 800mm2, for a start. The 2070 Super is 545mm2. Going from a 12nm to a 7nm process means roughly a 40% reduction in the amount of space needed for each transistor. If you work it out, that means the 2080Ti would actually only be around 452mm2 on 7nm, and the 2070 Super 327mm2.
But let's go a little further. A guy called Fritzchens Fritz was able to work out the die space used by the RTX hardware. It goes something like this:
2080Ti - 754mm2 with RTX, 684mm2 without.
2070 Super - 545mm2 with RTX, 498mm2 without.
That would mean the 2080Ti on 7nm without RTX cores would be only 410mm2.
The 2070 Super would be 299mm2.
So, in raster performance, we have a 251mm2 GPU from AMD that's not quite as fast as a 299mm2 GPU from Nvidia.
They both use roughly the same amount of power currently, so if both were on the same 7nm process, the Nvidia GPU would be using less power.
Now factor in that the RDNA2 GPU in the Xbox Series X is around 300mm2 and supposedly a little faster than the 2080, and you can see that it's going to take a big jump in performance for AMD to be really competitive at the high end.
I would like to see AMD doing well with RDNA2 but I am not going to get hyped up just yet.
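For anyone who wants to check the die-shrink arithmetic in the post above, here's a small Python sketch. The 0.6 scaling factor (the ~40% area reduction from 12nm to 7nm) and the with/without-RTX areas are taken from the post itself, so the outputs are back-of-the-envelope estimates, not official die sizes.

```python
# Scale 12nm Turing die areas to a hypothetical 7nm process,
# using the ~40% area reduction assumed in the post above.

SHRINK_12NM_TO_7NM = 0.6  # assumed factor, i.e. ~40% smaller

dies = {
    # name: (area on 12nm in mm^2, area without RTX hardware in mm^2)
    "2080 Ti":    (754, 684),
    "2070 Super": (545, 498),
}

for name, (full_area, no_rtx) in dies.items():
    print(f"{name}: ~{full_area * SHRINK_12NM_TO_7NM:.0f} mm^2 on 7nm, "
          f"~{no_rtx * SHRINK_12NM_TO_7NM:.0f} mm^2 without RTX")

# 2080 Ti:    ~452 mm^2 on 7nm, ~410 mm^2 without RTX
# 2070 Super: ~327 mm^2 on 7nm, ~299 mm^2 without RTX
# For comparison, the 5700 XT is 251 mm^2 on 7nm.
```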
Why do you think it's not possible?
We can debate until we're blue in the face whether RDNA 1 was ever going to exist beyond 40 CUs, but so much rumour and information about RDNA 2 points to an 80 CU model. The 2080 Ti is, what, 30% stronger than a 5700 XT with only 40 CUs? Double the CUs with RDNA 2's improvements and maybe slap some HBM on it, and the 2080 Ti would get crushed, possibly even to the tune of 80%. So yes, it is possible, and if Ampere turns out to be as good as leaks suggest, AMD are going to need this level of jump to hang at the very top end.
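As a rough sanity check on that claim, here's a minimal Python sketch using only the figures from the post (a rumoured 80 CU part and a 2080 Ti that's ~30% ahead of the 5700 XT). Perfect CU scaling alone lands well short of 80%, so the bigger number relies on clock/IPC/memory gains stacked on top.

```python
# How far does doubling CUs alone get an 80 CU Big Navi past a 2080 Ti?
# Inputs are thread rumours/estimates, not confirmed specs.

CU_5700XT = 40
CU_BIG_NAVI = 80          # rumoured
TI_VS_5700XT = 1.30       # "2080 Ti is ~30% stronger than a 5700 XT"

cu_scaling = CU_BIG_NAVI / CU_5700XT           # 2.0x a 5700 XT
vs_ti = cu_scaling / TI_VS_5700XT              # ~1.54x a 2080 Ti
print(f"Perfect CU scaling alone: ~{(vs_ti - 1) * 100:.0f}% over 2080 Ti")

# To reach +80% over the 2080 Ti, the target is 1.8 * 1.3 = 2.34x a
# 5700 XT, so clocks/IPC/memory would need to add roughly another 17%.
extra = 1.80 * TI_VS_5700XT / cu_scaling
print(f"Extra gain needed beyond CU count: ~{(extra - 1) * 100:.0f}%")
```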
Because 80% faster than the 2080Ti is a massive jump in performance. This kind of jump in performance has only been seen a few times in the history of GPUs and only once without a die shrink. The two times I can remember in the last decade or so are Maxwell to Pascal and 7900GTX to 8800GTX.
Maxwell to Pascal was such a large jump in performance because it was effectively a double die shrink. Remember, both AMD and Nvidia had to release two generations of cards on 28nm because 20nm was scrapped. So Maxwell cards were 28nm but Pascal was 16nm. That meant an almost 80% performance improvement.
Then there was the 7900GTX to the 8800GTX. No die shrink here, but a massive increase in die size and the number of transistors. I think the 7900GTX was something like 200mm2 with 280 million transistors, while the 8800GTX was around 484mm2 with 681 million transistors.
I am happy to be corrected on this, maybe you can point out other performance jumps that big without a die shrink.
Just want to say something here. Do I think that Big Navi or whatever can be competitive with the 3080Ti? Yes, I really do. But I don't think it will be 80% faster than the 2080Ti, because I don't think the 3080Ti will be 80% faster than the 2080Ti. If the 3080Ti is some absolute monster that is 80% faster than the 2080Ti, then I don't think Big Navi will be competitive at all.
Second, I also don't think that AMD will be competitive without Big Navi being, well, big. Big means expensive. Lisa Su has stated this several times: they aren't going to be the budget brand anymore. If Big Navi is competitive with the 3080Ti, it's going to cost you.
So where are the 56 and 64 CU models? Why was Vega allowed to (potentially) undermine 5700 series sales for so long? Why was the Radeon VII EOLed without a replacement, completely removing AMD's presence at the top end? Why did Navi 14 take so long to show up? What about the full SKU list that was leaked but never materialised? Why the needless name change to 5700?

A theory has to have some basis in reality. There is no evidence of AMD cancelling one card, never mind cancelling several. That's why the burden of proof is on you - it's your theory.
So cite your sources and provide evidence to support your theory. You can't harp on at me for baseless speculation yet do exactly the same thing.

My theory is that AMD planned it this way. 7nm is expensive, so rather than waste a ton of money developing high-end cards with little return, they focused on where the money is for their first generation of RDNA cards.
No... that's not what I said. I said Big Navi might be 80% faster than the 5700XT, putting it about 40% ahead of the 2080Ti.
The 2080Ti is 31 x 25mm, which is 775mm^2. 775 x 0.7 = 542.5, so on 7nm it would be about 8% larger than Big Navi. Do it the other way round to confirm: 542 + 43% = 775.06.
And removing RTX from the equation is asinine, as if Nvidia are going to do that...
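And a quick Python check of the arithmetic two posts up. The 31 x 25mm dimensions and the 0.7 shrink factor are the poster's figures; the ~505mm2 Big Navi die size is a rumoured number assumed here just to put a value on the "8% larger" comparison (it comes out at roughly 7-8% depending on the exact Big Navi figure).

```python
# Verify the 2080Ti die-shrink numbers quoted above.

die_w, die_h = 31, 25            # mm, claimed 2080Ti dimensions
area_12nm = die_w * die_h        # 775 mm^2
area_7nm = area_12nm * 0.7       # 542.5 mm^2, using the poster's 0.7 factor

print(f"2080Ti: {area_12nm} mm^2 on 12nm -> ~{area_7nm:.0f} mm^2 on 7nm")
print(f"Reverse check: 542 * 1.43 = {542 * 1.43:.2f}")  # 775.06

# Against a rumoured ~505 mm^2 Big Navi (assumption, not confirmed):
BIG_NAVI_MM2 = 505
print(f"~{(area_7nm / BIG_NAVI_MM2 - 1) * 100:.0f}% larger than Big Navi")
```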