
RDNA 3 rumours Q3/4 2022

Man of Honour
Joined
5 Dec 2003
Posts
21,004
Location
Just to the left of my PC
They weren't able to overtake Nvidia or Intel with a 1.5 and 2 node advantage respectively, and will then miraculously overtake at a disadvantage?

Not miraculously, no. By differences in design. The end section of that video is the part where he explains his reasoning, but I'll summarise here for anyone who doesn't want to watch it:

There are a few factors:

i) Different types of components on a GPU scale quite differently with changes in node, in terms of the die area they require. I/O hardly scales at all now, memory (i.e. cache) scales moderately but certainly not well, and the processing side of things continues to scale well.
ii) There's a practical limit to the maximum chip size on a given node.
iii) The rate of introduction of new nodes has slowed and will probably continue to do so.

So the largest potential gain from a chiplet design isn't in production costs. It's the amount of processing hardware that can be fitted on the largest chip that can be manufactured on any given node. There is a reduction in production costs, but it's not massive in comparison with retail prices. At the top end, it's barely relevant. What's relevant is that AMD's chiplet design makes it possible to make the main chip almost all processing hardware by moving much of the rest onto other chips. It would be possible for AMD to bring out a GPU with double the amount of processing hardware that would be possible with a monolithic design, i.e. double what Nvidia could do. They haven't. But they could. They could make a 7999XXXTXXX card with twice as much processing hardware as a 4090. More if they went closer to the effective maximum chip size on the current node. What they've done instead is put about as much processing hardware on a chip about half the size.
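To put rough numbers on that, here's a back-of-envelope sketch in Python. Every figure in it (reticle limit, area split, scaling factors) is a made-up illustrative assumption, not real die data:

```python
# Back-of-envelope die area sketch. Every number here is a made-up
# illustrative assumption, not a real measurement of any GPU.

RETICLE_MM2 = 600  # assumed practical maximum die size on the node

# assumed area split of a reticle-sized monolithic GPU die
frac = {"logic": 0.55, "cache": 0.30, "io": 0.15}

# assumed linear area shrink per component on a full node jump
# (logic shrinks well, SRAM poorly, I/O barely at all)
shrink = {"logic": 0.5, "cache": 0.8, "io": 0.95}

# 1) Shrink the same design to a new node: the parts that scale badly
#    eat a growing share of the die.
shrunk = {k: RETICLE_MM2 * frac[k] * shrink[k] for k in frac}
total = sum(shrunk.values())
for part, area in shrunk.items():
    print(f"{part:5s}: {area:5.0f} mm^2 ({area / total:.0%} of the shrunk die)")

# 2) Chiplet route: move cache and I/O onto separate dies, then refill
#    the main die with processing hardware up to the same reticle limit.
mono_logic = RETICLE_MM2 * frac["logic"]  # logic budget if monolithic
chiplet_logic = RETICLE_MM2 * 0.95        # assume ~5% lost to interconnect PHYs
print(f"monolithic logic budget: {mono_logic:.0f} mm^2")
print(f"chiplet logic budget:    {chiplet_logic:.0f} mm^2 "
      f"({chiplet_logic / mono_logic:.1f}x)")
```

With those made-up numbers the chiplet main die ends up with roughly 1.7x the logic budget of a monolithic one, and it's easy to see how more aggressive assumptions get you to "double".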

As I've said before, I expect AMD to bring out only halo cards that are about on a par with the 4090 and 4080 and slightly cheaper. AMD's chiplet design probably saves them ~$50 per card (EDIT: for a card using the top-end GPU in the range). Undercut Nvidia by $50 and make the same vast profit margin on each card. Or undercut them by $100 and make a very slightly lower profit margin. $50 in production costs isn't much to care about on a $2000+ graphics card. I don't think AMD wants to change the current situation in the graphics card market. Their focus is on the datacentre.
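The undercutting arithmetic, again with invented figures just to show the shape of it:

```python
# Toy margin arithmetic for the undercutting point. All figures invented.
nvidia_price, nvidia_cost = 1600, 900  # assumed card price / build cost
amd_cost = nvidia_cost - 50            # assumed ~$50 chiplet saving

print(f"Nvidia margin: ${nvidia_price - nvidia_cost}")
for undercut in (50, 100):
    amd_price = nvidia_price - undercut
    print(f"undercut by ${undercut}: AMD margin ${amd_price - amd_cost}")
```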
 
Last edited:
Caporegime
Joined
8 Sep 2005
Posts
27,425
Location
Utopia

If correct we're in for a treat :cool:
The last part is the most exciting "news" in GPUs for a long time.
So rasterisation performance is looking somewhere between the 4080 and the 4090, with ray tracing performance way better than RDNA2 but still below Nvidia's. If the XTX comes in at a lower price point than the 4090, and the XT lower than the 4080, they can easily make a dent in Nvidia's sales. Add to that lower power requirements and no crappy adapter to fail.
 
Associate
Joined
10 Jan 2013
Posts
235
Location
London
Watching that got me really hyped ... not long to find out now :)
I recall him getting a lot of people hyped for Polaris, aka the RX 480, expecting top tier performance for mid tier pricing.
The 480 was an excellent card at what it was designed for, but since then I take his hype claims with a few pinches of Maldon :D
 
Soldato
Joined
15 Oct 2019
Posts
11,770
Location
Uk
As I've said before, I expect AMD to bring out only halo cards that are about on a par with the 4090 and 4080 and slightly cheaper. AMD's chiplet design probably saves them ~$50 per card (EDIT: for a card using the top-end GPU in the range). Undercut Nvidia by $50 and make the same vast profit margin on each card. Or undercut them by $100 and make a very slightly lower profit margin. $50 in production costs isn't much to care about on a $2000+ graphics card. I don't think AMD wants to change the current situation in the graphics card market. Their focus is on the datacentre.

These need to be considerably cheaper than Nvidia's crazy pricing though, not just a £50-100 discount on what, especially in the case of the 4080, is about £600 overpriced, else AMD will lose further market share.
 
Soldato
Joined
12 May 2014
Posts
5,239
As I've said before, I expect AMD to bring out only halo cards that are about on a par with the 4090 and 4080 and slightly cheaper. AMD's chiplet design probably saves them ~$50 per card (EDIT: for a card using the top-end GPU in the range).
His analysis was specifically about the die itself. With Nvidia needing a beefier cooler to manage the heat output of Ada, AMD saves more per GPU 'breeze block' than what he quoted.
 
Associate
Joined
10 Jan 2022
Posts
1,029
Location
London
Not miraculously, no. By differences in design. The end section of that video is the part where he explains his reasoning, but I'll summarise here for anyone who doesn't want to watch it:

There are a few factors:

i) Different types of components on a GPU scale quite differently with changes in node, in terms of the die area they require. I/O hardly scales at all now, memory (i.e. cache) scales moderately but certainly not well, and the processing side of things continues to scale well.
ii) There's a practical limit to the maximum chip size on a given node.
iii) The rate of introduction of new nodes has slowed and will probably continue to do so.

So the largest potential gain from a chiplet design isn't in production costs. It's the amount of processing hardware that can be fitted on the largest chip that can be manufactured on any given node. There is a reduction in production costs, but it's not massive in comparison with retail prices. At the top end, it's barely relevant. What's relevant is that AMD's chiplet design makes it possible to make the main chip almost all processing hardware by moving much of the rest onto other chips. It would be possible for AMD to bring out a GPU with double the amount of processing hardware that would be possible with a monolithic design, i.e. double what Nvidia could do. They haven't. But they could. They could make a 7999XXXTXXX card with twice as much processing hardware as a 4090. More if they went closer to the effective maximum chip size on the current node. What they've done instead is put about as much processing hardware on a chip about half the size.

As I've said before, I expect AMD to bring out only halo cards that are about on a par with the 4090 and 4080 and slightly cheaper. AMD's chiplet design probably saves them ~$50 per card (EDIT: for a card using the top-end GPU in the range). Undercut Nvidia by $50 and make the same vast profit margin on each card. Or undercut them by $100 and make a very slightly lower profit margin. $50 in production costs isn't much to care about on a $2000+ graphics card. I don't think AMD wants to change the current situation in the graphics card market. Their focus is on the datacentre.
Watched the whole video. It's just loads of wishful thinking. Nvidia is not Intel - they don't stagnate. There is a reason why they've had a clear lead for 10 years. He is sure that AMD will be faster next gen - based on what? He says there is no evidence Nvidia has working MCM, while they were experimenting with MCM BEFORE AMD.
If it's the route to go next gen, we will see MCM from Nvidia too. Throughout the years they've gathered most of the engineering talent - simply by paying more than competitors. There is no reason to be SURE (like AdoredTV) that they will lose.
Chiplet design is far from being an answer to all problems. AMD currently has a 2 node lead over monolithic Intel and is still barely faster in multithread and much slower in single thread.
See what happens when Intel moves to a superior node than theirs.
 
Last edited:
Soldato
Joined
18 Oct 2002
Posts
6,426
Location
Newcastle, England
It's good to see 4090 cards not selling out instantly now. Lots of retailers, including OcUK, have them sitting in stock. So anything close to 2 grand is not really attractive to the casual PC gamer. Last time out I paid 900 quid for a 6800XT, and even then it was tough to take. I was thinking of going Founders 4090, such is the madness, but I'm glad I haven't as I'm waiting on RDNA3. Hopefully more attractive prices.
 
Associate
Joined
19 Sep 2022
Posts
512
Location
Pyongyang
Not miraculously, no. By differences in design. The end section of that video is the part where he explains his reasoning, but I'll summarise here for anyone who doesn't want to watch it:

There are a few factors:

i) Different types of components on a GPU scale quite differently with changes in node, in terms of the die area they require. I/O hardly scales at all now, memory (i.e. cache) scales moderately but certainly not well, and the processing side of things continues to scale well.
ii) There's a practical limit to the maximum chip size on a given node.
iii) The rate of introduction of new nodes has slowed and will probably continue to do so.

So the largest potential gain from a chiplet design isn't in production costs. It's the amount of processing hardware that can be fitted on the largest chip that can be manufactured on any given node. There is a reduction in production costs, but it's not massive in comparison with retail prices. At the top end, it's barely relevant. What's relevant is that AMD's chiplet design makes it possible to make the main chip almost all processing hardware by moving much of the rest onto other chips. It would be possible for AMD to bring out a GPU with double the amount of processing hardware that would be possible with a monolithic design, i.e. double what Nvidia could do. They haven't. But they could. They could make a 7999XXXTXXX card with twice as much processing hardware as a 4090. More if they went closer to the effective maximum chip size on the current node. What they've done instead is put about as much processing hardware on a chip about half the size.

As I've said before, I expect AMD to bring out only halo cards that are about on a par with the 4090 and 4080 and slightly cheaper. AMD's chiplet design probably saves them ~$50 per card (EDIT: for a card using the top-end GPU in the range). Undercut Nvidia by $50 and make the same vast profit margin on each card. Or undercut them by $100 and make a very slightly lower profit margin. $50 in production costs isn't much to care about on a $2000+ graphics card. I don't think AMD wants to change the current situation in the graphics card market. Their focus is on the datacentre.
There seems to be a power constraint with AMD's design. They could have gone 2x, but wouldn't have been able to do that within a reasonable power limit.

My guess is AMD's superiority over Intel cannot be replicated in GPUs at full feature-set parity. If Intel did their processors on TSMC N5, we would have had a different story. Intel, after getting their fabs in order, would be an unstoppable force.

Also, it would be rational to assume that signals resolved inside a chip will be more energy efficient compared to an MCM design.
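For a rough sense of the scale of that, here's a toy calculation; the bandwidth and energy-per-bit numbers are ballpark assumptions, not vendor figures:

```python
# Toy calculation of interconnect power. Bandwidth and pJ/bit values
# are ballpark assumptions, not vendor figures.

def link_power_w(bandwidth_bytes_per_s: float, pj_per_bit: float) -> float:
    """Watts spent purely on moving data at a given energy per bit."""
    return bandwidth_bytes_per_s * 8 * pj_per_bit * 1e-12

bandwidth = 5e12  # assume 5 TB/s between the compute die and cache dies
for name, pj in (("on-die wires", 0.1), ("chiplet fanout link", 0.4)):
    print(f"{name:19s}: {link_power_w(bandwidth, pj):4.0f} W at 5 TB/s")
```

Off-die links cost real watts, but at plausible numbers it's tens of watts, not hundreds - an overhead, not a dealbreaker.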
 
Last edited:
Soldato
OP
Joined
6 Feb 2019
Posts
17,676
The latest MLID video is out with new info.

Confirms that RDNA3 is launching as an efficiency- and cost-focused architecture, not max performance. Pricing will definitely be lower than Nvidia's.

* 7700XT is not faster than the 6950XT

* Reference 7900XTX is 350W TDP with 2x8-pin power, but AIB models are 450W TDP with 3x8-pin, and many will be using the same 3/4-slot coolers used on the RTX4090 (rough connector maths after this list)

* 7900XTX does not match the RTX4090 in rasterisation or ray tracing. However, with the extra 100W TDP on AIB models they might get some really nice performance gains from overclocking, so it's up in the air how fast it can get.

* 7900XT and XTX will be in good supply; AMD is confident in its decision to front-load supply to premium models after seeing how well the RTX4090 has sold.
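The connector maths, for what it's worth. The 150W per 8-pin and 75W from the x16 slot are the standard PCIe ratings; the TDPs are the leaked figures:

```python
# Connector budget check. 150 W per 8-pin and 75 W from the x16 slot
# are the standard PCIe ratings; the TDPs are the leaked figures.
SLOT_W, EIGHT_PIN_W = 75, 150

for model, pins, tdp in (("reference 7900XTX", 2, 350),
                         ("AIB 7900XTX", 3, 450)):
    budget = SLOT_W + pins * EIGHT_PIN_W
    print(f"{model}: {pins}x8-pin -> {budget} W available, "
          f"{tdp} W TDP, {budget - tdp} W headroom")
```

So both leaked configurations fit within spec, with the 3x8-pin AIB cards keeping the bigger overclocking headroom.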
 
Last edited:
Soldato
Joined
6 Aug 2009
Posts
7,073
There seems to be a power constraint with AMD's design. They could have gone 2x, but wouldn't have been able to do that within a reasonable power limit.

My guess is AMD's superiority over Intel cannot be replicated in GPUs at full feature-set parity. If Intel did their processors on TSMC N5, we would have had a different story. Intel, after getting their fabs in order, would be an unstoppable force.
I'm not seeing that, but I do see a limit on what customers will accept as a reasonable power draw. I think that's the real limit.
 
Associate
Joined
19 Sep 2022
Posts
512
Location
Pyongyang
I'm not seeing that, but I do see a limit on what customers will accept as a reasonable power draw. I think that's the real limit.

There's probably a theoretical or economic limit beyond which consumers would prefer to buy two cards instead of one big chip - if such designs can be realised at all, given how DX12 supports multi-GPU. That might also require some kind of incentive structure for dev adoption.
 
Soldato
Joined
21 Jul 2005
Posts
20,107
Location
Officially least sunny location -Ronskistats
They are being patient; they tested stacking and decided it's only worth it in some cases. With a recession on and apparently expensive metals/parts, they would be wise to focus on cost this gen and provide a solid number of cards. Let's see what they do - it's only a day away now - but one thing they will have witnessed in the past month is Nvidia taking the **** and keeping prices rising, with a joke of a mid-high tier in both performance and price.
 
Soldato
Joined
12 May 2014
Posts
5,239
Watched the whole video.
He is sure that AMD will be faster next gen - based on what?
I don't think you watched the video if you are asking that question.

He says there is no evidence Nvidia has working MCM,
He said there is no evidence of Nvidia having a chiplet strategy in the works for years. Do you have a leak showing this not to be true?

If it's the route to go next gen, we will see MCM from Nvidia too. Throughout the years they've gathered most of the engineering talent - simply by paying more than competitors.
AMD has far more experience than Nvidia in MCM.
 
Soldato
Joined
21 Jul 2005
Posts
20,107
Location
Officially least sunny location -Ronskistats
These need to be considerably cheaper than Nvidia's crazy pricing though, not just a £50-100 discount on what, especially in the case of the 4080, is about £600 overpriced, else AMD will lose further market share.

I am hoping this equates to £300 in reality after the FOMO waves pile in. Then over time we can see price drops like we have on the 6900, which have been much better (currently below £700, which is where we want the 7900 to get to).
 
Caporegime
Joined
18 Oct 2002
Posts
29,992
The latest MLID video is out with new info.

Confirms that RDNA3 is launching as an efficiency- and cost-focused architecture, not max performance. Pricing will definitely be lower than Nvidia's.

* 7700XT is not faster than the 6950XT

* Reference 7900XTX is 350W TDP with 2x8-pin power, but AIB models are 450W TDP with 3x8-pin, and many will be using the same 3/4-slot coolers used on the RTX4090 (rough connector maths after this list)

* 7900XTX does not match the RTX4090 in rasterisation or ray tracing. However, with the extra 100W TDP on AIB models they might get some really nice performance gains from overclocking, so it's up in the air how fast it can get.

* 7900XT and XTX will be in good supply; AMD is confident in its decision to front-load supply to premium models after seeing how well the RTX4090 has sold.
You missed the part where he's not expecting AMD to come in 'cheap' with these cards, reiterating that they're no longer the budget option. Looks like a hundred or two below the Nvidia equivalents whilst being slightly weaker.
 