
RDNA 3 rumours Q3/4 2022

Just seen this post, not sure why I missed it yesterday.

I don't agree with your calculations, mainly because you picked the top CPU and a mid-range GPU. If you were comparing like with like, you would be comparing with maybe the 5600X. Which one would make the most margin then?

But, think about it another way. Intel mainly deal in CPUs; Nvidia mainly deal in GPUs. Intel's margins always hover around the mid-40s. Now, let's go back to Pascal and Kepler for Nvidia: their margins back then were mid-50s. Remember, with Kepler most of the GPUs were under £500.

That would lead me to believe that there are higher margins in GPUs.

And I would be very surprised if it's not the same with AMD: that overall, GPUs have a higher margin than CPUs.

Also just want to comment on your post about AMD getting out of GPUs. LOL, not a hope. Even with the tiny market share they had, GPUs were bringing in money. Now AMD are finally getting their act together and the gaming market is still huge.

But it's not only the gaming market now; GPUs are bringing in more revenue as they become a bigger and bigger part of datacentres.

A CCD on a $20,000 300mm wafer costs $26
A 6700XT on a $20,000 300mm wafer costs $78
A 6800/XT - 6900XT on a $20,000 300mm wafer costs $192
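For anyone who wants to sanity-check those figures, here is a rough sketch of the usual dies-per-wafer maths. The die areas (~80mm² for a Zen 3 CCD, ~335mm² for Navi 22, ~520mm² for Navi 21) are my own assumptions, and it ignores yield and scribe lines, so it won't line up exactly with the numbers above:

```python
import math

WAFER_DIAMETER_MM = 300
WAFER_COST_USD = 20_000  # the $20,000 wafer price used above

def dies_per_wafer(die_area_mm2: float, diameter_mm: float = WAFER_DIAMETER_MM) -> int:
    """Classic approximation: gross dies minus an edge-loss term.
    Ignores scribe lines, reticle limits and defect yield."""
    gross = math.pi * (diameter_mm / 2) ** 2 / die_area_mm2
    edge_loss = math.pi * diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(gross - edge_loss)

# Assumed die areas (mm^2) -- rough figures, not taken from the post above.
candidates = {
    "Zen 3 CCD (~80mm^2)": 80,
    "Navi 22 / 6700 XT (~335mm^2)": 335,
    "Navi 21 / 6800-6900 XT (~520mm^2)": 520,
}

for name, area in candidates.items():
    n = dies_per_wafer(area)
    print(f"{name}: ~{n} dies per wafer, ~${WAFER_COST_USD / n:.0f} per die")
```

With these assumptions the CCD (about $25) and Navi 21 (about $189) land close to the $26 and $192 above; the 6700 XT comes out higher than the $78 quoted, so that figure presumably uses different area or yield assumptions.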

A 5600X only has 1 CCD. The whole package is about 260mm^2, and about 180mm^2 of that is the 12nm I/O die from GloFo, which probably costs AMD less than the ~80mm^2 7nm TSMC CCD, say about $15. Add $10 for other materials and $10 for assembly and it's roughly $60-$70 to get it to a sellable state. At $200 retail, AMD will probably sell that for about $120.
Last I knew, 1GB GDDR5 memory ICs were about $10 each in large bulk; 2GB GDDR6 ICs are probably a good deal more expensive than that, especially today, so let's say $20 each. The 6700XT has 6 of those, which is $120 for the memory ICs alone, so you're already up to $200 before you assemble the PCB. MOSFETs are $2-$3 each, there are a couple of controller chips at $10+ each, and the cooler, fans and shroud are maybe $30. Probably about $300 for a fully assembled 6700XT.
Now you know why both Nvidia and AMD are saying it's almost impossible to make a sub-$200 GPU, and why they would really rather not do it any more; the cost of all those components adds up.
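Rolling those rough component guesses up, just to make the ~$300 arithmetic explicit (every number here is either an estimate from the paragraph above or a guess of mine, not real BOM data; the MOSFET count and the PCB/assembly figure in particular are assumptions):

```python
# Rough 6700 XT cost build-up using the guessed figures above (illustrative only).
bom_usd = {
    "GPU die": 78,                            # per-die cost from the wafer maths
    "GDDR6 (6 x 2GB @ ~$20)": 6 * 20,
    "VRM MOSFETs (assumed 12 @ ~$2.50)": 12 * 2.5,
    "Controller ICs (2 @ ~$10)": 2 * 10,
    "Cooler, fans, shroud": 30,
    "PCB, passives, assembly (guess)": 25,
}

total = sum(bom_usd.values())
for item, cost in bom_usd.items():
    print(f"{item:40s} ${cost:7.2f}")
print(f"{'Estimated build cost':40s} ${total:7.2f}")  # lands around $300
```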

If you're selling GPUs for $600, $700, $800 then sure, you can turn a good profit on them, which is why AMD stick with it. The trouble is Nvidia sell many, many more of those than AMD do.
 

I didn't quote all of your post because there is more to gross margin than just the BOM, which we don't really know either.

I am quoting your last sentence because I knew you would come back with that argument about Nvidia selling at $600 or more. That's why I picked the margins from Kepler, where most of their GPUs apart from the 690 were under $500. Yet they still made margins in the mid-50s.

You haven't answered my main argument either: why does the biggest CPU manufacturer, who you say is and always has been fleecing us, have pretty consistent margins around the mid-40s, while the biggest GPU maker has margins around the mid-50s?

I think for both of them most sales are OEM/prebuilt; it's just that the lowest-value prebuilts don't come with a dedicated GPU. I do agree with you about everyone "neglecting" their CPUs in favour of a more powerful GPU.


The number of CPUs sold to OEMs would completely overshadow any of the margins made by the more enthusiast line of CPUs. CPUs are high-volume, low-margin products; it's about quantity. Discrete GPUs, on the other hand, are not needed by the majority of PC users, so to be viable they have to be sold at higher margins.

If you limited the discussion to enthusiast-only CPUs, Ryzen 5 and over, then I might agree with you both.

And I don't think it's a case of "neglecting" as you put it; most of the time it's a budget thing. If you had a 3600X and £500 to spend for gaming, buying a £500 graphics card would still be a better option than buying a new CPU. But that's a whole other topic :)

Anyway, I would be very surprised if AMD were putting lower margins on their GPUs.
 
@melmac
The economics of it now are much different to 2012.
In the Kepler era, 28nm per-wafer costs were lower than those of current nodes.
Plus Kepler dies were also, on average, much smaller, especially those serving the sub-$500 market you mention.

For comparison, the sub-$500 Kepler desktop segment (effectively the full initial GPU stack; I'll come to Titan later):
GK107 - 118mm²: 640 to 650
GK106 - 221mm²: 650 Ti to 660
GK104 - 294mm²: 660 Ti to 680

If you take the largest die, the 294mm² GK104 (680), and look for equivalents in the 3000-series line-up, you are left with only:

GA106 - 276mm²: 3050 and some 3060 models, nothing else.

That's only 2 out of 9 Ampere models, both of which sit at the lower end of the Ampere stack.
The remaining 7 of the 9 Ampere models have significantly larger dies:

GA104 - 392mm²: 3060 Ti/3070
GA102 - 628mm²: 3080/3090 Ti

For some extra context, if we look at Kepler priced above $500:
GK110 Titan - 561mm²

The Titan was massively subsidised by a supercomputer contract, and was only released to the GPU market a year later than the 640-680 Kepler models.
In contrast, the Ampere GA102 behemoth was released straight out of the gate; by now they knew they could push the die size, sell at an extravagant price point, and that there were gamers who would pay it.
Both the approach and the economics of GPU manufacturing have changed massively since then, and that's not yet factoring in any Covid-related impact.
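To put that die-size contrast in per-wafer terms, here's a quick sketch using the same dies-per-wafer approximation as earlier in the thread. It counts candidate dies only (no yield) and deliberately leaves wafer prices out, since 28nm and Samsung 8nm pricing are very different:

```python
import math

def dies_per_wafer(die_area_mm2, diameter_mm=300):
    """Gross candidate dies per 300mm wafer minus an edge-loss term (no yield)."""
    gross = math.pi * (diameter_mm / 2) ** 2 / die_area_mm2
    edge_loss = math.pi * diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(gross - edge_loss)

dies = {
    "GK107 (118mm^2)": 118,
    "GK106 (221mm^2)": 221,
    "GK104 (294mm^2)": 294,
    "GA106 (276mm^2)": 276,
    "GA104 (392mm^2)": 392,
    "GA102 (628mm^2)": 628,
}

for name, area in dies.items():
    print(f"{name}: ~{dies_per_wafer(area)} candidate dies per 300mm wafer")
```

A GA102 wafer gives you well under half the candidate dies of a GK104 wafer (about 85 vs about 200 with this approximation), before you even factor in wafer price differences and defect rates.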
 

I am only taking their margins from before they released the Titan and 7xx cards, i.e. 2012, because after they released the Titan and 7xx cards their margins went up.

Their margins were higher with Pascal, and their margins have been higher since then.

Actually, I am not sure what you are trying to say. This isn't about comparing Nvidia's margins in 2012 vs their margins now, because their margins have been increasing since 2012. One of the reasons I used Kepler in 2012 is that it's the lowest their margins have been in the last 10 years. It looks like you are trying to say that costs are increasing so margins should be lower. Well, that doesn't seem to be the case.

The discussion here isn't really about Nvidia's margins; it's just wondering whether AMD now make more margin on CPUs or GPUs. But because AMD lump CPUs and GPUs together, it's hard to work out. I am just using the margins of the mainly-CPU maker (Intel) and the mainly-GPU maker (Nvidia) to show that GPUs seem to have higher margins overall; hence my comparison to Kepler above. If you use any of the later years, that gap widens.

And just a note on your rising-costs data above: those rising costs apply to CPUs too.
 

Naturally margins would climb with the 7xx series and Titan re-releases, as they continued flogging the same Kepler dies on a mature 28nm process.
Pascal also kept a predominantly small-die focus, with die sizes of 132mm², 200mm² and 314mm², and was able to serve the sub-$500 market effectively.


It was admittedly an out-of-left-field bullet-point info dump, but to clarify: I am not weighing in on one side or the other of the argument about which product type, CPU or GPU, has better margins.
Rather, I am using Kepler products in comparison with Ampere to highlight a key factor that should be taken into account, which is die size (specific to the times), along with some basic pricing conditions (also specific to the times) that are relevant and shouldn't be ignored, but seem to be glossed over or not entered into consideration. To quote:

But, think about it another way. Intel mainly deal in CPUs; Nvidia mainly deal in GPUs. Intel's margins always hover around the mid-40s. Now, let's go back to Pascal and Kepler for Nvidia: their margins back then were mid-50s. Remember, with Kepler most of the GPUs were under £500.
That would lead me to believe that there are higher margins in GPUs.
I am quoting your last sentence because I knew you would come back with that argument about Nvidia selling at $600 or more. That's why I picked the margins from Kepler, where most of their GPUs apart from the 690 were under $500. Yet they still made margins in the mid-50s.

When you brought up Kepler, you implied that from business margins, combined with an upper-boundary sale price ($500) for the full GPU stack, you can make a general determination about relative cost and subsequent margins in GPU vs CPU products, with Kepler in 2012 acting as some sort of baseline indicator of achievable margin that can be loosely extrapolated to today's marketplace, and across to another competitor.
There is a lot of uncertainty and many variables involved in that, and it ignores one of the main influences: the achievable margin at a given price point is hugely impacted by die size, and is ultimately a reflection of the product strategy deployed at that particular time. You can see the massive contrast between the pricing conditions, as well as the differing physical characteristics, of the Kepler vs Ampere products that Nvidia chose to pursue in each instance. Ultimately, the margins of CPUs/GPUs during Kepler's time, based on Kepler's pricing and Kepler-era product strategy, cannot be used to extrapolate with any meaning the potential or likely margins of CPUs/GPUs today.
 

Yes, people keep telling me about die sizes and that CPUs have bigger margins because they are cheaper to make, but that doesn't seem to show up in the financials.

As I said, I only picked Kepler because it was the lowest margin in 10 years for the biggest GPU company. Yet that margin is still higher than the best margin in the last 10 years for the biggest CPU company. I could have picked Ampere, Pascal or Turing and my point would still remain the same: the GPU manufacturer has bigger margins than the CPU manufacturer.

At the end of the day, CPUs sell in way bigger numbers than discrete GPUs do. Even if the enthusiast CPUs have a higher margin, there are too many CPUs being sold at tiny margins to be offset by the ones sold at higher margins. Discrete GPUs, on the other hand, don't sell in the same kind of numbers as CPUs, so it's harder for the lower-priced GPUs to drag the overall margin down.
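A quick illustration of that volume-mix point; the unit counts, prices and margins below are completely made up, purely to show the mechanics of a blended margin:

```python
# Hypothetical product mix: high-volume, low-margin parts drag the blended
# gross margin toward their level even when halo parts carry fat margins.
products = [
    # (name, units sold, average selling price USD, gross margin)
    ("OEM / low-end CPUs", 9_000_000, 120, 0.35),
    ("Enthusiast CPUs",    1_000_000, 350, 0.55),
]

revenue = sum(units * price for _, units, price, _ in products)
gross_profit = sum(units * price * margin for _, units, price, margin in products)
print(f"Blended gross margin: {gross_profit / revenue:.1%}")  # ~40%
```

Even with a 55% margin on the enthusiast line, the blend only comes out around 40% because the OEM volume dominates the revenue.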

And it can't just be about die size and BOM; surely it's about the market they are aimed at too. Which is why I said that if you just counted the enthusiast CPUs, then, yeah, AMD's CPUs would probably have higher margins than their GPUs. Maybe ;)

EDIT: Anyway, sorry for the off topic posts. It doesn't really matter, was just curious. :)
 
First time hearing of this site. Any more info on who they are?
It's a one-man-band site:

SkyJuice
Technology enthusiast wielding a keen sense of observation for all things in the Semiconductor Industry. Tweet @SkyJuice60
 
A 5950X amounts to about 350mm^2 of die space; it has no fans, no shroud, no cooler, no memory ICs and little in the way of a PCB.

Right now it's £500: https://www.overclockers.co.uk/amd-...hz-socket-am4-processor-retail-cp-3c9-am.html

This RX 6700XT, https://www.overclockers.co.uk/powe...ddr6-pci-express-graphics-card-gx-1a3-pc.html, is 335mm^2 and, with everything the 5950X doesn't have, it's also £500.

AMD can sell the 5950X into the supply chain for £400, who sell it to OCUK for £450, who sell it to you for £500.

When Powercolor are selling that GPU for the same money, after adding all the ancillaries, building the card, and selling it through a supply chain and then OCUK before it gets to you, how much do we think AMD sold Powercolor the chip for? It's not £400 :)
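Working that chain backwards with some assumed cuts; the percentages and the non-GPU build cost below are guesses of mine, just to show the shape of the argument:

```python
# Hypothetical: start from the £500 retail card and peel off each layer to see
# what could be left for the GPU package itself. All the splits are guesses.
retail_price = 500            # £, what you pay OCUK
retailer_cut = 0.10           # assumed OCUK margin
distributor_cut = 0.10        # assumed distribution margin
aib_margin = 0.10             # assumed Powercolor margin on the finished card
card_cost_ex_gpu = 170        # assumed memory, PCB, VRM, cooler, assembly (£)

ocuk_buy_price = retail_price * (1 - retailer_cut)        # ~£450
aib_sell_price = ocuk_buy_price * (1 - distributor_cut)   # ~£405 into the channel
aib_cost_budget = aib_sell_price * (1 - aib_margin)       # what the whole card can cost Powercolor
gpu_package_budget = aib_cost_budget - card_cost_ex_gpu   # what's left for AMD's chip

print(f"Powercolor sells the card into the channel for ~£{aib_sell_price:.0f}")
print(f"Room left for the GPU package itself: ~£{gpu_package_budget:.0f}")
```

On those (guessed) splits there's only around £200 left for the Navi 22 package, versus the roughly £400 suggested above for the 5950X at the same retail price.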

Then it means the CPU price is waaaaaay too high! :D
 

You've priced up the materials but forgotten about labour and operating costs. AMD spend $2.85 billion per year on R&D. I don't know how many years it took to design RDNA, but as an example Raja Koduri started work on Arc in 2017, so up to 5 years. That's many billions of dollars in design costs which need to be recovered in GPU sales before they even turn a profit. AMD's head office in Santa Clara, California has 18,924 employees, so that's probably close to a billion dollars per year just to keep the office running.
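A back-of-envelope on what fixed costs like that mean per unit; the per-card gross profit below is a pure guess, and of course only a slice of AMD's R&D is GPU-related:

```python
# How many cards it would take to cover a year of R&D spend, given a guessed
# gross profit per card. Purely illustrative.
annual_rnd_usd = 2.85e9          # AMD's stated yearly R&D (whole company)
gross_profit_per_card = 150      # assumed $ contribution per card toward fixed costs

cards_needed = annual_rnd_usd / gross_profit_per_card
print(f"~{cards_needed / 1e6:.0f} million cards to cover one year of R&D")
```

It comes out at around 19 million cards on those assumptions, which is the point: the BOM is only part of the story.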
 
Wogh wogh

So they cut corners and lowered the Infinity Cache?
Pretty much. It looks like a repeat of RDNA 2, i.e. max out the margins and sell a more modest quantity of cards while the rest of the wafer capacity goes towards CPUs. I'm guessing they'll remain at <=30% market share vs Nvidia. It doesn't look like AMD are willing to have that battle just yet. I'm also speculating this means they'll still be firmly behind Nvidia when it comes to RT (and AI, that's without question). :(
 
If they get +50% performance per watt (as they told their investors) they should do well, unless they massively cut TDPs.

For VR, I prefer the frame timing of my 6800XT over my 3080Ti, but the 3080Ti has just enough grunt to run my sims at night with lowered settings and the 6800XT can't quite do it.

The 6800XT basically "feels" better in VR up to the point where it can't hold 90fps, then it's pretty bad. (I think Nvidia's reprojection works better so the 3080Ti remains playable when it starts to struggle)

Another 50% (for either card) should give me the grunt to run at night. If RDNA3 maintains the same solid frame pacing, and the price is good, AMD could get another sale from me.
 

Everyone said there was no way AMD could double the performance of the 5700XT, because it would have to be too big and use too much power.

Well, they did, and it's not a massive 600-watt GPU, is it? Despite being on the same 7nm node.

Let's not fall into the same trap. AMD are on such a roll right now with developing new technologies that the traditional way of measuring up the size of the new chip vs the old one and deducting a node shrink is out of the window.
The reason it has become so predictable is that no one is really innovating any more. Well, Nvidia are to some extent.
But in the last few years AMD certainly have; they are building a war chest of technologies and IP that gives them an answer for anything. They are turning what we have been used to for the last 20 years, since AMD were fighting just to survive, on its head. Today it's more like the 1990s, the only difference being that back then there were about 10 companies innovating and pushing boundaries. It's like AMD went into a coma in 2005 and woke up again in 2017 to get right back to it.

You don't know what they are going to do next, that's the fun of it, just enjoy it :)
 
We don't know that they cut corners; it could be that they simply landed on the most efficient permutation, assuming all of this is true. And what makes this guy's guess better than Greymon, Kopite(marketing dept.)kimi, RGT, MLID, or any other person with an opinion?

What we need to know is whether AMD intend to pass any of this price efficiency on to the consumer, or whether they intend to bank the lot.
 

It's been estimated that Intel has already spent $3.5 billion on its desktop Arc GPUs, even though it has basically sold nothing yet.

They would need to sell at least 10 million GPUs at $350 a pop just to recover the existing sunk costs.

He also forgot about transistors; you can't just say that one 200mm² piece of silicon is worth the same as another 200mm² piece of silicon. Not only can they be totally different products in different markets with different development costs, but even if you remove all those factors, the number of transistors on each piece of silicon can vary, so yields can vary, and therefore the cost to build varies too.

I'd think that much of the current pricing is market related, even though there are many factors at play. Zen 3 CPUs are pricey because they can be; they are still selling even at inflated prices, so why lower them? Whereas AMD have a harder time moving GPUs, so those need a lower price.
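On the yield point, a common way to model it is a simple Poisson (or negative-binomial) yield curve, where the fraction of good dies falls off with die area times defect density, so cost per good die rises faster than area alone would suggest. A minimal sketch with an assumed defect density:

```python
import math

def poisson_yield(die_area_mm2, defects_per_cm2=0.1):
    """Simple Poisson yield model: probability a die has zero defects."""
    defects_per_die = (die_area_mm2 / 100) * defects_per_cm2  # mm^2 -> cm^2
    return math.exp(-defects_per_die)

for area in (80, 335, 520, 628):
    print(f"{area}mm^2 die: ~{poisson_yield(area):.0%} yield at 0.1 defects/cm^2")
```

So a big die both gives you fewer candidates per wafer and loses a larger share of them to defects, which is one reason equal areas of silicon don't cost the same to turn into sellable chips.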
 