Any news on the 7800 XT?

That was the point of your argument: that Navi 31 chiplets are cheaper to manufacture than Nvidia's comparable monolithic design (which is AD103). It's the only way to count it as a win, other than as an engineering exercise (a reason you added later, after the post I replied to), but you provided neither evidence nor a supporting argument, and later stated clearly that you didn't know.



Because it's a larger and far more complex design.

I'm done with this. You can count that as another win if you like.

That was the point of your argument: that Navi 31 chiplets are cheaper to manufacture than Nvidia's comparable monolithic design (which is AD103). It's the only way to count it as a win, other than as an engineering exercise (a reason you added later, after the post I replied to), but you provided neither evidence nor a supporting argument, and later stated clearly that you didn't know.

Again, no, I did not say that; this is you reading meaning into something I said. I don't know what the costs are for 5nm and 6nm with an overall die area that's larger than the 4nm die.
Is it? You're just making blanket statements here. You don't know the costs, and I don't either.

Because it's a larger and far more complex design.

Now you're going to have to cite costs. You're doing exactly what you falsely accused me of. Right out of the Goebbels playbook: "accuse them of what you do".
 
Out of interest, here are the material costs based on these wafer costs.

4nm is stated as $18,000 to $20,000 per wafer, so I've gone with $19,000. 6nm isn't cited, but 5nm is $16,000 and 7nm is $10,000, so again I've gone for the middle at $13,000.


AD103, 379mm²: 147 per wafer: $129 each.
Navi 31 GCD (logic die), 300mm²: 190 per wafer: $84 each.
MCDs, 37mm²: 1,645 per wafer: $7.90 each.

6 MCDs = $47; plus the $84 GCD = $131.

AD102 is $211 each; you get 90 of those out of a 300mm wafer.

So the RX 7900 XTX and RTX 4080 cost about the same in wafer terms; however, the more complex packaging for the 7900 XTX will cost a chunk more. How much? Probably only AMD knows. It's also a more expensive card to build, with 24GB of VRAM vs 16GB on the 4080. Nvidia are currently selling the 4080 for 18% more.

Someone said there are 200 million GPUs on Steam; I don't know how true that is, but if we go with it:

4080: 0.47% = 940,000 GPUs, $1.13bn in revenue at $1,200
7900 XTX: 0.17% = 340,000 GPUs, $340m in revenue at $1,000
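
For anyone who wants to sanity-check the arithmetic, here's a minimal sketch reproducing it. The wafer prices, dies-per-wafer counts and the 200M Steam figure are the assumptions from this post, not official numbers, and yield and packaging are ignored:

```python
# Raw silicon cost per die plus the Steam revenue estimate, using the
# assumed wafer prices and die counts from the post above.

def cost_per_die(wafer_price: float, dies_per_wafer: int) -> float:
    """Wafer price split evenly across dies; ignores yield and packaging."""
    return wafer_price / dies_per_wafer

ad103  = cost_per_die(19_000, 147)      # 4nm, 379mm^2 -> ~$129
gcd    = cost_per_die(16_000, 190)      # 5nm, ~300mm^2 -> ~$84
mcd    = cost_per_die(13_000, 1_645)    # 6nm, ~37mm^2 -> ~$7.90
navi31 = gcd + 6 * mcd                  # GCD + 6 MCDs -> ~$131
ad102  = cost_per_die(19_000, 90)       # 4nm -> ~$211

print(f"AD103 ${ad103:.0f}, Navi 31 ${navi31:.0f}, AD102 ${ad102:.0f}")

# Revenue estimate from an assumed 200M GPUs on Steam.
steam_gpus = 200_000_000
for name, share, price in [("4080", 0.0047, 1_200), ("7900 XTX", 0.0017, 1_000)]:
    units = steam_gpus * share
    print(f"{name}: {units:,.0f} units, ${units * price / 1e9:.2f}bn at ${price:,}")
```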
 
Out of interest, here are the material costs based on these wafer costs.
...
So the RX 7900 XTX and RTX 4080 cost about the same in wafer terms...
A 4080 costs between $300 and $350 to make, so I'd say the Navi 31 cards cost around $400. AMD wouldn't have known Nvidia was going to up the 4080 price by $500, so AMD would have targeted around $700 as the final selling price to get a decent profit from Navi 31, and based production costs around that.
 
The final price is not what AMD get for it; there are still packaging and shipping costs before they sell it into a supply chain, who take their cut of that final selling price, and then the retailer takes their cut.

The 4080 was launched in November 2022, the 7900 XTX in December.

I think AMD will have wanted the same price as the 6900XT, $999.99.

Of course, the real cost is the development cost.
 
...
AD103, 379mm²: 147 per wafer: $129 each.
Navi 31 GCD (logic die), 300mm²: 190 per wafer: $84 each.
MCDs, 37mm²: 1,645 per wafer: $7.90 each.

6 MCDs = $47; plus the $84 GCD = $131.

AD102 is $211 each; you get 90 of those out of a 300mm wafer.
...
You'd really need to factor defect density into those costs as well; the bigger your dies, the more a defect pushes up your cost.

Basically, if you had something like a 10% defect rate across a 12" wafer, a 400mm² die has a much bigger chance of catching a defect and being non-working than a 40mm² die.

You can see what I mean by playing around with this.
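
For a rough idea of the effect, here's a minimal sketch of the standard Poisson yield model, Y = e^(-D·A). The defect density is an assumed illustrative value, not a TSMC figure, and salvaging partly defective dies as cut-down SKUs is ignored:

```python
import math

def poisson_yield(defect_density_per_cm2: float, die_area_mm2: float) -> float:
    """Fraction of dies expected to be defect-free: Y = exp(-D * A)."""
    return math.exp(-defect_density_per_cm2 * die_area_mm2 / 100.0)

D0 = 0.1  # assumed defects per cm^2, purely for illustration
for area_mm2 in (400, 40):
    y = poisson_yield(D0, area_mm2)
    print(f"{area_mm2}mm^2 die: {y:.1%} yield -> effective cost x{1 / y:.2f}")
```

At that assumed defect density the 400mm² die yields about 67% against about 96% for the 40mm² die, so the big die's effective per-good-die cost multiplies by roughly 1.5x versus 1.04x.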
 
You'd really need to factor defect density into those costs as well; the bigger your dies, the more a defect pushes up your cost.
...

Yes, that is true.

But I'm not going to do it all again :D I do think Navi 31 is more expensive to make than AD103, but with everything taken into consideration there isn't more than 10% in it.

The 4080 is certainly a well-balanced GPU, at least compared to AD102, which is nearly 60% larger for 25% more performance. Nvidia got the die size down by cutting out a chunk of the cache, which seems to hurt its relative RT performance, and removing 2 of its IMCs, which limits it to 16GB unless they use the clamshell approach to give it 32GB, which they aren't going to do.

Others might say $1,200 for a 16GB GPU is a bit stingy... but it's monolithic, and to get the size down that's what they have to do.
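
To put a rough number on the yield point from the previous post, here's a minimal sketch combining the earlier per-die costs with the Poisson yield model. The wafer prices, die counts and defect density are the assumed figures from upthread, and salvaging partial dies as cut-down SKUs (which softens the penalty for big monolithic dies) is ignored:

```python
import math

def good_die_cost(wafer_price: float, dies_per_wafer: int,
                  die_area_mm2: float, d0_per_cm2: float = 0.1) -> float:
    """Cost per *working* die under a simple Poisson yield model."""
    yield_frac = math.exp(-d0_per_cm2 * die_area_mm2 / 100.0)
    return wafer_price / (dies_per_wafer * yield_frac)

ad103 = good_die_cost(19_000, 147, 379)            # monolithic
navi31 = (good_die_cost(16_000, 190, 300)          # GCD
          + 6 * good_die_cost(13_000, 1_645, 37))  # + 6 MCDs
print(f"AD103 ~${ad103:.0f} vs Navi 31 silicon ~${navi31:.0f} (pre-packaging)")
```

On those assumptions the chiplet silicon comes out a little cheaper per working part, but Navi 31's extra packaging cost would eat into that, which is consistent with the "not more than 10% in it" estimate.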
 
Yes, that is true.

But I'm not going to do it all again :D I do think Navi 31 is more expensive to make than AD103, but with everything taken into consideration there isn't more than 10% in it.
...
I'm sure I've seen a cost breakdown including defective chips showing the XTX to be a bit cheaper, either on Twitter or in a YT vid.
 
You'd really need to factor defect density into those costs as well; the bigger your dies, the more a defect pushes up your cost.
This, for RDNA3.

RDNA3 is a stepping stone for future AMD graphics product stacks. The likelihood is that, in future, they hope all they need to do is add more dies to a card for more performance. Yes, it needs special board layouts for the Infinity Fabric links between the components, but it's still ultimately cheaper to produce a single chip across the range of cards and then add or remove dies to suit the model range.

3dfx did this with their later cards, just adding more GPU chips for more performance. It did kinda work, but the chips were already outdated. I'm surprised Nvidia didn't improve on this when they bought what was left of 3dfx, as I believe the Voodoo 5 6000, which was never released, was supposed to be enough to beat the GeForce 2 and stay competitive with the GeForce 3 (albeit with limitations on T&L etc.).
 
The 4080 is certainly a well-balanced GPU, at least compared to AD102, which is nearly 60% larger for 25% more performance. Nvidia got the die size down by cutting out a chunk of the cache, which seems to hurt its relative RT performance, and removing 2 of its IMCs, which limits it to 16GB unless they use the clamshell approach to give it 32GB, which they aren't going to do.
The 4090 is 33% faster in raster and 40% faster in RT, and it'll probably be even further ahead when next-gen CPUs arrive, as some games do bottleneck even at 4K. You also have to remember the 4090 is cut down by quite a lot, using only 89% of the full die, so Nvidia could release a full-fat 4090 Ti/Titan that would be another 15% faster.
 
Yeah, 27% at 4K; near enough. Still a 57% larger die.

 
TPU's results are not wrong, but I also cringe a bit every time I see them posted, in the same way that one could say a 4090 was twice as fast as a 7900 XTX if all you did was benchmark some cherry-picked path-traced game. Not that TPU is cherry-picking, but the point is that sample bias is real.

The reason TPU's numbers often look lower than other sites' is their game sample, and also their test rig. The RTX 4090 review they pull their numbers from was done using a 5800X, and they do not re-test all their old GPUs every time a new release comes out; they do what LTT accused other reviewers of, which is pulling old data for comparisons. The 5800X combined with their game sample means there are quite a few games where the 4090 appears to perform quite poorly, which pulls down the averages. TPU still uses games like Witcher 3, Divinity 2, BFV, Civ VI, TWHIII and Hitman 3 to benchmark GPUs; that sample is old and very CPU-limited, and with a 5800X the 4090 naturally performs very poorly in those games, which feeds into the average performance chart that's commonly posted.
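
To illustrate how a few CPU-limited titles drag a summary chart down, here's a minimal sketch with invented numbers (these are not TPU's data):

```python
from statistics import geometric_mean

# Hypothetical relative performance (4090 vs 7900 XTX) per game.
gpu_limited = [1.35, 1.30, 1.33, 1.38, 1.32]  # 4090 ~33% ahead when GPU-bound
cpu_limited = [1.02, 1.05, 1.03]              # both cards capped by a slow CPU

print(f"GPU-limited games only: {geometric_mean(gpu_limited):.2f}x")
print(f"Full sample:            {geometric_mean(gpu_limited + cpu_limited):.2f}x")
```

With just three CPU-bound games in the mix, the averaged lead drops from roughly 1.34x to about 1.21x, even though nothing about the GPUs changed.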
 
Yeah, 27% at 4K; near enough. Still a 57% larger die.

TPU used a 5800X for 4090 testing.
They also used a weird RAM config: DDR4-4000 CL20.
 
They used a 13900K unless stated otherwise, like the 5800X you can see in the slide.

 
If the 6750 GRE is a thing, likely £350.

Let's be honest: £375.

7700: £450... probably £500
7800: £600... I'll go £650

The pessimistic pricing of cards in recent years has killed any optimism or excitement I have for launches now.

I do feel like at this point the most accurate method is to think of the dumbest pricing possible, and you're almost there.
 
I know people don't want to hear it, but Nvidia's mindshare is so strong that for AMD to make serious market-share gains they have to offer 30% more performance per dollar to make people switch. That could come in two forms: either you launch an AMD GPU with the same performance as Nvidia's but priced 30% lower, or you launch at the same price but with 30% more performance.

The reason RDNA3 isn't doing much to hurt Nvidia is that so far AMD has chosen to launch at the same performance tier as Nvidia and only price it 10% cheaper, when they need to price it 30% cheaper or offer more performance.
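
As a minimal sketch of that perf-per-dollar arithmetic (the baseline price and performance numbers are hypothetical, purely for illustration):

```python
# Compare the two options above against a hypothetical Nvidia baseline.
def perf_per_dollar(perf: float, price: float) -> float:
    return perf / price

nvidia = perf_per_dollar(100, 1_200)             # baseline: perf 100 at $1,200
same_perf_cheaper = perf_per_dollar(100, 840)    # same perf, priced 30% lower
same_price_faster = perf_per_dollar(130, 1_200)  # 30% more perf, same price

print(f"30% cheaper: {same_perf_cheaper / nvidia:.2f}x perf/dollar")  # ~1.43x
print(f"30% faster:  {same_price_faster / nvidia:.2f}x perf/dollar")  # 1.30x
```

Strictly, the two options aren't quite equal: a 30% price cut works out to about 43% more performance per dollar, while a 30% perf/dollar advantage at equal performance only needs a price around 23% lower.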
 