When will GPU prices go down?

It's a big issue with gaming in general now - I was looking at buying D4, but I looked at the pricing and walked away. Nearly £100 for the full version, plus paid battle pass, microtransactions, etc.

I rarely buy a game at release, having been burned a few times.

I'll get Starfield, which has reasonable min specs, but doubt I'll ever have that "OMG I must upgrade so I can play Bioshock" feeling again if pricing is like this.

If it costs the same as a monthly mortgage payment, it's just not on.
 
A significantly higher throughput of sales at a slightly lower margin would equate to higher overall revenue. Perhaps also some incidental brand affinity from buyers. Also, depending on manufacturing levels, it'll move stock, which is a business liability.
That's sales, not market share; I assume they've run the numbers and have a fairly good idea of where the optimal price for them is in terms of revenue.
 
Part of this must be their massive misjudgement in going chiplet with both Navi 31 and Navi 32. The latter is still MIA, but rumours were always that it would be chiplet.

A far saner, safer, risk-contained line-up would have been:
Navi 30 - huge 500mm²-plus GCD selling at a halo price
Navi 31 - 5nm monolith, maybe 450mm², selling at the 6800/6800 XT/6900 XT prices
Navi 32 - 5nm monolith, maybe 350mm², selling at the old 6700 XT prices
Navi 33 - 6nm monolith. As is, or maybe a bit larger to offer an actual improvement on the 6650 XT.

Instead, it is almost like some engineer went "hey we've cracked GPU chiplets, let's show off".


AMD are certainly not our friends, but there is "no good" and then there is Nvidia.

Think of any possible anti-consumer move and Nvidia have probably written the textbook on it. Bumpgate was them being chancers by not standing by their products, but at least choosing the wrong solder wasn't done on purpose. (Not that that made much difference to the millions of people affected.)

It is all the other shady things they did: when they started, when there were only two left standing (so them and ATI), by cheating until caught, by software lock-in and sponsored nonsense like the excess tessellation, by planting paid shills in social media, by leaking stuff about competing products and spreading it like crazy (Hawaii is still remembered for the poor stock cooler to this day - everyone has forgotten that that little 430mm² die decisively beat the 560mm² GK110).

I don't like it when AMD starts playing dirty with sponsorships etc., but it is important to remember who started this stuff. It's like the adage "all politicians are crooked and liars" - but if you vote for a proven crook and liar, expect to be more than entertained!

I used to care more when I was younger. I'm getting close to 40 and just don't care as much any more. I'm happy to reward companies that do better - I did with AMD and Ryzen, for example, and was happy to get away from Intel. But now I honestly would not mind going Intel again in a few years' time when I upgrade, unless AMD actually start to price their CPUs better.

Basically it comes down to this: I just go with what I fancy at the time and have no loyalty any more, as I feel they have zero loyalty to their customers. I already rewarded AMD plenty in the past; I have spent more money on AMD hardware than on Intel or Nvidia.
 
I rarely buy a game at release, having been burned a few times.

I'll get Starfield, which has reasonable min specs, but doubt I'll ever have that "OMG I must upgrade so I can play Bioshock" feeling again if pricing is like this.

If it costs the same as a monthly mortgage payment, it's just not on.

I think Starfield will be more taxing on CPUs than dGPUs IMHO, especially if you start building stuff in the game or go into areas with more NPCs.
 
Has anyone got any evidence as to how much the latest cards cost to manufacture vs the retail pricing, so we actually know the profits involved? Surely Nvidia benefit much more from economies of scale and so are likely to be profiting more, especially as their fan base will pay the asking price regardless of how stupid it is. You can't blame AMD for following the pricing trend, as their development and manufacturing costs are likely to be greater, so they need to make hay while the sun shines. But without some idea of the actual facts and figures, especially given increasing overheads and threatened restrictions on silicon, it's difficult to reach an informed understanding of the profiteering, if any, involved.
 
Has anyone got any evidence as to how much the latest cards cost to manufacture vs the retail pricing, so we actually know the profits involved? Surely Nvidia benefit much more from economies of scale and so are likely to be profiting more, especially as their fan base will pay the asking price regardless of how stupid it is. You can't blame AMD for following the pricing trend, as their development and manufacturing costs are likely to be greater, so they need to make hay while the sun shines. But without some idea of the actual facts and figures, especially given increasing overheads and threatened restrictions on silicon, it's difficult to reach an informed understanding of the profiteering, if any, involved.
Only some of the info is public.
Die sizes are easy.
Defect rates per cm² are almost public as at some stages TSMC etc. all like to boast about them.
Therefore yields per wafer (and totally ignoring any partially good dies) can be obtained from the likes of https://isine.com/resources/die-yield-calculator/
Cost per wafer is almost top secret.
I've done some of this in the past, and updated for the current gen we'd get something like this:
[image: table of estimated cost per good die for current GPU/CPU dies at assumed wafer prices]

That's before VRAM, board etc., but it gives us an idea.
It also shows how much more margin there is in CPUs.
No idea on actual wafer costs, but 7nm must now be way down from its heights and $6k is probably not far off. Ampere is not in the above, but Samsung's 8nm must have been cheap, or else why bother?
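
For anyone who wants to play with the numbers themselves, here's a minimal Python sketch of the same back-of-envelope maths: gross dies per wafer from die area, yield from a simple Poisson defect model (calculators like the one linked above offer fancier models too), and cost per good die from an assumed wafer price. The $17,000 wafer cost and 0.07 defects/cm² below are guesses for illustration, not published figures.

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Gross die count: wafer area / die area, minus a standard edge-loss term."""
    return int(math.pi * (wafer_diameter_mm / 2) ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2, defects_per_cm2):
    """Fraction of dies with zero random defects (simple Poisson yield model)."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)

def cost_per_good_die(die_area_mm2, wafer_cost_usd, defects_per_cm2):
    good_dies = dies_per_wafer(die_area_mm2) * poisson_yield(die_area_mm2, defects_per_cm2)
    return wafer_cost_usd / good_dies

# Assumed numbers for illustration only: $17,000 per 5nm-class wafer and
# 0.07 defects/cm² are guesses, not published figures.
for name, area_mm2 in [("AD102", 609), ("AD103", 379), ("Navi 31 GCD", 300)]:
    print(f"{name}: ~${cost_per_good_die(area_mm2, 17_000, 0.07):.0f} per good die")
```

With those assumptions a 609 mm² die comes out at roughly 65% yield and somewhere around $290 per good die, which is in the same ballpark as the table above - and it also shows the estimates are only ever as good as the wafer-cost guess.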
 
Probably go up, to be honest. I'll hope the 7900 XT falls dramatically in price. My next build will most likely go in the Streacom SG10, which probably only works with reference cards.
 
Cost per wafer is almost top secret.
Not sure how much faith you want to put into it, as, like your handy graph (well done BTW :)), they're educated estimates...
[image: chart of estimated wafer prices by process node]
 
Only some of the info is public.
Die sizes are easy.
Defect rates per cm² are almost public as at some stages TSMC etc. all like to boast about them.
Therefore yields per wafer (and totally ignoring any partially good dies) can be obtained from the likes of https://isine.com/resources/die-yield-calculator/
Cost per wafer is almost top secret.
I've done some of this in the past, and updated for the current gen we'd get something like this:
...
That's before VRAM, board etc., but it gives us an idea.
It also shows how much more margin there is in CPUs.
No ideas on actual wafer costs but 7nm must now be way down from its heights and $6k is probably not far off. Ampere is not on the above but Samsung's 8nm must have been cheap, or else why bother?

Does 65% yield mean 65% fully working dies, 65% dies that could be 4090s after cutting down, or 65% total viable dies that might need to be cut down to e.g. 4070 levels to work?
 
Not sure how much faith you want to put into it, as, like your handy graph (well done BTW :)), they're educated estimates...
[image: chart of estimated wafer prices by process node]
Okay, taking $17,000 and making that column a bit narrower (everyone knows what pw means, right?), and adding AD107 - I've never gone that low before, but then this gen the x107 is the x060 cards :(

[image: updated cost-per-good-die table at $17,000 per wafer, now including AD107]

Plenty of margins either way.
R&D, masks etc. have gone up, but the biggest mystery is precisely that: AMD's R&D and other fixed costs have gone up too*, yet after those huge fixed costs they seem totally unwilling to go for volume. It's almost like they are playing in the dGPU market just to get experience for the next console update.

The choice to only have Navi 33 on 6nm, and not to go for a 400mm² 6nm monolith as a true volume part, is very puzzling from my armchair silicon strategist's PoV!

*Yes, chiplets might save something there.
 
Does 65% yield mean 65% fully working dies, 65% dies that could be 4090s after cutting down, or 65% total viable dies that might need to be cut down to e.g. 4070 levels to work?
65% full dies.
Therefore yields per wafer (and totally ignoring any partially good dies)
For such a huge die they will be able to salvage tons.
Currently they don't have any other consumer RTX cards using that die, but Nvidia are great at salvaging and have the volumes to be able to, so eventually they will have things to do with some or many of those 35%.
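
Just to put a rough number on "tons": under the same Poisson assumptions as the table above, you can ask how many defects the failed dies actually carry. This is only a sketch - it optimistically assumes every defect lands in a redundant block (an SM, a cache slice etc.) rather than in shared logic, which would scrap the die regardless, and the cut-off of three fusable blocks is made up.

```python
import math

def defect_count_probs(die_area_mm2, defects_per_cm2, max_defects=8):
    """Poisson probabilities that a die carries exactly 0, 1, 2, ... random defects."""
    lam = defects_per_cm2 * die_area_mm2 / 100.0   # expected defects per die
    return [math.exp(-lam) * lam ** k / math.factorial(k) for k in range(max_defects + 1)]

# Illustration only: 609 mm² die and 0.07 defects/cm² (the density that gives ~65% perfect dies),
# with a hypothetical cut-down SKU that can fuse off up to 3 defective blocks.
probs = defect_count_probs(609, 0.07)
perfect = probs[0]
salvageable = sum(probs[1:4])   # dies with 1-3 defects
print(f"perfect: {perfect:.0%}, salvageable as cut-down: {salvageable:.0%}, "
      f"worse than that: {1 - perfect - salvageable:.1%}")
```

Under those (very optimistic) assumptions almost all of the failed 35% carry only one to three defects, which is why salvaging "some or many" of them later in the generation is entirely plausible; in reality a defect in non-redundant logic still kills the die, so the real salvage rate will be lower.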
 
65% full dies.

For such a huge die they will be able to salvage tons.
Currently they don't have any other consumer RTX cards using that die, but Nvidia are great at salvaging and have the volumes to be able to, so eventually they will have things to do with some or many of those 35%.

I thought they used failed dies for lower cards/chips? That is one of the main reasons top-tier chips are so expensive, as there is no margin for failure.
 
65% full dies.

For such a huge die they will be able to salvage tons.
Currently they don't have any other consumer RTX cards using that die, but Nvidia are great at salvaging and have the volumes to be able to, so eventually they will have things to do with some or many of those 35%.

Yeah, this is what I was thinking. You see the evidence of it later in the generation, e.g. the 3080 12GB, which was really a cut-down 3090: they sat on those defective-but-working chips until they had enough of them to create a product, and similar with the bigger Ti variants as well.

I suppose it's probably difficult to get accurate figures around binning etc. though to estimate the costs/savings of doing this.
 
Not sure how much faith you want to put into it, as, like your handy graph (well done BTW :)), they're educated estimates...
[image: chart of estimated wafer prices by process node]
An awful lot of stuff is going up in price. The next iPhone is going to be expensive.
 
Does 65% yield mean 65% fully working dies, 65% dies that could be 4090s after cutting down, or 65% total viable dies that might need to be cut down to e.g. 4070 levels to work?
The vast majority of the time they'll just end up in the bin as waste. While you could recycle a failed die and use it for a lower-tier product, it's just not cost effective to do that most of the time; the time it would take for a technician to find exactly what part isn't working is far better spent on something else, especially when you're talking about something that may only cost $300.

You do get recycled silicon, the 5600X3D and IIRC one of the 20 or 30 series cards (GN did a video on it) are examples of recycling higher end parts to use on lower tier products, but they tend to be the exception rather than the rule.
The choice to only have Navi 33 on 6nm, and not to go for a 400mm² 6nm monolith as a true volume part, is very puzzling from my armchair silicon strategist's PoV!
My suspicion is they went with chiplets now to iron out problems for the future; they had to pull the trigger at some point due to the increasing costs of smaller nodes and the maximum reticle size. I dread to think what a 300mm² 3nm chip would cost.
An awful lot of stuff is going up in price. The next iPhone is going to be a expensive.
Yeah, but it's Apple, so if it didn't cost the earth people would feel short-changed. ;)
 
The vast majority of the time they'll just end up in the bin as waste. While you could recycle a failed die and use it for a lower-tier product, it's just not cost effective to do that most of the time; the time it would take for a technician to find exactly what part isn't working is far better spent on something else, especially when you're talking about something that may only cost $300.

You do get recycled silicon, the 5600X3D and IIRC one of the 20 or 30 series cards (GN did a video on it) are examples of recycling higher end parts to use on lower tier products, but they tend to be the exception rather than the rule.

My suspicion is they went with chiplets now to iron out problems for the future; they had to pull the trigger at some point due to the increasing costs of smaller nodes and the maximum reticle size. I dread to think what a 300mm² 3nm chip would cost.

I'm fairly sure they'll automate most of the testing. I would be incredibly surprised if they didn't just have a test rig that can verify e.g. which SMs, memory controllers etc. are functioning within spec or not, as that's all they really need to do. They don't need to find the individual transistor or whatever that is at fault. Each "only" $300 part will add up to millions.
 
I'm fairly sure they'll automate most of the testing. I would be incredibly surprised if they didn't just have a test rig that can verify e.g. which SMs, memory controllers etc. are functioning within spec or not, as that's all they really need to do. They don't need to find the individual transistor or whatever that is at fault. Each "only" $300 part will add up to millions.
They do, but that's just a pass/fail test, and while they may know a particular logic block has failed, it's not as simple as just fusing that part off - not unless you designed it in the first place for redundancy, i.e. so you can disconnect one logic block without it causing problems further up/downstream.

Like I said, they can design, test, and fuse off failed parts, but it boils down to whether it's more cost effective to do that or just throw them away. (e: I'll add that we're talking about GPU silicon here; down-binning of CPUs is more common AFAIK).
 
They do, but that's just a pass/fail test, and while they may know a particular logic block has failed, it's not as simple as just fusing that part off - not unless you designed it in the first place for redundancy, i.e. so you can disconnect one logic block without it causing problems further up/downstream.

Like I said, they can design, test, and fuse off failed parts, but it boils down to whether it's more cost effective to do that or just throw them away.

GPUs are pretty well suited to designing for redundancy because they're highly parallel processors, which means lots of the same type of unit doing work at the same time. If you look at an architecture block diagram you'll basically see a lot of blocks that are effectively copy-pasted a bunch of times.
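
To make that concrete, a test rig only really needs a per-block pass/fail map and a set of bin thresholds. A hypothetical sketch below - the SKU names, the 144-SM/12-MC die shape and the cut-offs are made up for illustration (roughly AD102-like), not Nvidia's actual bins:

```python
# Hypothetical bins for a die with 144 identical SMs and 12 memory controllers;
# names and thresholds are illustrative, not actual product configurations.
BINS = [
    ("full-fat SKU", 142, 12),   # needs nearly every SM and all 12 MCs working
    ("cut-down SKU", 128, 12),   # a 4090-style config with 16 SMs fused off
    ("salvage SKU",  112, 10),   # later-in-generation part built from worse dies
]

def bin_die(working_sms: int, working_mcs: int) -> str:
    """Assign a die to the best bin its working-unit counts still satisfy."""
    for name, min_sms, min_mcs in BINS:
        if working_sms >= min_sms and working_mcs >= min_mcs:
            return name
    return "scrap"

print(bin_die(working_sms=133, working_mcs=12))  # -> cut-down SKU
print(bin_die(working_sms=108, working_mcs=12))  # -> scrap
```

Of course the thresholds only work because the blocks really are interchangeable copies with the isolation designed in - which is the "designed for redundancy" caveat from the post above.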
 
GPUs are pretty well suited to designing for redundancy because they're highly parallel processors, which means lots of the same type of unit doing work at the same time. If you look at an architecture block diagram you'll basically see a lot of blocks that are effectively copy-pasted a bunch of times.
They're not, because one logic block depends on the up/downstream logic blocks. If down-binning GPUs was common you wouldn't have AD107, 6, 4, 3, 2; you'd have things like AD102-1 or AD103-2, you'd have 609 mm² dies being used on 4080s along with the normal 379 mm² dies, and you'd have varying sizes of boards to accommodate all the different die sizes along with their differing pinouts.
 