AMD Navi 20 Faster Than An Nvidia RTX 2080 Ti?

Transistors have everything to do with it: the fewer you have, the smaller the die. If so many were not dedicated to RTX and DLSS, the dies would be smaller and cheaper.

As to efficiency, I am talking raw performance as measured in game or synthetic benches. On that front there is very little to separate Pascal, Turing or Volta.

Nvidia have screwed up badly with Turing, offering features that are extremely poor value for money.



The transistors have no cost in themselves, only die area does. As I said, DLSS and RTX have added about 8% to the die area, and the die itself is only about 35-40% of the GPU manufacturing cost (which itself is at most about a third of the selling price). So adding RTX and DLSS has increased production costs by well under 5% in terms of die size. The R&D cost of RTX is another matter and a different discussion.
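To make that chain of percentages concrete, here is a quick back-of-envelope sketch; the 8% die-area figure, the 35-40% die share and the roughly one-third manufacturing share are the estimates quoted in this thread, not published Nvidia numbers:

```python
# Back-of-envelope cost chain for the RTX/DLSS die-area argument.
# All inputs are the rough estimates quoted in the thread, not official figures.

die_area_increase   = 0.08    # RTX + Tensor cores add ~8% die area
die_share_of_cost   = 0.375   # die is ~35-40% of manufacturing cost; take the midpoint
cost_share_of_price = 1 / 3   # manufacturing cost is at most ~1/3 of the selling price

extra_manufacturing_cost = die_area_increase * die_share_of_cost
extra_selling_price      = extra_manufacturing_cost * cost_share_of_price

print(f"Extra manufacturing cost: ~{extra_manufacturing_cost:.1%}")   # ~3.0%
print(f"Extra share of selling price: ~{extra_selling_price:.1%}")    # ~1.0%
```

Even with some extra allowance for the yield penalty on a larger die, that stays comfortably under the 5% figure.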



Those extra transistors are not only used for RTX and DLSS; they are used for a lot of things, as I mentioned. The extra cache sizes alone probably account for an extra billion transistors. Turing is a very different architecture to Pascal, even if the raw performance numbers don't always bear that out, because there are always other bottlenecks. Under certain scenarios Turing shows impressive IPC gains over Pascal. What we fail to see in Turing is the additional ~50% gain afforded by simply moving to a new process; instead we get the roughly 30% gain from a new architecture in isolation.

https://www.techpowerup.com/reviews/Zotac/GeForce_RTX_2080_AMP_Extreme/31.html
The 2080 is 30% faster than the 1080 at 4K (66/94%).
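As a rough illustration of what that missing process jump would have meant, using the ~30% and ~50% figures above (ballpark numbers, not measurements):

```python
# Rough compounding of architecture and process gains (figures from the post above).

arch_gain    = 1.30   # ~30% from the Turing architecture changes alone
process_gain = 1.50   # ~50% that a full node shrink would typically have added

print(f"Architecture only (what we got):     {arch_gain - 1:.0%}")                 # 30%
print(f"With a node shrink as well (missed): {arch_gain * process_gain - 1:.0%}")  # 95%
```

In other words, with a proper node shrink on top of the architecture changes we would have been looking at something close to a doubling, which is why Turing looks underwhelming generationally.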

That 30% comes from architectural changes and the increased CUDA core count. That requires a significant chunk of those new transistors and the increased die area.

Not least, the 2080 has 15% more CUDA cores than the 1080, and the CUDA cores take up most of the transistor budget. At the same number of CUDA cores, the Turing cores are more complex and use more transistors. The 2080 is faster than the 1080 Ti because of the core improvements and architecture changes.

L1 cache capacity increased from 24KB to 96KB. L2 cache increased from 3MB to 6MB. There are new features like concurrent floating-point and integer math. All of this takes a lot of additional transistors.



You then have design choices as well. Pascal didn't offer double-rate FP16 like Vega does, which will be useful in the future. With the RTX Turing cards, Nvidia uses the Tensor cores to do fast FP16 at rates much higher than double-rate FP16, so the existence of Tensor cores reduces the transistor budget spent in the CUDA cores on FP16 support. Interestingly, with the GTX Turing cards that lack Tensor cores, Nvidia had to add dedicated FP16 support. I don't have time right now, but it would be interesting to look at the 1660 Ti's transistor count and architecture changes for comparison.




Bottom line: DLSS and RTX add well under 5% to Turing's production costs. Turing is expensive partly because production costs are increasing anyway (VRAM is more expensive, for example), much more because of exponentially increasing R&D costs (a 7nm GPU will cost roughly 3x as much as a 16nm GPU to design and bring to production), and simply because Nvidia is increasing its profit margins.

RTX has undoubtedly made Turing more expensive, but that is largely due to marketing, not production.
 

Going from 16nm to 12nm and packing more transistors onto larger dies leads to lower yields and increased cost.

Reading this is quite funny, as my day job is looking down microscopes and working on yield improvement for very small products. :D


There is also a clue to the yields on Turing when you compare the cost of an RTX Titan to an FE 2080 Ti; the only differences are 24GB of VRAM and a full-fat TU102 die.

I suspect fully functioning TU102 chips come at quite a low yield compared to the cut-down ones used in the 2080 Ti.
 
Going from 16nm to 12nm and packing more transistors onto larger dies leads to lower yields and increased cost.
16nm and 12nm are simply marketing terms; the difference is irrelevant to the comparison. 12nm is likely slightly more expensive than 16nm, which in part explains the production cost differences.
You keep mentioning transistors, which are irrelevant to the cost. Transistors can be packed at different densities to achieve different thermal designs, and density is also a result of using different optimization techniques, designs and existing libraries.

You finally get to the relevant part: a large die leads to lower yields. As I discussed earlier, the die area increase due to RTX + DLSS was only 8%. That may reduce yields by somewhat more than 8% (see the rough sketch below), but the entire die cost is only 30-40% of the production cost, and that is for the big dies. On smaller chips the PCB, memory and HSF take up a bigger chunk.
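To put a rough number on "more than 8%", here is a minimal sketch using the classic Poisson yield model; the die sizes and defect density are assumed, illustrative values, not TSMC or Nvidia data:

```python
import math

# Poisson yield model: yield = exp(-defect_density * die_area).
# The defect density and die areas below are illustrative assumptions only.

defect_density = 0.10                   # defects per cm^2 (assumed)
base_area_cm2  = 5.0                    # hypothetical large die without RTX hardware
rtx_area_cm2   = base_area_cm2 * 1.08   # same die with ~8% extra area for RTX/Tensor

yield_base = math.exp(-defect_density * base_area_cm2)
yield_rtx  = math.exp(-defect_density * rtx_area_cm2)

# Cost per good die scales with area (fewer candidates per wafer) and 1/yield.
cost_ratio = (rtx_area_cm2 / base_area_cm2) * (yield_base / yield_rtx)

print(f"Yield without RTX area: {yield_base:.1%}")       # ~60.7%
print(f"Yield with RTX area:    {yield_rtx:.1%}")        # ~58.3%
print(f"Extra cost per good die: {cost_ratio - 1:.1%}")  # ~12.4%
```

That ~12% per-die penalty then gets diluted through the 30-40% die share of total production cost, so the overall effect is a few percent at most; marginal compared to the price differences being discussed.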

So again, transistor count is irrelevant; only die size matters.

Reading this is quite funny, as my day job is looking down microscopes and working on yield improvement for very small products. :D
But you don't look at silicon dies, so your experience is irrelevant.

There is also a clue to the yields on Turing when you compare the cost of an RTX Titan to an FE 2080 Ti; the only differences are 24GB of VRAM and a full-fat TU102 die.

I suspect fully functioning TU102 chips come at quite a low yield compared to the cut-down ones used in the 2080 Ti.

This has always been the case; Titans are always far more expensive. In fact, look at Pascal: yields started out so much worse than Turing's that Nvidia actually released two different Titans and a 1080 Ti that was basically a Titan X.

The price of Titans has absolutely nothing to do with production cost differences relative to the Ti model.



In all of your posts, not once have you actually addressed the increases in CUDA core counts, cache sizes and architecture changes that lead to the transistor count change.

You just have to accept that you had a theory that RTX greatly increased production costs, and that theory is completely bogus. Many things increased production costs; GDDR6 in particular is a big jump in price, as GDDR5X was before that.

You still seem to completely ignore that Nvidia is simply making more profit per Turing GPU sold.
 
That must be a guess; I doubt Nvidia would publish it, and it requires a heck of a lot of time and effort to price up each component from looking at a picture of a PCB and doing research, plus a fair bit of estimation.
 
Give us the figures, or are you just guessing what the margins are for Turing vs Pascal?

Firstly, you are making the argument that the price difference is due to the RTX & DLSS features, so the onus is on you to provide evidence to support your claim.


I have given you figures: the die size increase due to RTX & Tensor cores is 8%.
 
That must be a guess; I doubt Nvidia would publish it, and it requires a heck of a lot of time and effort to price up each component from looking at a picture of a PCB and doing research, plus a fair bit of estimation.


It is a measurement from IR photos of the Turing dies.
 
Firstly, you are making the argument that the price difference is due to the RTX & DLSS features, so the onus is on you to provide evidence to support your claim.


I have given you figures: the die size increase due to RTX & Tensor cores is 8%.

You are the one telling us that Nvidia are making more profit per Turing GPU sold; give us your source and the figures, please.

If you cannot do this, are any of your other points equally baseless?
 
Large dies would definitely raise costs by lowering yields. Look what small dies have done for Ryzen. Add in the lack of competition, and Nvidia seem, quite cleverly, to be testing how high a price the market will stand.

Anecdotally it seems they're at the limit now; add some competition and the prices will tumble. How far? Who knows, but I think it's safe to say they'll have to stay fairly high to cover their costs.

If I were AMD, I'd be trying to hit the £150-£450 market with decent-performing, high-profit cards and leave Nvidia to the crazy high-end, low-volume stuff.
 
Low price and high profit aren't really the same thing. The low-volume high end has always had the highest margins, and the high-volume low end has the narrowest margins.

A really good example is consoles: AMD most likely doesn't make much at all, despite over 100 million units having been built by now.
 
GDDR6 in particular is a big jump in price, as GDDR5X was before that.
Big, but how much? I bet it is not even a three-figure sum, yet the price is nearly double. As I said, most of the price increase on the xx80 Ti is likely pure profit.
 
Low price and high profit aren't really the same thing. The low-volume high end has always had the highest margins, and the high-volume low end has the narrowest margins.

A really good example is consoles: AMD most likely doesn't make much at all, despite over 100 million units having been built by now.

Low manufacturing cost and high volume, at prices the masses can afford, will make the most money overall. I bet a lot of these halo products actually lose money given the tiny volume. Even on the enthusiast forums, hardly anyone is running a 2080 Ti, let alone the Titan version, compared to the size of the market.

£300 is an expensive GPU for your average casual gamer (i.e. most gamers), but they'll sell a huge amount more of those than of an expensive-to-make £1,200 2080 Ti.

As for consoles, that was the really smart move. AMD now have their name associated with gaming in millions of consoles, and developers are working more with their hardware. People always complain that developers focus more on console than on PC; it can only be good for AMD that games end up better optimised for AMD hardware as a result.

They appear to be after the mindshare of the masses, not the "elite" enthusiast gamer.
 
Low price and high profit aren't really the same thing. The low-volume high end has always had the highest margins, and the high-volume low end has the narrowest margins.

A really good example is consoles: AMD most likely doesn't make much at all, despite over 100 million units having been built by now.

But you've forgotten about sales volume: Product A has £10 margin, Product B has £100 margin. Product A outsells Product B 50:1. Which one makes the most profit?

And as you say, consoles are a good example: even if AMD only have £10 margin on every console SoC they've sold to Sony and Microsoft over the years, that means they've still made £1 BILLION in pure profit. Of course it won't be that much because of volume discounts and whatnot, but the cost of those SoCs will already factor in R&D, manufacturing and shipping costs and then have a profit margin added.
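For completeness, a quick sketch of the arithmetic behind both examples (the margins and volumes are the hypothetical figures from these two posts, not real AMD numbers):

```python
# Margin x volume, using the hypothetical figures from the two posts above.

# Product A vs Product B
margin_a, margin_b = 10, 100   # GBP margin per unit (hypothetical)
units_b = 1_000                # arbitrary baseline volume for B
units_a = units_b * 50         # A outsells B 50:1
print(f"Product A profit: £{margin_a * units_a:,}")   # £500,000
print(f"Product B profit: £{margin_b * units_b:,}")   # £100,000

# Console SoCs: £10 margin on 100 million units
print(f"Console SoC profit: £{10 * 100_000_000:,}")   # £1,000,000,000
```

So even a small per-unit margin at console-style volumes dwarfs the profit from a low-volume halo product.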
 
You are the one telling us that Nvidia are making more profit per Turing GPU sold; give us your source and the figures, please.

If you cannot do this, are any of your other points equally baseless?



You are the one claiming that Turing is more expensive because of the large die size caused by RTX, but you haven't provided any evidence of that.

The only evidence we have is that the sum total of the Tensor cores and RTX hardware is only about 8% of the die area. 8% larger dies will have a marginal effect on yields, but that absolutely cannot explain the price differences. The price difference of GDDR6 alone would swamp that small increase.
 
Big, but how much? I bet it is not even a three-figure sum, yet the price is nearly double. As I said, most of the price increase on the xx80 Ti is likely pure profit.


No numbers to hand right now, but a few years ago the total VRAM cost would be around $40 on a high-end GPU; you are now looking at $120, and twice that for HBM2.
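Putting that next to the die-area argument, a rough comparison of the two cost deltas; the total manufacturing cost is an assumed illustrative figure, and the other numbers are the estimates already quoted in this thread:

```python
# Comparing the two cost deltas discussed above, using assumed illustrative numbers.

bom_total = 300.0          # assumed total manufacturing cost of a high-end card, USD
die_share = 0.375          # die is ~35-40% of that (midpoint), per earlier in the thread
die_cost  = bom_total * die_share

die_cost_penalty = 0.12    # ~12% more per good die from the 8% larger die (see yield sketch)
rtx_die_delta    = die_cost * die_cost_penalty

vram_delta = 120.0 - 40.0  # GDDR6 vs the older VRAM cost, figures quoted above

print(f"Extra die cost from RTX area: ~${rtx_die_delta:.0f}")  # ~$14
print(f"Extra VRAM cost (GDDR6):      ~${vram_delta:.0f}")     # ~$80
```

On those assumptions the memory price jump alone is several times the die-area cost attributable to RTX.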
 
Large dies would definitely raise costs by lowering yields. Look what small dies have done for Ryzen. Add in the lack of competition, and Nvidia seem, quite cleverly, to be testing how high a price the market will stand.




No one disagrees that large dies raise costs; the only point of contention is how much of Turing's large die is related to RTX.

The fact is only 8% of the die area is used for RTX; the rest of the increase is due to the 15% increase in CUDA cores and the added functionality that sees the CUDA cores often performing 50% faster than in Pascal. The large increases in L1 and L2 cache sizes alone would account for a large part of the die area increase.
 
No numbers to hand right now, but a few years ago the total VRAM cost would be around $40 on a high-end GPU; you are now looking at $120, and twice that for HBM2.
So pretty much what I said then: most of the extra money they are charging is pure profit. At least this round they can point to the R&D cost of RTX; let's see what they will say for the 3000 series if it is priced the same or higher.
 
Gibbo quoted some percentages for memory costs on a graphics card, and the figures work out much higher than those mentioned in this thread.
 