*** The AMD RDNA 4 Rumour Mill ***

You are still wrong. Even Apple uses TSMC, among other companies, so yes it's relevant, because it paints a picture where the pre-scalped prices only exist on GPUs. I've already explained that the chain is the same, yet other companies have even achieved reductions in prices.

How can you keep missing the point? AMD and Nvidia, as well as Apple, have an allocation of wafers at TSMC assigned months in advance. What Apple do with their TSMC wafers has literally zero impact on Nvidia or AMD. So changing limited allocation from CPUs and APUs to GPUs is not just a simple switch. AMD and Nvidia are not going to switch allocation from more lucrative products to increase GPU production.
 
The more fabs the better; it should help reduce wafer prices as capacity increases. This won't reduce GPU prices though, sales will need to plummet for that to happen.
If demand were to plummet then supply would likely be reduced (noting relatively few players in the advanced fabrication node space) - akin to the way that OPEC restricts production to keep the price up.

Given the profitability of smaller dies, e.g. AMD's CCDs, and noting that large dies suffer from both fewer on a wafer and higher likelihood of defect (assuming a constant defect/mm²), large perfect dies will always be more expensive than pro-rata by area pricing when compared to small dies.

What would likely be beneficial to the consumer in terms of GPU design would be for chiplet based GPUs to become widespread, as chiplet based AMD CPUs already have.
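To put rough numbers on the die size point, here's a minimal sketch, assuming a 300 mm wafer, a flat wafer price, a constant defect density and a simple Poisson yield model; every figure is an illustrative placeholder rather than real TSMC pricing:

[code]
# Illustrative only: assumed wafer price and defect density, Poisson yield model.
import math

WAFER_DIAMETER_MM = 300
WAFER_PRICE_USD = 15_000     # assumption, not a real TSMC quote
DEFECTS_PER_MM2 = 0.0005     # assumed constant defect density

def gross_dies_per_wafer(area_mm2: float) -> float:
    """Classic gross-die estimate with a correction for edge loss."""
    d = WAFER_DIAMETER_MM
    return (math.pi * (d / 2) ** 2) / area_mm2 - (math.pi * d) / math.sqrt(2 * area_mm2)

def cost_per_good_die(area_mm2: float) -> float:
    yield_rate = math.exp(-DEFECTS_PER_MM2 * area_mm2)   # Poisson yield
    return WAFER_PRICE_USD / (gross_dies_per_wafer(area_mm2) * yield_rate)

small, large = 70, 600   # roughly CCD-sized vs big-GPU-sized dies, in mm^2
ratio_area = large / small
ratio_cost = cost_per_good_die(large) / cost_per_good_die(small)
print(f"{small} mm^2: ${cost_per_good_die(small):.0f} per good die")
print(f"{large} mm^2: ${cost_per_good_die(large):.0f} per good die")
print(f"{ratio_area:.1f}x the area costs {ratio_cost:.1f}x as much per good die")
[/code]

With those made-up numbers the big die comes out around 13x the cost of the small one despite being under 9x the area, which is exactly why perfect large dies cost more than pro-rata by area, and why chiplets look attractive.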
 
Wafers are finite for everyone yet all other parts are not following the same cost increase pattern.

You can't seem to explain the disparity here.

AMD and Nvidia aren't the only ones that use silicon wafers at TSMC and other manufacturing plants.
Maybe the other parts don't have the same demand? I can't see people clamouring to clear the shelves of MacBooks, honestly.
 
/snip images

:confused:

I would say the 4080/90 were the tipping point in time where it became worth it.
I would say the tipping point would be about 4070Ti levels of RT raster.

My point was the 4070 Ti was barely better than my 3090, so not quite justifying any switch unless it was a 4080 or better, which seem to comfortably or just about run RT games at acceptable frame rates. As you can see in your chart @humbug, the 4080 is over 60 fps, so that kind of backs up my point above. :)
 
:confused:

My point was the 4070 Ti was barely better than my 3090, so not quite justifying any switch unless it was a 4080 or better, which seem to comfortably or just about run RT games at acceptable frame rates. As you can see in your chart @humbug, the 4080 is over 60 fps, so that kind of backs up my point above. :)

It's also 20% ahead of the 3090 in Cyberpunk.
 
To be honest, if I were Nvidia/AMD/Apple, I wouldn't trust Intel fabs with my designs.

This. Why would anyone trust Intel to make their products for them when Intel also makes the same products of its own that compete directly with them?

Even Intel don't use Intel fabs for their most competitive products.

Not going to happen, not a chance, not unless Intel fabs are no longer Intel fabs. TSMC have announced they are investing another $100 billion in US fabs, on top of the $65 billion already invested, and TSMC's CEO named Apple, Nvidia and AMD as signed-up customers for those fabs, so they will be made in the US.

Frankly Intel fabs are becoming obsolete.
 
The 3090? I think the 3090 was and still is a good GPU, but too expensive for what you got; it's no better today in raster than the sub-£500 card I have now.

I am referring to the point in time where I think RT became playable (post #11489). I said the 4080 or stronger cards are at that level. My 3090 was almost there, but the 4080 is around 30% faster than it.
 
I am referring to the point in time where I think RT became playable (post #11489). I said the 4080 or stronger cards are at that level. My 3090 was almost there, but the 4080 is around 30% faster than it.

I think Cyberpunk RT has got more difficult to run over the years because Nvidia had new cards they wanted you to upgrade to.

Now we have the UE5 path tracing malarkey...
 
I think Cyberpunk RT has got more difficult to run over the years because Nvidia had new cards they wanted you to upgrade to.

@CAT-THE-FIFTH said the same in another thread. The RT experts never mentioned this in the dedicated thread, though...

Turing and Ampere flagships only pretended, and the lower SKUs have always lacked the grunt, tbh.
 
Are you referring to the Psycho mode or path tracing?


Edit: OK, it looks like this (taken from the Nvidia site):
GeForce RTX 40 Series users can benefit from all the performance and image quality enhancements that DLSS 3.5 has to offer - including performance-multiplying Frame Generation for the fastest frame rates - in the demanding Ray Tracing: Overdrive Mode.
 
My point was the 4070 Ti was barely better than my 3090, so not quite justifying any switch unless it was a 4080 or better, which seem to comfortably or just about run RT games at acceptable frame rates. As you can see in your chart @humbug, the 4080 is over 60 fps, so that kind of backs up my point above. :)

This is from a purely price perspective: the 4070 Ti dropped the entry price for that level of RT from $1400 to $800, the 9070 XT now delivers it at $600, and the 5070 at $550 (though the 5070 lacks in raster and VRAM). I know these are pre-scalped prices, but the prices are still getting lower, all said and done.
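Just to spell out the arithmetic on those pre-scalped prices (a quick sketch using the figures quoted above; the $1400 is the prior entry point mentioned, not an exact MSRP):

[code]
# Entry price for roughly 4070 Ti-class RT, using the numbers quoted above.
prices = [
    ("previous entry point", 1400),
    ("RTX 4070 Ti", 800),
    ("RX 9070 XT", 600),
    ("RTX 5070", 550),
]

baseline = prices[0][1]
for name, usd in prices[1:]:
    drop = 100 * (baseline - usd) / baseline
    print(f"{name}: ${usd}, about {drop:.0f}% below ${baseline}")
[/code]

So even pre-scalp, the sticker price for that tier has fallen by roughly 43%, 57% and 61% respectively.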
 