
Not much excitement for RDNA3, compared to the RTX 4000 series

Sometimes I don't get why Nvidia doesn't just leapfrog a generation to completely get rid of AMD. I've often been unable to grasp the strategy at play here - maybe introduce the next gen in 2023 instead of 2024, keep this one for another cycle until 2025, and AMD's gone. Nvidia's ROI would suffer in the short term, but then the leather jacket can charge anything once AMD's gone. Perhaps it's more complex than this, but the guy had the cash to pay for ARM.

Thankfully we have Intel, but they too are perhaps not operating at optimal levels given what I hear about planned layoffs - can they really sustainably fund the dGPU division? The industry is pretty interesting, with complex supply chains and huge lead times, and it will be interesting to see how the various scenarios play out.
 
The pricing of both 4080s left AMD with a clear opportunity for their RDNA3 chips. If Nvidia dropped the price of the 4080 16GB to $1,000 as well, they would be in a much stronger position against whatever prices AMD launches RDNA3 at.
 
The pricing of both 4080s left AMD with a clear opportunity for their RDNA3 chips. If Nvidia dropped the price of the 4080 16GB to $1,000 as well, they would be in a much stronger position against whatever prices AMD launches RDNA3 at.
I still wouldn't pay more than £650 for the 16GB model, as it's more of a 4070 Ti than a true 80-class card, but I feel Nvidia will now drop it to $900 and people will lap up the supposed discount, forgetting it's still a 40% price increase over the 3080 for 35% more performance.
 
It kinda is a big deal when they're backtracking and thought they could get away with it.

3080 at £650 to a 16GB 4080 at £1,269 is ridiculous as well.
Friday the 14th of October 2022 is backtracking day: first the budget after weeks of pretending all was well, then the RTX "4080" after weeks of pretending all was well.

Climbdown day!
 
Sometimes I don't get why Nvidia doesn't just leapfrog a generation to completely get rid of AMD. I've often been unable to grasp the strategy at play here - maybe introduce the next gen in 2023 instead of 2024, keep this one for another cycle until 2025, and AMD's gone. Nvidia's ROI would suffer in the short term, but then the leather jacket can charge anything once AMD's gone. Perhaps it's more complex than this, but the guy had the cash to pay for ARM.

Thankfully we have Intel, but they too are perhaps not operating at optimal levels given what I hear about planned layoffs - can they really sustainably fund the dGPU division? The industry is pretty interesting, with complex supply chains and huge lead times, and it will be interesting to see how the various scenarios play out.
Are you silly? What on earth makes you think they're capable of jumping one generation ahead of their roadmap? The 4090 is already using one of the most advanced nodes from the world's #1 manufacturer, TSMC. They're pushing it to pretty much the absolute limits of a reasonable power envelope (well past the sweet spot for efficiency). It's no coincidence that power consumption is similar to the 3090 Ti and 6950 XT - both pushed well out of the sweet spot to chase benchmark numbers. That's really as far as you can go on power draw for a single card. What makes you think they're secretly capable of jumping two years ahead?
 
Are you silly? What on earth makes you think they're capable of jumping one generation ahead of their roadmap? The 4090 is already using one of the most advanced nodes from the world's #1 manufacturer, TSMC. They're pushing it to pretty much the absolute limits of a reasonable power envelope (well past the sweet spot for efficiency). It's no coincidence that power consumption is similar to the 3090 Ti and 6950 XT - both pushed well out of the sweet spot to chase benchmark numbers. That's really as far as you can go on power draw for a single card. What makes you think they're secretly capable of jumping two years ahead?
Ah no, the telecom players are actually one node ahead (or will be within a year). Nvidia probably has two teams working on this stuff in tandem; I think they'd only need to ramp up manpower to stagger these releases (it would cost a lot, though, and mean a huge ROI loss through cannibalisation), but anyway that's not something that can be planned now. If you were thinking of going this route, I'd say you'd need at least 2-3 years of lead time to build up the organisation.

Like, Nvidia could have easily done this during the Ampere generation if they had been prepared - a move to TSMC with Ada in 2021 - but then again the leather jacket would have had to start working on this back in 2018; it's not something that can be executed on the fly.
Edit: and yes, the node alone can't explain the increase in efficiency; there are architectural gains to account for as well.
 
I don't think AMD will be far behind at all - maybe in ray tracing, but otherwise I think they will come out swinging. Nvidia has crippled the 4080 series, and in six months the 4080 Ti will be needed to counter AMD's 7800 XT, 7900 XT and 7950 XT. The 4090 Ti will also make an appearance in eight months to a year, if they can power the thing without a meltdown.
I hope they do a 970 Pro/980 Pro on Nvidia lol.

Inferior raytracing performance
Inferior upscaling quality
Probably no DLSS 3.0 alternative

Probably for like 150 quid less than Nvidia... I'd say that's behind.
 
It's even worse when you look at the die sizes. The AD104 die is under 300mm², so it's closer to the GA106 than the GA104. It also has a 192-bit memory bus like an RTX 3060.

So even calling it an RTX 4070 is generous to Nvidia. It is more like an RTX 4060 or RTX 4060 Ti. Even if it were an RTX 4070, it would need to be sub-£500.
It doesn't make sense to compare them in this way.

The 4nm-based AD104 has more transistors than the 8nm-based GA102 used for the RTX 3090 Ti.

https://www.techpowerup.com/gpu-specs/nvidia-ad104.g1013
https://www.techpowerup.com/gpu-specs/nvidia-ga102.g930

There's a significant difference in transistor density - 121.4M / mm² vs 45.1M / mm².
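As a rough sanity check on those density figures, here's a quick sketch using the approximate transistor counts and die areas listed on the TechPowerUp pages linked above (treat the exact values as ballpark):

```python
# Rough density check, using approximate figures from the TechPowerUp pages above
# (transistor counts in millions, die areas in mm²).
gpus = {
    "AD104 (RTX 4080 12GB)": {"transistors_m": 35_800, "die_mm2": 295},
    "GA102 (RTX 3090 Ti)": {"transistors_m": 28_300, "die_mm2": 628},
}

for name, spec in gpus.items():
    density = spec["transistors_m"] / spec["die_mm2"]  # million transistors per mm²
    print(f"{name}: ~{density:.1f}M/mm²")

# Prints roughly 121.4M/mm² for AD104 and 45.1M/mm² for GA102,
# i.e. the density gap quoted above.
```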
 
It doesn't make sense to compare them in this way.

The AD104 has many more transistors than the 8nm-based GA102 used for the RTX 3090 Ti.
???

Every 106-series dGPU was replaced with a new 106-series dGPU with more transistors. This is how you get the performance jump.

The problem is you have gotten used to Nvidia messing up the naming of things. In the past, the midrange of the new generation used node shrinks to massively jack up the transistor count.

Hence it was very common in the past for the mainstream dGPU to have performance close to the top end of the previous generation with a narrower memory bus. You rarely see that now because Nvidia just does things like this.

The reality is that this so-called AD104 is really a 106-series-sized die at under 300mm². It has a 192-bit memory bus like a 106-series dGPU too. Everything screams midrange dGPU.

Nvidia is just selling smaller and smaller dies for more and more money. They first started this nonsense with Kepler and the so-called "high end" GTX 680, which was a midrange dGPU sold as high end. There were even leaked pre-production pictures of the GTX 680 labelled as the GTX 670! It's only with the GTX 700 series rebrand that you actually got something closer to normality.

But even then, the 70-series dGPUs used to have the larger 500mm²-class dies, not the tiny mainstream ones.

This is why the mainstream dGPUs are stagnating in performance: Nvidia keeps doing this all the time.
 
Last edited:
Inferior raytracing performance
Inferior upscaling quality
Probably no DLSS 3.0 alternative

Probably for like 150 quid less than Nvidia... I'd say that's behind.


From what people have posted about DLSS 3, it isn't that good - looks rubbish. Inferior ray tracing may be true, but let's see on price: AIB prices of the 4090 could be £600 more if AMD come in at £1,100. As I said, they might well surprise everyone yet, including Nvidia.
 
Nvidia is just selling smaller and smaller dies for more and more money.
I'm curious about why you want bigger die sizes for GPUs. Greater transistor density (on more power efficient process technologies) is desirable from a design point of view, not die size.
 


I'm curious about why you want a bigger die for GPUs. Greater transistor density is desirable from a design point of view, not die size.

Because they are throwing fewer and fewer transistors at mainstream dGPUs relative to the high end. That means less performance compared to the high end. This is why you are seeing stagnation in the mainstream dGPUs from Nvidia and AMD. People have gotten used to Nvidia making mainstream dies which lack enough transistors, so the performance goes down the drain. If you look at the TU106 to GA106 transition, it only had 11% more transistors! Just look at the RTX 3060 (slightly cut down) vs the RTX 2060 Super (full GPU):

Only a 5% difference in performance. Even the full GA106 would probably have had 10% extra performance over a full TU106. But then compare the TU106 with the GP106: there was nearly a doubling of performance, because there was a 2.45x increase in transistors. Nvidia charged much more for those extra transistors, but they made more margin too, as the node also got cheaper.
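Putting rough numbers on that - a quick sketch using the approximate TechPowerUp transistor counts for those 106-series dies (ballpark figures, not exact):

```python
# Approximate transistor counts (millions) for the 106-series dies mentioned above,
# as listed by TechPowerUp - treat these as ballpark figures.
gp106 = 4_400   # Pascal, GTX 1060
tu106 = 10_800  # Turing, RTX 2060/2070
ga106 = 12_000  # Ampere, RTX 3060

print(f"TU106 vs GP106: {tu106 / gp106:.2f}x the transistors")   # ~2.45x -> near-doubled performance
print(f"GA106 vs TU106: {(ga106 / tu106 - 1) * 100:.0f}% more")  # ~11% -> only ~5% more performance
```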

So when you look at the AD104, it is the smallest 104-series die since the Kepler GK104. That was when Nvidia rebranded its second-tier die as a high-end dGPU. It wasn't high end. Then you have that narrower memory bus. At best the AD104 should be an RTX 4070/RTX 4060 Ti, but it really sounds like an RTX 4060.

But what Nvidia has increasingly done, when they get performant smaller dies, is jack the price up. They never used to do this.

I'll give you an example of a very famous mainstream dGPU of the past, the 8800 GT/8800 GTS:

It was made on TSMC 55nm/65nm and had more transistors than the 8800 GTX from the year before (made on 90nm):

The 8800 GT/8800 GTS 512MB had 10% more transistors than the fastest and largest dGPU of the time. But for under half the price of the 8800 GTX you were getting almost the same performance, because they used the smaller dies to pass the cost savings on to us, the consumer. Now they are shrinking dies to massively increase margins - you can see how those keep going up and up. The moment they started the Kepler strategy of pushing increasingly smaller dies up the product stack, Nvidia's margins went up too.
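The same back-of-the-envelope maths works for that example too, using the commonly quoted G80/G92 transistor counts (again approximate, not from this thread):

```python
# Commonly quoted transistor counts (millions) for G80 and G92 - approximate figures.
g80 = 681  # 8800 GTX, 90nm
g92 = 754  # 8800 GT / 8800 GTS 512MB, 65nm

print(f"G92 vs G80: {(g92 / g80 - 1) * 100:.0f}% more transistors")  # ~11%, i.e. roughly the "10% more" above
```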

It also leads to a situation where the anaemic mainstream dGPUs are causing people to spend more and more.

I seriously doubt it costs Nvidia 3x more to make an RTX 4070 12GB than it does an RTX 3060 12GB.

So when people remember the good old days of mainstream dGPUs, it's because Nvidia wasn't trying to tier everything on performance and instead threw transistors at the problem. Now they don't, and they expect you to pay more and more so they can increase their margins.
 
From what people have posted about DLSS 3, it isn't that good - looks rubbish. Inferior ray tracing may be true, but let's see on price: AIB prices of the 4090 could be £600 more if AMD come in at £1,100. As I said, they might well surprise everyone yet, including Nvidia.
Just as DLSS itself has, I'm pretty sure DLSS frame generation is going to keep improving.
AMD are a generation behind NVIDIA in ray-tracing performance. I doubt they're surprising anyone.

AMD need to price aggressively. They have the inferior product line and feature set. They should aim to match rasterisation for a big chunk less than NVIDIA are charging, IMO.
 
A reputable leaker has said that the top RDNA3 GPU is not able to compete well with the 4090 in either raster or RT, which makes sense as it's ridiculously fast. I am seeing 2x the performance of my 3080 Ti in many areas. Add DLSS 3 on top of that and AMD has its work cut out. AMD needs to make the mid-range count, as that's a segment NVIDIA seems happy to ignore to push up their ASPs.

Maybe NVIDIA have cancelled the 4080 12GB because they are concerned about the mid-range. Wouldn't be too surprising. That 4080 12GB was always a joke; it's clearly the 4070.
 
What surprises me is that the 3rd November announcement is so close and there are no performance leaks whatsoever. Not a sausage.
 