
Nvidia "unlaunches" the 4060 Ti 12GB, erm, 4080 12GB....

I always felt the 4080 12GB could only have had a very limited impact, perhaps in OEM channels where you have corporate/government or other uninformed buyers purchasing on behalf of other people. OEMs also tend to obfuscate things, because they're selling the whole package.
I don't see how an individual buyer was ever going to fall into this trap; anyone buying a $1k card is going to be well informed.
A typical scenario would be an IT department buying 4080-equipped Dell computers for other employees, but otherwise it didn't look like a big deal worth the level of exaggeration we have seen.
Now people will have only two choices in the 40 series for a long time, and neither of those is under $1k.
 
I had a look at the relative die sizes of the new Nvidia dGPUs, and I think the AD104 naming is throwing people off. It's no longer the second dGPU in the line-up, but now the third... just like a 106 series used to be! The AD103 is now the second in line. During the Ampere generation, the GA104 sat where the AD103 now sits in the line-up. This is somewhat confirmed by the relative die area compared to the top dGPU, and by the memory bus too.

4nm/5nm:

The AD104 (192-bit bus) is 48.5% of the area of the AD102. The AD103 (RTX 4080 16GB, 256-bit bus) is 62.3% of the area of the AD102.

8nm:

The GA106 (192-bit bus) was 44% of the area of the GA102. The GA104 (256-bit bus) was 62.4% of the area of the GA102.

16nm/12nm:

The TU106 (192-bit bus) was 59% of the area of the TU102. The TU104 (256-bit bus) was 72.3% of the area of the TU102.

The GP106 (192-bit bus) was 42.45% of the area of the GP102. The GP104 (256-bit bus) was 66.67% of the area of the GP102.

28nm:

The GM206 (128-bit bus) was 38% of the area of the GM200. The GM204 (256-bit bus) was 66% of the area of the GM200.

The GK106 (192-bit bus) was 39% of the area of the GK110. The GK104 (256-bit bus) was 52.4% of the area of the GK110.

40nm:

The GF106 (128-bit bus) was 45.4% of the area of the GF100/GF110. The GF104 (256-bit bus) was 63.8% of the area of the GF100/GF110.

The AD104 is the smallest 104-series dGPU relative to the 102-series dGPU in the last 12 years. The 104 series has now shifted to the third position, where the 106 used to be, because of the 103 series.
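The area ratios above can be reproduced directly from published die sizes. Here's a minimal sketch: the die areas and the `relative_area` helper below are my own illustrative assumptions, using commonly cited (approximate) figures for the Ada dies, not official Nvidia numbers.

```python
# Relative die areas for Nvidia's Ada line-up.
# Die sizes (mm^2) are commonly cited, approximate figures.
DIE_AREAS_MM2 = {
    "AD102": 608.4,
    "AD103": 378.6,
    "AD104": 294.5,
}

def relative_area(die: str, top: str = "AD102") -> float:
    """Return a die's area as a percentage of the top die's area."""
    return 100.0 * DIE_AREAS_MM2[die] / DIE_AREAS_MM2[top]

for die in ("AD103", "AD104"):
    print(f"{die}: {relative_area(die):.1f}% of AD102")
```

Swap in the die sizes for any older generation (GA, TU, GP, etc.) and the same one-liner reproduces the percentages listed for those nodes.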
 
It doesn't look like the right benchmark, because we have seen an unprecedented increase in transistor density this generation. The 4090, or AD102, is probably a new product segment, and it looks like a peek into the future of how things are going to be. I believe everyone estimated the 4090 at 55B transistors tops before Nvidia actually spilled the beans.
 
It doesn't look like the right benchmark, because we have seen an unprecedented increase in transistor density this generation. The 4090, or AD102, is probably a new product segment, and it looks like a peek into the future of how things are going to be. I believe everyone estimated the 4090 at 55B transistors tops before Nvidia actually spilled the beans.

It's about relative die size and transistor count compared to the top end.

For over a decade it's been:

102/100>104>106>107


Now it's:

102/100>103>104>106

The top-end dies have remained quite consistently between 550mm² and 650mm² in area, so any jumps in transistor density don't really affect relative performance within the stack. But if the lower-end dGPUs get smaller and smaller dies, they will deliver a lower and lower percentage of the top model's performance. This is why midrange/mainstream dGPUs are stagnating in performance, but not the top end. Just look at the relative performance jump of the top end vs the mainstream: the mainstream is seeing smaller improvements.

Now look at the area relative to the top die: the AD104 fits into the same percentage range as the previous 106-series dies, i.e. 40% to 50% of the top die. The 104-series dies were typically 2/3 the size of the top die, which, surprise surprise, the AD103 is.

In terms of transistors, the GA106 had 42.4% of the transistors of the GA102, and the GA104 had 61.5%. The AD104 has 46.96% of the transistors of the AD102, while the AD103 has 60.2%. The GP104 (GTX 1080) had 61.02% of the transistors of the GP102 (GTX 1080 Ti). So even looking at transistors, the AD104 (RTX 4080 12GB) does not look like an enthusiast-level die; it looks more like a mainstream die. If you look at shaders, the AD104 only has 41.67% of the shaders of the AD102.

Plus people are getting bamboozled by the jump: remember, there is a much better node too. But what you are getting in the AD104 is less of a GPU relative to the AD102 than the GA104 was relative to the GA102. The mainstream dGPUs are going to be powered by what would be considered a 107-series-sized AD106.
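The transistor and shader ratios quoted in this post can be sanity-checked in a few lines. This is a rough sketch: the counts and the `pct_of_top` helper are my own illustrative assumptions, based on commonly cited (approximate) specs, not figures from the post's source.

```python
# Transistor counts (billions), commonly cited approximate figures.
TRANSISTORS_B = {
    "AD102": 76.3, "AD103": 45.9, "AD104": 35.8,
    "GA102": 28.3, "GA104": 17.4, "GA106": 12.0,
}
# Shader (CUDA core) counts for the full dies.
SHADERS = {"AD102": 18432, "AD104": 7680}

def pct_of_top(die: str, top: str) -> float:
    """Transistor count as a percentage of the family's top die."""
    return 100.0 * TRANSISTORS_B[die] / TRANSISTORS_B[top]

# Ada: the AD104 sits where a 106-class die used to sit.
print(f"AD104 vs AD102: {pct_of_top('AD104', 'AD102'):.1f}%")
print(f"AD103 vs AD102: {pct_of_top('AD103', 'AD102'):.1f}%")
# Ampere, for comparison:
print(f"GA106 vs GA102: {pct_of_top('GA106', 'GA102'):.1f}%")
print(f"GA104 vs GA102: {pct_of_top('GA104', 'GA102'):.1f}%")
# Shader ratio for the RTX 4080 12GB's die:
print(f"AD104 shaders vs AD102: {100 * SHADERS['AD104'] / SHADERS['AD102']:.2f}%")
```

The AD104's share of the top die's transistors lands in the mid-40s, right where the GA106's did in the Ampere stack, which is the comparison the post is drawing.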

It's almost like they are trying to make the mainstream dGPUs relatively less performant so they can see more high-end sales. Sure, Nvidia needs to ditch its Ampere stock, but why bother rebranding a GA106 successor as an AD104? Because they intend to push the real product stack up again. They did it successfully with Kepler. A 104-series dGPU used to power a 60-series card (GTX 460 and GTX 560 Ti), but a relatively less performant 106-series die was used for years afterwards.

It is only because RDNA1 and RDNA2 performed well that Nvidia was forced to use a larger 104-series die for the RTX 2060 Super and RTX 3060 Ti. We actually got cards with more than 6GB of VRAM and larger memory buses. My concern is that RDNA3 might not be performing as well as expected, so Nvidia feels it can do this! :(
 
They also need to reprice the 4080 16GB.

The 4090 is barely more expensive and almost 2x the power.

Who in their right mind would choose the 4080?

This gen is really weird. The 4090 aside, everything else makes no sense.
What on earth are you talking about? The 4080 16GB FE is £1,269 and the 4090 FE is £1,700, a difference of £430. But you cannot buy an FE, and the only 4090s available to buy today are over £2,000 (a difference of over £730).
 
I didn't know what was more surprising yesterday: Nvidia backtracking or the Chancellor getting sacked.
With the Chancellor it was more a case of what took them so long. Nvidia backtracking, on the other hand... I would expect hell to freeze over first!
It doesn't look like the right benchmark, because we have seen an unprecedented increase in transistor density this generation. The 4090, or AD102, is probably a new product segment, and it looks like a peek into the future of how things are going to be. I believe everyone estimated the 4090 at 55B transistors tops before Nvidia actually spilled the beans.
It's only unprecedented because Samsung's 8nm was rather poor; if they'd been willing to pay for TSMC's 7nm last gen, the generational differences wouldn't have been so big.
 
They also need to reprice the 4080 16GB.

The 4090 is barely more expensive and almost 2x the power.

Who in their right mind would choose the 4080?

This gen is really weird. The 4090 aside, everything else makes no sense.
Not sure the 4090 with a price tag of £1,699 (FE) and typically £2k for an AIB card makes much sense either.
 
Guys, you're also forgetting that the benchmark tests of the 4090 used the newer driver, which has optimisations built in.

The recent Nvidia driver has pushed those optimisations to older cards, i.e. the generational increase might not be 60% in rasterisation now. Anyway, all the reviewers have stopped doing benchmarks, certainly against RTX 30, so we will never be able to find out now.
 
Guys, you're also forgetting that the benchmark tests of the 4090 used the newer driver, which has optimisations built in.

The recent Nvidia driver has pushed those optimisations to older cards, i.e. the generational increase might not be 60% in rasterisation now. Anyway, all the reviewers have stopped doing benchmarks, certainly against RTX 30, so we will never be able to find out now.


If that's true, credit to NVIDIA. They didn't need to improve the 3XXX series but they have.

Would be interesting to see up-to-date benchmarks. Maybe I can save some money and get a 3090/Ti instead.
 
[Image: zyKrNPG.jpg]


:D
 
It is obvious what they are doing.
Widening the gap at the top end so they have more space to milk the architecture over the next two years, and in doing so move more consumers into a higher price bracket. We're going to see every Ti, Super, Ultra, varied memory SKU they can possibly think of this gen. Get ready for the Ultra Super TI+ version.

They just made the mistake of calling this a 4080. They should have just called it a 4070, although they probably would have been laughed out of town with the price they're asking.
 
Widening the gap at the top end so they have more space to milk the architecture over the next two years, and in doing so move more consumers into a higher price bracket. We're going to see every Ti, Super, Ultra, varied memory SKU they can possibly think of this gen. Get ready for the Ultra Super TI+ version.

They just made the mistake of calling this a 4080. They should have just called it a 4070, although they probably would have been laughed out of town with the price they're asking.

It would be taking the mickey even if it was an RTX 4070, because they would probably charge even more for it than for an RTX 3070! It would be lower down the stack (in specs) relative to the RTX 4090 than the RTX 3070 was to the RTX 3090: a tiny sub-300mm² die, less than half the shaders of the RTX 4090, a small memory bus, etc. The GA104 (RTX 3070/RTX 3070 Ti) had 61.5% of the transistors of the GA102 (RTX 3090/RTX 3090 Ti). The AD104 (RTX 4080 12GB) has 46.96% of the transistors of the AD102.
 
He always does this. No matter who has done what, it's always "AMD do or have done this too"; it's automatic with him. Sometimes he cites what or when, sometimes he doesn't, because it's just part of his mindset to keep reminding you that AMD are basically Nvidia/Intel under another name.

As he should. No brand is your friend.
 
It might be the RX 5700 XT? I remember there were pictures floating about of it being an RX 680. I even mentioned it was really a Polaris successor sold at a Turing price. This is why I am concerned: if Nvidia gets away with this, AMD might also join in! :(
 
They also need to reprice the 4080 16GB.

The 4090 is barely more expensive and almost 2x the power.

Who in their right mind would choose the 4080?

This gen is really weird. The 4090 aside, everything else makes no sense.
This was definitely my major problem with this gen. The 4090 looks a great card, but I don't need it. I'd have got the 4080 or the equivalent from AMD (waiting to compare), as it's naturally the card just below, but its price and quality compared to previous gens is very poor: a lot less performance and a higher price than you'd expect looking at the 3080 > 3090 differences. So the 4080 16GB is poor on performance-to-price, and the 12GB was just an exercise in mis-selling, so bad they had to remove it. Can't imagine paying £900 for a 4060 Ti either. The 4090 is the only decent card they had, and that was a price bump and overkill if you don't purely play PC. Between playing my console exclusives and playing Fate/Grand Order, which I've sunk a fair bit of change into, I'm always divided on my time and attention anyway. I just literally can't justify £1,600 when I don't play only on the PC. God save us, AMD. You are now God to this dark devil that corrupts our hobby and tempts the weak-willed.
 
You know, I have just thought that this opens up a gap below the 4080 but above the old 4080 12GB. Make it a 4080 LE (or something like that) for $1,000 that's 10% faster than the old 4080 12GB, based on a cut-down AD103, and it will still sell like hotcakes.
 
This was definitely my major problem with this gen. The 4090 looks a great card, but I don't need it. I'd have got the 4080 or the equivalent from AMD (waiting to compare), as it's naturally the card just below, but its price and quality compared to previous gens is very poor: a lot less performance and a higher price than you'd expect looking at the 3080 > 3090 differences. So the 4080 16GB is poor on performance-to-price, and the 12GB was just an exercise in mis-selling, so bad they had to remove it. Can't imagine paying £900 for a 4060 Ti either. The 4090 is the only decent card they had, and that was a price bump and overkill if you don't purely play PC. Between playing my console exclusives and playing Fate/Grand Order, which I've sunk a fair bit of change into, I'm always divided on my time and attention anyway. I just literally can't justify £1,600 when I don't play only on the PC. God save us, AMD.


You are now God to this dark devil that corrupts our hobby and tempts the weak-willed.

And God only knows they are the weakest of the weak-willed...

:D
 
Steve says AMD has done the same thing before. Which GPU was that?
The RX 560: later versions had fewer shader cores than the launch model. The product name was identical, and it was almost impossible to tell them apart unless you were super clued in on the original RX 560 specs.
 
It doesn't look like the right benchmark, because we have seen an unprecedented increase in transistor density this generation. The 4090, or AD102, is probably a new product segment, and it looks like a peek into the future of how things are going to be. I believe everyone estimated the 4090 at 55B transistors tops before Nvidia actually spilled the beans.

Back in the old days this card would have been the 4080, not a 4090, as 90-class cards were dual GPUs on one board.

Don't fall for Nvidia's blah blah marketing; they deliberately mislead people with the naming, because now they don't want to give you an 80-class card on the true large full die. The 3080 only happened on the full 102 die because AMD had something; otherwise the 3080 would have been, as with the 4080 now, a GA103. They even had GA103s that became laptop 3080 Tis with 16GB... tells you enough right there.

These cards should not be called 90-class, as before, 90 always meant a dual-GPU card. AMD just screwed their plans on the 30 series. Hope they do it again this time, but I feel Nvidia knows something about AMD's cards again, as they did with the 30 series. The 4080s are a disgrace this time too: one is really a 4070 (16GB) and the other a 4060 (12GB).
 