*** The AMD RDNA 4 Rumour Mill ***

So let's get down to brass tacks here: what price are the RX 9060 and 9060 XT 8GB and 16GB variants landing at? I am assuming it'll be something like

£229-249 - RX 9060 8GB
£299-319 - RX 9060 XT 8GB
£359-379 - RX 9060 XT 16GB

I know this will once again depend on the leather-jacket-wearing green goblin's prices for the 5060/Ti/Super/Ultra/Mega or whatever they call it, but the 4060 is ~£250 and is a bag of literal poo in terms of value and performance, so the new ones can only be better, right? Right? :)
A £350 9060 XT 16GB would be a great seller. Likely better than PS5 Pro performance, plenty of VRAM, ray tracing and upscaling fixed, none of the huge driver issues of Intel (which made the B580 hard to recommend despite its seemingly great value), and a decent enough price that a lot of people could build a good gaming PC for ~£850. It would be like manna for the budget PC gaming market.
 
I would add 100 to his prices tbh.
 
Interesting video, the guy does good chip analysis content.


So the question was: with AMD and Nvidia being on the same node, and with analogue die space being about the same (AMD's perhaps very slightly larger), how did AMD achieve 25% higher density?

After 25 minutes of analysis his conclusion was "I don't know". Yeah, thanks... can I have 25 minutes of my life back, please?
AMD have been designing Ryzen chips with a directive to make them as compact as possible in order to make it difficult for Intel to compete; they have been doing that for 8 years at this point and have got very good at it. Zen 5 consists of 2x 71mm^2 chiplets and 1x 122mm^2 IO die on 6nm; even with the 6nm IO die it is a total of 264mm^2. Arrow Lake is on TSMC 3nm, one or two generations (depending on how you want to look at it) ahead of the node Zen 5 uses.

Arrow Lake is 243mm^2, and the TSMC 3nm node it's using is about 1.6x denser; had it been made on the same TSMC node as Zen 5 it would be around 389mm^2, a whopping 47% larger than Zen 5. All I can say is Nvidia are very much better than Intel at packing a design densely, just not as good as AMD.
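If anyone wants to sanity-check that arithmetic, here's a quick Python sketch using only the figures quoted above (the die sizes and the ~1.6x density factor are the post's own numbers, nothing independently measured):

```python
# Sanity check of the Zen 5 vs Arrow Lake size comparison above.
# All inputs are the figures quoted in the post.

zen5_total_mm2 = 2 * 71 + 122   # two 71mm^2 CCDs + the 122mm^2 6nm IO die = 264mm^2
arrow_lake_mm2 = 243            # Arrow Lake on TSMC 3nm
n3_density_gain = 1.6           # claimed density advantage of the 3nm node

# What Arrow Lake would roughly occupy if built on the same node as Zen 5
normalised = arrow_lake_mm2 * n3_density_gain
print(f"Arrow Lake normalised to Zen 5's node: {normalised:.0f} mm^2")              # ~389 mm^2
print(f"Relative to Zen 5's {zen5_total_mm2} mm^2: "
      f"{normalised / zen5_total_mm2 - 1:.0%} larger")                              # ~47%
```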
 
Like with any part of chip design there are trade-offs, including with density.
 

Usually if you have more analogue than digital in the chip it compromises that density; analogue hasn't been able to shrink much at all since about 7nm. It's one of the reasons AMD designed Zen as an MCM specifically in a way where almost all of the analogue portions of the chip could be moved to its own chiplet, separate from the logic, unlike Intel, who either don't get it or don't have the technical know-how to do that with their MCM design.
This was the same idea behind RDNA 3: if you look at it, most of the analogue is in the 4 to 6 chiplets around the main chip, which is primarily logic.

So that at least could be explained, which is why High Yield analysed it. However, as it turns out the Nvidia chip doesn't use any more analogue die space than the AMD chip; if anything AMD have slightly more analogue. High Yield was looking for a lot more analogue in the Nvidia chip to explain the 25% higher density of the AMD chip, and it's not there, which is why he concluded "I don't know".
 

£350 is far too expensive for a 9060; £300 should be the limit imo.

The 9060 still has the same size bus as the 7600, so even as a 16GB card I don't think they will gain much over the 8GB version.
 
Still relying on GDDR6 with a 128-bit bus, I can see these 9060s being bandwidth-starved. Nvidia at least has the advantage of GDDR7, which will help a lot on these cards with a small bus.
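To put rough numbers on that: peak bandwidth is just bus width times per-pin data rate. The sketch below assumes 20 Gbps GDDR6 and 28 Gbps GDDR7 modules, which are my assumptions rather than confirmed specs for these cards:

```python
# Rough peak bandwidth for the bus widths being discussed.
# The 20 Gbps GDDR6 and 28 Gbps GDDR7 speeds are assumed, not confirmed specs.

def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s: bus width in bytes x per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

print(f"128-bit GDDR6 @ 20 Gbps: {bandwidth_gbs(128, 20):.0f} GB/s")  # 320 GB/s (9060 class, assumed)
print(f"128-bit GDDR7 @ 28 Gbps: {bandwidth_gbs(128, 28):.0f} GB/s")  # 448 GB/s (5060 class, assumed)
print(f"192-bit GDDR7 @ 28 Gbps: {bandwidth_gbs(192, 28):.0f} GB/s")  # 672 GB/s (5070 class, assumed)
print(f"256-bit GDDR6 @ 20 Gbps: {bandwidth_gbs(256, 20):.0f} GB/s")  # 640 GB/s (9070 XT class, assumed)
```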
 

Yeah, I agree. I'll wait to be proved wrong, but I can't see it competing too well unless the price is decent.
 

They might be, but the RX 9070 XT has less memory bandwidth than an RTX 5070 and is still faster in most cases. The bigger issue is going to be the lack of shaders, especially if Navi 44 is half a Navi 48.
 
So the question was: with AMD and Nvidia being on the same node, and with analogue die space being about the same (AMD's perhaps very slightly larger), how did AMD achieve 25% higher density?

After 25 minutes of analysis his conclusion was "I don't know". Yeah, thanks... can I have 25 minutes of my life back, please?
There are interviews on YouTube with the AMD engineering team (months old now) where they explain how they worked closely with TSMC to cut out as much "fat" as possible aside from the transistors themselves - mostly the filler silicon. This way they also achieved better heat conduction to the cooler, and in the case of Ryzen they were able to move the 3D V-Cache underneath the CPU instead of on top of it, etc. (previously the cache was a thermal insulator overheating the core underneath; not any more).
 
They might be, but the RX 9070 XT has less memory bandwidth than an RTX 5070 and is still faster in most cases. The bigger issue is going to be the lack of shaders, especially if Navi 44 is half a Navi 48.
Those have a 256-bit bus though, which seems to give good enough bandwidth with GDDR6 not to limit the cards; going down to 128-bit is a large cut in bandwidth and definitely impacted the lower-end cards last gen.
 
There are interviews on YouTube with the AMD engineering team (months old now) where they explain how they worked closely with TSMC to cut out as much "fat" as possible aside from the transistors themselves - mostly the filler silicon. This way they also achieved better heat conduction to the cooler, and in the case of Ryzen they were able to move the 3D V-Cache underneath the CPU instead of on top of it, etc. (previously the cache was a thermal insulator overheating the core underneath; not any more).

Thanks, that would be interesting viewing, I'll try to find it...
 
With this architecture could AMD have made a 5090 competitor?

The 5090 is 744mm^2.

Let's say the memory interface has to double to 512-bit, the Infinity Cache is 50% larger, the rest of the memory analogue increases in size along with the rest of the GPU as it's part of the CUs, and the display engine etc. doesn't need to change.

The memory interface and Infinity Cache together are 99mm^2 rounded up; subtract that from the 9070 XT's 364mm^2 and you get 265mm^2. The display engine is about 10% of the remaining GPU, so about 27mm^2. I realise 10% of the remaining GPU is not 10% of the whole GPU, but I'd rather go a bit over than try to guess what it actually is with the analogue taken out; as it is, it's 27mm^2 of the remaining die.

Memory interface x2, from 256-bit to 512-bit (46mm^2 x 2 = 92mm^2)
Infinity Cache + 50% (53mm^2 + 50% = 80mm^2)
Remaining GPU minus about 10% for the display engine etc. (265mm^2 - 10% = 239mm^2)

239mm^2 x 2 = 478mm^2, + the new 512-bit memory interface at 92mm^2 = 570mm^2, + the 50% larger Infinity Cache at 80mm^2 = 650mm^2, + the display engine = 677mm^2.

I've rounded everything up and used the worst-case scenario. The 5090 is 80% faster than the 9070 XT; what I have created here is 2x a 9070 XT, or +100%, so with a scaling of 0.8 a 128 CU, 512-bit, 32GB RX 9090 would match a 5090.
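Here's the same estimate as a small script, purely restating the arithmetic above; the 364mm^2 starting die, the 46/53mm^2 analogue blocks, the ~10% display engine share and the 0.8 scaling factor are all the post's own figures (float maths lands a couple of mm^2 under the rounded-up 677mm^2):

```python
# Restating the doubled-up Navi 48 estimate above; all inputs are the post's figures.

navi48_mm2         = 364    # 9070 XT die as used above
mem_interface_mm2  = 46     # 256-bit memory interface
inf_cache_mm2      = 53     # Infinity Cache
display_engine_pct = 0.10   # display engine etc. as a share of the remaining die

logic_mm2    = navi48_mm2 - (mem_interface_mm2 + inf_cache_mm2)   # 265
display_mm2  = logic_mm2 * display_engine_pct                     # ~27
scalable_mm2 = logic_mm2 - display_mm2                            # ~239

big_die_mm2 = (scalable_mm2 * 2            # double the CUs etc.
               + mem_interface_mm2 * 2     # 512-bit interface
               + inf_cache_mm2 * 1.5       # 50% more Infinity Cache
               + display_mm2)              # display engine unchanged
print(f"Hypothetical 128 CU die: ~{big_die_mm2:.0f} mm^2 vs 744 mm^2 for the 5090")

scaling = 0.8                              # assumed scaling efficiency, as above
print(f"Expected uplift over a 9070 XT at that scaling: +{1 * scaling:.0%}")  # ~+80%, roughly 5090 level
```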

Could they technically do it? Yes

Why don't they? A chip like that would need to be selectively binned for low power so it doesn't push the power consumption past 600 watts, and very few dies would actually be viable; with that, what do they do with the rest of the chips? Nvidia can just sell them to the Chinese, who will be glad to buy anything from Nvidia for their own ambitions; AMD don't have that luxury.
Those that did make it to OCUK's stock would be very few in number and very expensive.
So I think while in a technical sense, yes, they could do it, they couldn't do it in a practical sense.

Could they make a much smaller GPU, perhaps somewhere around 450mm^2, to match a 4090? Yes, an RX 9080 XT is practical.
 
9060s should be 192-bit and 12GB really. That would have really stuck it to team green as well.

I just looked up the Navi 44 die size: it's 153mm^2. I'm guessing it has around 40 CUs (2560 shaders, a common number for mid-range AMD GPUs), which should put it around an RX 7800 XT, but with that memory interface probably actually around an RX 7700 XT.

If they just added two 32-bit memory controllers at 23mm^2 it would bring a 192-bit, 12GB Navi 44 to something more in line with the RX 7800 XT at 176mm^2, and they wouldn't even need to clamshell it to make a 16GB version and add £70 to the base price to pay for the complicated PCB and doubling of VRAM chips.
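A minimal sketch of that, using the post's own 153mm^2 and 23mm^2 figures; the 20 Gbps GDDR6 speed is an assumption on my part:

```python
# The 192-bit Navi 44 idea above, restated.
navi44_mm2 = 153                  # Navi 44 die size
extra_controllers_mm2 = 23        # two extra 32-bit memory controllers, per the post
print(f"192-bit Navi 44: ~{navi44_mm2 + extra_controllers_mm2} mm^2")   # 176 mm^2

gddr6_gbps = 20                   # assumed GDDR6 speed
for bus in (128, 192):
    print(f"{bus}-bit @ {gddr6_gbps} Gbps: {bus / 8 * gddr6_gbps:.0f} GB/s")  # 320 -> 480 GB/s
```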

It makes much more sense, surely to AMD too.... ???????
 
Looks like AMD didn't see the benefit of creating a monster chip with a 600W power draw on the same node as last gen, which I consider a smart move, compared to Nvidia, who changed pretty much nothing from Ada to Blackwell on the same node, and what we can see is virtually no gen-to-gen performance uplift from the 40 to the 50 series. I'm not considering fake frames a generational uplift.
 

RTX 4070: 5888 Shaders, 201 Watts, GDDR6X
RTX 5070: 6144 Shaders, 229 Watts, GDDR7
RTX 4070 S: 7168 Shaders, 218 Watts, GDDR6X
RTX 4070 Ti: 7680 Shaders, 284 Watts, GDDR6X

Performance non-RT.

4070: 100%
4070 S: 116%
5070: 122%
4070 Ti: 126%

Performance RT.

4070: 100%
4070 S: 114%
5070: 115%
4070 Ti: 132%

Yes, I think you're right, the 50 series is just a 40 series refresh with GDDR7.

In fact, notice the 5070 falls a little short in RT compared with the 40 series?
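Normalising those numbers per shader and per watt makes the comparison easier to read; everything below comes straight from the figures quoted above (treat the wattages as the quoted figures rather than anything I've measured):

```python
# Per-shader and per-watt view of the non-RT figures quoted above.
cards = {
    #              shaders, watts, relative perf (non-RT, 4070 = 100)
    "RTX 4070":    (5888, 201, 100),
    "RTX 4070 S":  (7168, 218, 116),
    "RTX 5070":    (6144, 229, 122),
    "RTX 4070 Ti": (7680, 284, 126),
}

base_shaders, base_watts, base_perf = cards["RTX 4070"]
for name, (shaders, watts, perf) in cards.items():
    per_shader = (perf / shaders) / (base_perf / base_shaders)
    per_watt   = (perf / watts)   / (base_perf / base_watts)
    print(f"{name:12} perf/shader x{per_shader:.2f}  perf/watt x{per_watt:.2f}")
```

On these figures the 5070 only shows a modest per-watt gain over the 4070, which ties into the point about locking the power in the reply below.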
 
Never mind the 4070 being 200W, the 4070 Super 220W and the 5070 250W. Once you lock the power then you will see the real gen-to-gen improvement.
 