RDNA 3 rumours Q3/4 2022

If AMD are going for chiplets in presumably their higher-end GPUs, will that allow them to plop a thicc 3D cache on it, and would it ever be possible to either get rid of GDDR or somehow put the GDDR alongside the chiplets?
 
If AMD are going for chiplets in presumably their higher-end GPUs, will that allow them to plop a thicc 3D cache on it, and would it ever be possible to either get rid of GDDR or somehow put the GDDR alongside the chiplets?
Isn't this what HBM is? It's more or less dead in non-datacentre applications. Chiplets let you move stuff that doesn't need a cutting-edge process, like I/O, off the main die. This increases the yield of the expensive, large, performance-defining part by making it smaller overall.
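To put rough numbers on that yield point, here's a minimal sketch using a simple Poisson yield model; the defect density and die sizes are made-up assumptions for illustration, not real RDNA3 figures:

```python
import math

# Simple Poisson yield model: yield = exp(-defect_density * die_area).
# Defect density and die sizes below are illustrative assumptions,
# not real RDNA3 figures.
DEFECT_DENSITY = 0.1  # defects per cm^2 (assumed)

def poisson_yield(area_mm2: float) -> float:
    """Fraction of dies expected to come out defect-free."""
    area_cm2 = area_mm2 / 100.0
    return math.exp(-DEFECT_DENSITY * area_cm2)

monolithic_mm2 = 520.0   # hypothetical big monolithic die
compute_die_mm2 = 300.0  # hypothetical compute die with cache/I/O split off

print(f"Monolithic {monolithic_mm2:.0f} mm^2: {poisson_yield(monolithic_mm2):.1%} yield")
print(f"Compute-only {compute_die_mm2:.0f} mm^2: {poisson_yield(compute_die_mm2):.1%} yield")
```

Smaller dies also fit more candidates per wafer, so the effective cost per good die drops faster than the yield numbers alone suggest.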
 
HBM is stacked layers of DRAM as I understand it, and the increase in density for a given area facilitates the higher bandwidth... and the cost.

It's a fair point, though, to consider whether it could be integrated into a single chip area.

If you think about it, if you chipletted the memory and increased the overall GPU area to incorporate it, you might realise faster transfers.

Imagine a Threadripper-sized package for the GPU.
 
If AMD are going for chiplets in presumably their higher-end GPUs, will that allow them to plop a thicc 3D cache on it, and would it ever be possible to either get rid of GDDR or somehow put the GDDR alongside the chiplets?

AFAIK these "chiplets" are just moving some cache memory off the main chip and beefing it up.
 
Isn't this what HBM is? It's more or less dead in non-datacentre applications. Chiplets let you move stuff that doesn't need a cutting-edge process, like I/O, off the main die. This increases the yield of the expensive, large, performance-defining part by making it smaller overall.

That's got me too. AMD says it's using MCM chiplets, but when you look at the diagram it's memory chips sitting on the die.

But guess what, that's exactly what HBM is, so by AMD's definition AMD was already making chiplet MCM GPUs years ago with the Fury.
 
That's got me too. AMD says it's using MCM chiplets, but when you look at the diagram it's memory chips sitting on the die.

But guess what, that's exactly what HBM is, so by AMD's definition AMD was already making chiplet MCM GPUs years ago with the Fury.

If the diagrams are to be believed this is slightly different, albeit not the MCM people get excited by: they are doing it to move some higher-level cache out of the main chip, which both frees up space for other stuff and beefs up those caches.
 
If the diagrams are to be believed this is slightly different, albeit not the MCM people get excited by: they are doing it to move some higher-level cache out of the main chip, which both frees up space for other stuff and beefs up those caches.
If you're talking about the diagrams from the reveal, there are no details about any 3D stacking on the GPU side. A large cache is way less useful in a GPU than in a CPU, so I personally don't think they'll go to the trouble. Splitting off the I/O is a good balance between the investment in complexity and the benefits of a smaller, simpler main die.
 
That's got me too. AMD says it's using MCM chiplets, but when you look at the diagram it's memory chips sitting on the die.

But guess what, that's exactly what HBM is, so by AMD's definition AMD was already making chiplet MCM GPUs years ago with the Fury.

Not quite, that's like saying any GPU with a frame buffer is MCM.

The difference here is that the local cache is off-die, like Zen 3's 3D V-Cache but 2D stacked instead of 3D stacked.
 
AMD says that because Nvidia is pushing higher and higher power limits, they have to follow or get left behind. In a world where it's becoming harder to get more performance, yet the demand for performance keeps growing, something has to give, and that's power. AMD says it has better efficiency than Nvidia, but the extra efficiency of RDNA3 is not enough to avoid increasing power draw to keep up with or even beat Nvidia's performance; however, Nvidia has to push power harder than AMD does.

My take on deciphering what AMD says here: RDNA3 GPUs will consume more power than RDNA2 GPUs, but they will consume less than RTX 4000 GPUs. No actual wattages were given, so we have a pretty big range to guess with. For example, the RTX 4090 is rumoured to now be 450W, so that would mean the 7900 XT could be anywhere between 310W and 440W.
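For anyone who wants to check that guess, a minimal sketch of the arithmetic, using only the figures quoted in the post plus an assumed 10W margin at each end:

```python
# Quick sanity check on the guessed power range for a top RDNA3 card.
# Figures come from the post (RDNA2 flagship ~300 W, rumoured RTX 4090 at 450 W)
# and are rumours/assumptions, not confirmed specs.
rdna2_flagship_w = 300
rumoured_rtx4090_w = 450
margin_w = 10  # assumed margin at either end of the range

low_w = rdna2_flagship_w + margin_w      # just above the RDNA2 flagship
high_w = rumoured_rtx4090_w - margin_w   # just below the rumoured 4090

print(f"Guessed 7900 XT range: {low_w} W to {high_w} W")  # 310 W to 440 W
```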
 
AMD says that because Nvidia is pushing higher and higher power limits, they have to follow or get left behind. In a world where it's becoming harder to get more performance, yet the demand for performance keeps growing, something has to give, and that's power.


Yeah, I mean pre-Zen and pre-RDNA AMD was brute-forcing their CPUs/GPUs to keep up with the competition, and failing.

It seems now that Nvidia is pushing their silicon harder, and Intel with their CPUs, but they are NOT failing. They just want those 10 FPS that put them at the top of the bar graphs.
So AMD will have to do the same: Zen 4 will go to 170 watts, up from Zen 3's 105 watts, and RDNA3 will go up from 300 watts, who knows to what, but there is talk of 450 to 600 watts for Nvidia's RTX 4090, so I wouldn't be surprised if, going forward, any high-end GPU is at least 400 watts, because no one wants to risk being left behind by the competition.

Isn't that great when all our energy costs are sky-rocketing? It's going to cost 50p an hour to run these things.
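If you want to sanity-check that 50p figure, here's a minimal sketch, assuming a hypothetical ~1kW whole-system draw and a ~50p/kWh tariff (both illustrative, not measured):

```python
# Back-of-the-envelope running cost per hour of gaming.
# Both figures are assumptions for illustration, not measured values:
# a ~1 kW whole-system draw and a ~50 p/kWh unit price.
system_draw_w = 1000      # assumed whole-system draw under load, in watts
tariff_p_per_kwh = 50.0   # assumed electricity price, pence per kWh

cost_per_hour_p = (system_draw_w / 1000.0) * tariff_p_per_kwh
print(f"Roughly {cost_per_hour_p:.0f}p per hour of gaming")
```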
 
Yeah, I mean pre-Zen and pre-RDNA AMD was brute-forcing their CPUs/GPUs to keep up with the competition, and failing.

It seems now that Nvidia is pushing their silicon harder, and Intel with their CPUs, but they are NOT failing. They just want those 10 FPS that put them at the top of the bar graphs.
So AMD will have to do the same: Zen 4 will go to 170 watts, up from Zen 3's 105 watts, and RDNA3 will go up from 300 watts, who knows to what, but there is talk of 450 to 600 watts for Nvidia's RTX 4090, so I wouldn't be surprised if, going forward, any high-end GPU is at least 400 watts, because no one wants to risk being left behind by the competition.

Isn't that great when all our energy costs are sky-rocketing? It's going to cost 50p an hour to run these things.


Good thing consumers have choices. AMD will still sell 65W CPUs and 200W GPUs, so you're more than welcome to buy those rather than the 170W CPU and 450W GPU.
 
AMD's Senior Vice President of Engineering at Radeon Technologies Group, David Wang, has confirmed some new details about RX 7000 RDNA3 GPUs:


* Still using a hybrid core design that can do rasterisation and ray tracing; no dedicated fixed-function unit (FFU).

* Ray Tracing performance is improved over RDNA2

* Faster clock speeds than RDNA2

* RDNA3 supports the AV1 codec

* RDNA3 GPUs have DisplayPort 2.0 ports
 
The most disappointing news possible. Another generation where they are decidedly outmatched. GG no re


 
The most disappointing news possible. Another generation where they are decidedly outmatched. GG no re




I think people are in for a shock with the AMD 7000 series and Nvidia 4000 series ;). The hype bubble is going to pop soon, and reality will sink in when they are out at an all-new MSRP and new power requirements for the usual speed increase, if we are lucky. Anything more than the usual will come at a very unusually high MSRP, even more than we saw on the 6000 and 3000 series from AMD and Nvidia, so get your wallets ready, and be ready to buy a new PSU too, for the luxury of the usual speed bump we get. Anyone that believes the fake rumours of 2x+ is in for a shock. Also, the new specs for the 4070, as an example, are now saying a 20% speed increase over the 3070... so that tells you everything you need to know, if true of course. AMD and Nvidia are going to sell us space heaters next gen for the speed bump we normally get, but at much higher power use and heat. :cry: ;)
 
Could be the ideal storm to follow the perfect storm that was last year.

Nobody has as much money now that everything is inflated. Electricity costs mean running a 600W GPU for marathon gaming sessions will cost you more than just your pocket money. Initially the brands will overcharge at launch for the blind herd to rush in, but sales will swiftly die off as the novelty wears off.

Wait for the retailers to get desperate and for dust to form on the BNIB units. See if the 7800 or 4080 are as powerful as the hype said, then pick one up 6+ months later on below-MSRP deals, ready for the next mining wave!!
 
If Nvidia/AMD had no competition, voltages would be lower, clockspeeds would be lower, and power requirements would be lower. Everyone would be happy.

Now that we finally have exciting and aggressive competition, some people are moaning about the increased (and inevitable) power requirements that come with it. All you have to do is lower the voltage and clock speed to what they would have been anyway if there were no competition.
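To illustrate why that works, a minimal sketch using the standard dynamic-power approximation P ≈ C·V²·f; the stock board power and scale factors are made-up numbers for illustration:

```python
# Dynamic power scales roughly as P ~ C * V^2 * f, so a modest undervolt
# plus a small clock drop cuts power disproportionately.
# Stock board power and scale factors are made-up illustrative numbers.
def relative_power(voltage_scale: float, clock_scale: float) -> float:
    """Power relative to stock for given voltage and clock scale factors."""
    return (voltage_scale ** 2) * clock_scale

stock_power_w = 450                   # assumed stock board power
scaled = relative_power(0.92, 0.95)   # e.g. -8% voltage, -5% clocks

print(f"~{scaled:.0%} of stock power, roughly {stock_power_w * scaled:.0f} W")
```

In practice leakage and fixed platform power mean the real saving is smaller, but the direction holds.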
 
The most disappointing news possible. Another generation where they are decidedly outmatched. GG no re


Humour me for a second: do you need DLSS on more powerful GPUs? You get the best image quality and more performance with more powerful GPUs, or have people forgotten about that?

I don't look to get more powerful hardware just to compromise.
 