
RDNA 3 rumours Q3/4 2022

AFAIR, the AMD PR person said something more along the lines that, due to competitive pressure, they won't leave performance on the table, which to me implies that they will once again clock as high as possible in most SKUs. I'm sure they'll have one SKU where the >50% perf/watt will be true, but the rest will be clocked as high as possible.

That can mean anything, and AMD probably saw NV pushing more power with their parts and decided to do the same. If you are targeting a 400W+ TBP you can build a core that will use that power in the sane part of the v/f curve. Provided N31 was built to use 400+W it will do so, and it can easily hit the perf/watt claims AMD have already made. The 6900XT did it before, as did the 5700XT, so I don't think it will change this gen.

Lower-tier parts have been worse from AMD, like the 6700XT, which has the same TBP as the 5700XT but only around 30% more performance because it was clocked to the moon. Same as the 6500XT.
 
AFAIR, the AMD PR person said something more along the lines that, due to competitive pressure, they won't leave performance on the table, which to me implies that they will once again clock as high as possible in most SKUs. I'm sure they'll have one SKU where the >50% perf/watt will be true, but the rest will be clocked as high as possible.
I thought the rumour was AMD won't go balls to the wall, and will leave Nvidia to take the power-hungry crown. But if AIBs want to go all out, there is room and the option for them to do that.
 
The 2080 Ti had 4,352 CUDA cores and the 3090 Ti has 10,496; that is roughly 2.4x, yet it's not 140% faster. According to TPU the 3090 Ti is 67% faster.

That's because the Ampere CUDA cores don't have the same shader performance as the Turing CUDA cores. They increased the number of them, so it amounts to more RT and compute cores but takes away some space from the shader portion of the core; it's a rebalancing of the core to better suit the current direction of what an Nvidia GPU is.

AMD also need much greater RT performance, and one way they can do that is the same way Nvidia did it, so that 2.4x core count probably isn't going to scale, just like Nvidia's didn't going from Turing to Ampere.
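
To put rough numbers on that, here is a quick back-of-envelope sketch; the figures are just the ones quoted above, and the "per-core scaling" line is my own shorthand, not an official metric:

```python
# Core count went up ~2.4x from the 2080 Ti to the 3090 Ti, but the measured
# speedup (per TPU's relative performance chart, as quoted above) is only 67%.
turing_cores = 4352     # RTX 2080 Ti CUDA cores
ampere_cores = 10496    # RTX 3090 Ti CUDA cores
measured_speedup = 1.67 # 3090 Ti vs 2080 Ti, per TPU

core_ratio = ampere_cores / turing_cores
print(f"Core-count ratio: {core_ratio:.2f}x")                          # ~2.41x
print(f"Measured speedup: {measured_speedup:.2f}x")
print(f"Effective per-core scaling: {measured_speedup / core_ratio:.0%}")  # ~69%
```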

Okay, but what NV did with Ampere was just double FP32 throughput and call that 2.4x more CUDA cores. Sure, that is great if you're running FP32 code, but if you have any INT32 in your code you effectively only have half the performance, so INT32 was just 1.2x more execution units with 1.2x more clockspeed, which totals 44% more INT32 performance.

So Ampere both did and didn't have 2.4x the cores, because it depended on the workload.

RDNA3 seems to just be a flat 2.4x more INT and FP though, so there is more room for scaling closer to the theoretical max. With clockspeeds between 30% and 50% higher you are looking at a range of roughly 3.1x to 3.6x the TFLOPs. Expecting a meagre 50% more performance from that level of uplift is just really pessimistic, unless you are talking ~300W, where it would satisfy the +50% perf/watt claim.

If TBP is around 420W though, I do expect 2x or so more performance from the 7950XT than the 6900XT, which is going to put it at 4090 levels or ahead.
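
The arithmetic behind those figures, for anyone who wants to sanity-check it; everything here is a rumour or a marketing claim taken at face value, not a confirmed spec:

```python
# Ampere vs Turing: the INT32 path only scaled by unit count x clocks.
int32_unit_scale = 1.2
clock_scale = 1.2
int32_uplift = int32_unit_scale * clock_scale - 1
print(f"Ampere INT32 uplift: {int32_uplift:.0%}")           # ~44%

# Rumoured RDNA3: a flat 2.4x more ALUs, clocks 30-50% higher.
alu_scale = 2.4
for clock_gain in (1.3, 1.5):
    print(f"Theoretical throughput at +{clock_gain - 1:.0%} clocks: "
          f"{alu_scale * clock_gain:.1f}x")                  # 3.1x .. 3.6x
```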
 
Ada ain't it; 25 fps at 4K in Cyberpunk.

Even Nvidia can't do that though; the 4090 gets 22 fps in Cyberpunk without DLSS.
Yes, was going to say similar.
Nvidia's RT loss is better than AMD's, but while TPU's RT performance chart is a bit harsh (it only uses Control), to me it shows they all fall off a cliff:
[Image: TPU relative RT performance chart]
(1440P since this was from the Arc review.)

-51% is certainly better than -74%, but I'd call both a cliff, just not a sheer 90° drop (which I guess, to stretch this metaphor, would be a card without RT).

Call me when the RT tax is -15% or so!
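
Just to illustrate the "cliff" with the percentages from that chart, assuming a made-up 100 fps raster baseline:

```python
# Hypothetical 100 fps rasterisation baseline; the RT-cost percentages are
# the ones quoted from TPU's chart above, plus the wished-for -15%.
baseline_fps = 100
for label, rt_cost in (("Nvidia", 0.51), ("AMD", 0.74), ("wished-for", 0.15)):
    print(f"{label}: {baseline_fps * (1 - rt_cost):.0f} fps "
          f"after a -{rt_cost:.0%} RT hit")   # 49 fps, 26 fps, 85 fps
```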
 
Okay, but what NV did with Ampere was just double FP32 throughput and call that 2.4x more CUDA cores. Sure, that is great if you're running FP32 code, but if you have any INT32 in your code you effectively only have half the performance, so INT32 was just 1.2x more execution units with 1.2x more clockspeed, which totals 44% more INT32 performance.

So Ampere both did and didn't have 2.4x the cores, because it depended on the workload.

RDNA3 seems to just be a flat 2.4x more INT and FP though, so there is more room for scaling closer to the theoretical max. With clockspeeds between 30% and 50% higher you are looking at a range of roughly 3.1x to 3.6x the TFLOPs. Expecting a meagre 50% more performance from that level of uplift is just really pessimistic, unless you are talking ~300W, where it would satisfy the +50% perf/watt claim.

If TBP is around 420W though, I do expect 2x or so more performance from the 7950XT than the 6900XT, which is going to put it at 4090 levels or ahead.

No one wants to see AMD stick it to Nvidia more than me. What I want to see on the 3rd of next month is AMD comparing the 7900XT to the 4090 in Cyberpunk 2077 with RT cranked right up and boasting about a 10% performance lead; if they do that, I will go out and buy an expensive bottle of Champagne I can't afford and celebrate.

Because Nvidia really need to be taken down a few pegs, and nothing would be better than to do it while playing their own game, literally.

But I want to temper expectations. I think one of the worst things we can do to AMD is have high expectations, where matching or even beating Nvidia is expected, because if they don't, that could depress what might still be a very good card launch in its own right.

We need to judge AMD's new card on its own merit; it can still be a better card, better for the market, even if it's not faster.
 
Just delivering cards whose performance doesn't fall off a cliff when turning RT on would be nice.

I meant relative to Nvidia's own drop in framerates when turning on RT. Nvidia's is bad enough, but AMD's is even worse.

Your post was very clear and accurate but applies to both Nvidia and AMD. I think you will agree that it is hardly a win to claim Nvidia are better than AMD based purely on the fact that one drops to 30 FPS compared to the other dropping to 20 FPS. Neither is playable if 30 FPS is an average.

This is the difference between crashing into a brick wall at 150 miles per hour with a helmet on vs without a helmet on. Neither will save your life and neither is ideal.

We have now reached a stage where almost £2,000 gets you a so-called "top tier" GPU that has to use compression techniques to get playable frames with RT on. In fact, I can't tell if it is satire that we are being sold image-degrading compression techniques and being told they are "better than native".
 
Your post was very clear and accurate but applies to both Nvidia and AMD. I think you will agree that it is hardly a win to claim Nvidia are better than AMD based purely on the fact that one drops to 30 FPS compared to the other dropping to 20 FPS. Neither is playable if 30 FPS is an average.

This is the difference between crashing into a brick wall at 150 miles per hour with a helmet on vs without a helmet on. Neither will save your life and neither is ideal.

We have now reached a stage where almost £2,000 gets you a so-called "top tier" GPU that has to use compression techniques to get playable frames with RT on. In fact, I can't tell if it is satire that we are being sold image-degrading compression techniques and being told they are "better than native".

It's not satire, it's marketing; it's what they want you to think. What I find really sickening is that some reviewers parrot it like they believe it themselves.
 
The problem with AMD's ray tracing comes when you have/had these two things:

- lack of FSR in a given game (which was the case for 1-2 years); this made RT a complete no-go for AMD except on the very top-end RDNA 2 GPUs, even when only 1-2 RT effects were implemented (unless AMD sponsored the title, where said effects were dialled down even further, i.e. in resolution and in where they were used)
- games with more complex/heavier RT effects, where RDNA 2 falls apart

The CB chart is somewhat decent/fair in the sense that they aren't using titles like DL 2, CP 2077 or Chernobylite, where RT effects are very heavy; only Metro EE is what I would classify as the main RT title here:

[Image: CB RT benchmark chart]

And obviously there are many other, newer titles coming out now.

Your post was very clear and accurate but applies to both Nvidia and AMD. I think you will agree that it is hardly a win to claim Nvidia are better than AMD based purely on the fact that one drops to 30 FPS compared to the other dropping to 20 FPS. Neither is playable if 30 FPS is an average.

This is the difference between crashing into a brick wall at 150 miles per hour with a helmet on vs without a helmet on. Neither will save your life and neither is ideal.

We have now reached a stage where almost £2,000 gets you a so-called "top tier" GPU that has to use compression techniques to get playable frames with RT on. In fact, I can't tell if it is satire that we are being sold image-degrading compression techniques and being told they are "better than native".

Are people still arguing against this, despite there being many in-depth videos (not just DF but also HU, TPU and Gamers Nexus) showing where DLSS and even FSR can often produce better IQ in various areas than native + AA? :confused: Of course there are plenty of spots in a game world where one could take screenshots and go "upscaling bad", but at the same time there are also plenty of areas where one can take screenshots and/or videos showing better and more detail being rendered, or, as is most often the case with upscaling tech, considerably less shimmering and better temporal stability; for many people that shimmering is far worse than a slight softness to the image.
 
Your post was very clear and accurate but applies to both Nvidia and AMD. I think you will agree that it is hardly a win to claim Nvidia are better than AMD based purely on the fact that one drops to 30 FPS compared to the other dropping to 20 FPS. Neither is playable if 30 FPS is an average.

This is the difference between crashing into a brick wall at 150 miles per hour with a helmet on vs without a helmet on. Neither will save your life and neither is ideal.

We have now reached a stage where almost £2,000 gets you a so-called "top tier" GPU that has to use compression techniques to get playable frames with RT on. In fact, I can't tell if it is satire that we are being sold image-degrading compression techniques and being told they are "better than native".
First of all, I think that was the Uber RT mode that hasn't been released yet, running at 4K rather than 1440p, the generally accepted resolution for RT. The consensus is that AMD is going to be manhandled in the RT department this cycle, especially in more reasonable RT games (CP77 Overdrive isn't reasonable, it's a tech demo). I don't know if people have read the Ada whitepaper, but Nvidia has found a nifty recursive hardware acceleration framework for RT, and then there's the opacity mapping block which can do RT alpha testing in hardware. All you need is a game with lots of transparent items like foliage and AMD's RT will start looking like a joke.

Regarding DLSS, all those who feel it's just marketing shouldn't be complaining about 700W cards in 2024. AMD isn't going to give you a magical 350W card that can compete with Nvidia.
 
If the 7900 & 7950 XT have the chops, AMD are going to want to charge you for them, and that means most of their customer base is eyeing up the 4090 right now.

We will know by Wednesday imho; if there is still silence, there's your answer: they are just going to slot it in with Nvidia's overpriced stack at the high end. It may well be, of course, that they are targeting the lower tiers, which commercially makes a good deal of sense I guess.
 
If the 7900 & 7950 XT have the chops, AMD are going to want to charge you for them, and that means most of their customer base is eyeing up the 4090 right now.

We will know by Wednesday imho; if there is still silence, there's your answer: they are just going to slot it in with Nvidia's overpriced stack at the high end. It may well be, of course, that they are targeting the lower tiers, which commercially makes a good deal of sense I guess.

Intel put Raptor Lake up on pre-order the day after AMD launched Zen 4...

That's someone AMD could learn from: just pee on your competitor's parade, it's your job to do that.
 
But I want to temper expectations. I think one of the worst things we can do to AMD is have high expectations, where matching or even beating Nvidia is expected, because if they don't, that could depress what might still be a very good card launch in its own right.

My expectation depends on TBP.

If it is 420W then the maths works out to 2.1x the 6900XT in performance, unless AMD have changed how they do their perf/watt comparisons, but I doubt it. If TBP is lower then performance will be lower.
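
For anyone wondering where the 2.1x comes from, it is simply the >50% perf/watt claim multiplied by the rumoured power increase; both inputs are claims/rumours, not confirmed specs:

```python
# AMD's ">50% perf/watt" claim taken at face value, combined with a rumoured
# ~420W TBP for N31 against the 6900XT's ~300W reference TBP.
perf_per_watt_gain = 1.5   # claimed uplift
n31_tbp = 420              # rumoured TBP (W)
n21_tbp = 300              # 6900XT TBP (W)

relative_perf = perf_per_watt_gain * (n31_tbp / n21_tbp)
print(f"Implied performance vs 6900XT: {relative_perf:.1f}x")   # ~2.1x
```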

I don't see the point in being overly cautious but also wouldn't want to be overly optimistic either.

I did the same calcs with the 5700XT and it was bang on the money, even though plenty of people at the time thought N21 would compete with the 2080Ti and wouldn't touch the 3080.
 
If the 7900 & 7950 XT have the chops, AMD are going to want to charge you for them, and that means most of their customer base is eyeing up the 4090 right now.

We will know by Wednesday imho; if there is still silence, there's your answer: they are just going to slot it in with Nvidia's overpriced stack at the high end. It may well be, of course, that they are targeting the lower tiers, which commercially makes a good deal of sense I guess.
Nvidia priced their cards high to sell 3000 series cards. If AMD fancies helping Nvidia sell old stock, then by all means they should price their cards high as well.
 
Are people still arguing against this, despite there being many in-depth videos (not just DF but also HU, TPU and Gamers Nexus) showing where DLSS and even FSR can often produce better IQ in various areas than native + AA? :confused: Of course there are plenty of spots in a game world where one could take screenshots and go "upscaling bad", but at the same time there are also plenty of areas where one can take screenshots and/or videos showing better and more detail being rendered, or, as is most often the case with upscaling tech, considerably less shimmering and better temporal stability; for many people that shimmering is far worse than a slight softness to the image.

You are falling for the marketing nonsense where the only form of AA is TAA or TAA+DLSS. It is easy to look better than a native image with TAA, because TAA literally smears the screen to "soften" edges.

Prior to TAA you had Supersampled Anti-Aliasing (SSAA or FSAA) or Multisample Anti-Aliasing (MSAA), both of which give superior IQ to TAA.

So taking a TAA image as a baseline and "improving" it does not equal "better than native", because native with TAA is already a lot worse than native with MSAA, for example.
 