RDNA 3 rumours Q3/4 2022

LOL Enermax just trolled the internet.

Gotta say though, the power requirements don't align with what's been leaked on social media. Maybe someone at AMD decided to turn up the juice.
 
I wouldn't say "trolled"; it could well be genuine, as AMD and Nvidia obviously have to provide power supply makers with info on what their new GPUs will require (which Nvidia has done).

I also wonder if we might see something similar to Ryzen 7xxx where, if allowed, they'll use a **** ton of power but there's an "eco" setting to enable for the best efficiency.
 
I doubt AMD, its partners and all the social media influencers feel the same way :D

Look at how many clicks the videos regarding the 4090 power requirements have gotten.
 
So, similar to the 3090 vs 6900XT situation.

What card are you running again? :D ;)

Hahaha. :cry: Very funny. If we'd had non-AIB cards available I'd have ended up with a 6900XT, but AMD didn't bother selling direct to us last time around, hence I bought the 3090 FE: it only cost me £100 more at the time and had better RT, DLSS and 24GB of VRAM. I'd just done a new build and needed a GPU, so my hand was forced, though I was still reluctant to pay what I did. If I hadn't managed to grab an FE I wouldn't have bothered at all.

I waited for AMD and they disappointed. Really hoping they don’t do it again this time, hence my earlier post saying I wish they’d at least leak some solid info so that we have an idea of what to expect.

I’m just fortunate that I don’t need to upgrade at present so absolutely no rush for me.
 
I think all the efficiency talk has been taken out of context. In an interview with Tom's Hardware, AMD's Sam Naffziger said:

“Performance is king,” said Naffziger, “but even if our designs are more power-efficient, that doesn’t mean you don’t push power levels up if the competition is doing the same thing. It’s just that they’ll have to push them a lot higher than we will.”

 
Well said @HRL, especially the first sentence ;)

I think DLSS 3/frame generation will also throw a spanner in the works for AMD, just as having no DLSS competitor for nearly two years did (only counting DLSS 2+ and FSR 2, as v1 was ****). Believe it or not, people don't want to wait months, let alone years, for tech like that, and some will be quite happy to pay a small premium if it means getting it "now". Having tested it for myself in Spider-Man over the weekend, FSR 2.1 has nearly caught up with DLSS, but it still doesn't beat it on temporal stability, shimmering and overall reconstruction, especially in the lower presets, i.e. balanced and performance modes.

People will say "fake frames" when it comes to frame generation/DLSS 3, but at the end of the day, how do you measure performance? FPS and frame times? That's exactly what DLSS 3/FG improves. The only con of DLSS 3/FG in "normal" gameplay will probably be latency, but that will come down entirely to the type of game and keyboard+mouse vs controller use. In terms of artifacts, there don't look to be any noticeable issues outside of slowing the footage right down and capturing the "fake frame". If what DF/Nvidia have said is right and Nvidia have been working on this for the past 6-7 years, then given AMD's lack of focus on ML/AI so far, how long are we going to be waiting for them to deliver this feature?

AMD was two years late to the DLSS party, but honestly, as an Nvidia user, I really didn't notice. DLSS 2.0 adoption was painfully slow. FSR adoption, on the other hand, is much faster.

The thing is, DLSS 3 is frame interpolation, which is nothing new; others have done it successfully in the past. I expect AMD to have an answer to that very fast. AMD probably thought PC gamers would not stand for cheap tricks like frame interpolation, but seeing Nvidia doing it and marketing it as something "amazing" will force them to implement it quickly.

Say, in a year's time when I'm looking to upgrade, if performance is similar between Nvidia and AMD and FSR has implemented frame generation, I will go for the AMD GPU, as FSR adoption is outpacing DLSS quite noticeably and no one wants an Nvidia monopoly.
 
Hol' up!

There are currently 106 FSR-supported games according to AMD's page. There are 192 games (edit: removed apps from the original 204 count) supporting DLSS, and that's purely DLSS, disregarding the AI-enhanced games.

The only reason FSR adoption appears to have been really quick is because it literally had to be. Nvidia already had a long list of DLSS-supported games, and devs had already implemented this form of upscaling, so incorporating AMD's variant wasn't exactly going to be an extra time-consuming task either.

Frame gen requires specific hardware, and so far we have no idea what the new AMD cards have on board. If the hardware doesn't exist, how are they going to compete with frame gen as a way to expand FSR?

I don't like Nvidia's practices at all, but let's not beat around the bush: AMD was behind with this tech, still is, and it isn't unreasonable to expect them to remain behind for at least another gen.
 
DLSS 2 was very slow until they got it added into the game engines, and Nvidia have now bundled all their tech into their Streamline solution, which lets them deploy even faster (Intel were on board with this, but AMD refuse to partake).

FSR 1 was very quick to spread due to how simple it was, but it was awful; I refused to use it in the AMD games I was playing even when I could have done with more FPS at 3440x1440. FSR 2 adoption has been pretty slow as well, and FSR 2.1 even slower, although that will obviously change going forward now that it will be on the roadmap for games currently in development.

Essentially, by the time FSR 2/2.1 arrived it was too late: I had already played and completed the games I wanted to, and just so happened to have the better experience/solution of DLSS all along. Some examples where I would have been waiting years on FSR are CP 2077, Metro EE and Control.

Frame interpolation has been around for a long time, just not widely used in gaming. I think some VR headsets use it? And someone posted about Star Wars: The Force Unleashed having an option on consoles, which looked pretty bad and nowhere near what we have seen so far from DLSS 3/FG, though obviously we need to wait for more examples outside of Nvidia's PR games of CP 2077 and Spider-Man. DF have confirmed Intel are getting their version out ASAP, but they weren't sure about AMD, which to me suggests they don't think it will happen any time soon given AMD's lack of focus on ML/AI. Of course, they could very well surprise us and announce something on the 3rd, but going by their history of always playing catch-up, I wouldn't hold my breath.

And a lot of those games are FSR 1 based, which is not a DLSS competitor (AMD even said this); it's FSR 2/2.1 that competes with DLSS, and that's in even fewer games. Then look at DLSS 3, which is already confirmed for 35 current and upcoming apps/games. I wonder how much of this is down to AMD not pushing to get their tech into games; it's one of the main reasons they like open source and a throw-it-over-the-fence approach, as they generally come across like they don't want their limited resources/dev time spent assisting game development studios.

And that is very true too, as per AMD's own timeline chart:

[Image: AMD's FSR release timeline chart]

TBF, frame gen could be done through a software solution too, as we have seen before, but the question is: would it be a good experience? FSR 2/2.1 has shown you can get decent upscaling tech without dedicated hardware. It's still not quite there in terms of overall IQ, but with some extra refinement I'm sure it will get to the point where it is on par with or exceeds DLSS.
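For anyone wondering why "just do it in software" is harder than it sounds: the naive approach is to blend two frames, which ghosts badly on any motion. A minimal sketch (my own illustration with numpy, nothing to do with how DLSS 3 or any shipping solution actually works; they use motion vectors/optical flow):

```python
import numpy as np

def naive_inbetween(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    """Crudest possible 'frame generation': average two rendered frames.

    Any moving object turns into a double image (ghosting), which is why
    real solutions track motion (optical flow) instead of blending pixels.
    """
    # Blend in float to avoid uint8 overflow, then convert back.
    return ((frame_a.astype(np.float32) + frame_b.astype(np.float32)) / 2).astype(np.uint8)

# Two dummy 1080p RGB frames stand in for consecutive rendered frames.
prev_frame = np.zeros((1080, 1920, 3), dtype=np.uint8)        # all black
next_frame = np.full((1080, 1920, 3), 255, dtype=np.uint8)    # all white
print(naive_inbetween(prev_frame, next_frame)[0, 0])          # -> [127 127 127]
```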
 
That is the question. Yeah, it can be done in software, but just like RT, doing it in software is not a good experience at all. Many of us questioned why Nvidia didn't enable FG on the 20 or 30 series, as they have the hardware to support it, but multiple Nvidia engineers have stated that the efficiency of that generation's hardware would mean the experience isn't a great one and people would complain, which is fair enough.

So if last gen's hardware supporting FG is not efficient enough, then there's no way software is going to produce anything better, really.
 
Never say never :D

I'm sure with time and some magical driver work it could be possible, but I'll be amazed if they get a solution between now and 2024/2025 :p
 
Hey, I'm all hopeful; if it means I don't have to buy a new GPU that costs £1000+ a second time, then that's great news! But the likelihood is very, very slim, let's be real :p
 
The bit about Nvidia already having a long list of DLSS-supported games: I'm pretty certain that wasn't true prior to the release of the 3000 series. I think only a few games supported DLSS before the 3000 series came out. The quick adoption of DLSS has only been in the last two years, which is the same time frame that FSR has been around.
 
MLID said the 4090 is 60 to 80% faster than the 3090, not the 3090 Ti. That prediction initially came from AMD insiders, who also said it's what they are aiming at with RDNA 3 and what they can work with; what that means exactly I don't know.

Reviewers he's talking to have confirmed that is bang on. Nvidia's own slides say it's on average 65% faster than the 3090 Ti, and those same reviewers say it's about 100% faster than the 3090 in RT.

I'm saying 50% faster than the 3090 because I think AMD can achieve at least that. I also think RT performance is still going to lag some way behind the 4090, but it will be much better than the 3090's.

IMO a lot of reviewers will see that RT performance as not good enough and declare it a damp squib. With that, I don't think AMD can charge much more than $1000, and even at that the market share will remain unchanged. The 4090 will be a runaway success, and so the 5090 will go up to $1800 if not $2000, with everything below it being pulled up too.

They will all say it's all AMD's fault, nothing to do with us, and that we need Raja to make a Mk3 Vega.

So 50% faster than a 3090 ~= 50% faster than the 6950XT.

For a GPU with 2.4x the shaders, around 50% more of everything else, and what looks to be a 30%+ clock speed bump providing around 3.1x the Tflops, just 50% more performance would be terrible scaling. If the 3.5GHz ballpark is even close to real then we are actually talking a >50% clock speed bump and nearly 4x the Tflops of the 6900XT.

The 6900XT, with 2x everything and an 18% clock increase, managed 2x the performance of the 5700XT.

I think AMD getting 2x on average vs the 6900XT is within the realms of realistic, and it could be higher if the boost clock is closer to 3.5GHz than 3GHz like some are saying (others are saying as high as 3.8GHz, which is just nuts).
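Rough sanity check on those multiples (back-of-the-envelope only: the shader ratio and clocks are rumours, and I'm assuming a ~2.25GHz 6900XT boost clock):

```python
# Back-of-the-envelope Tflops scaling from the rumoured RDNA 3 specs.
# Theoretical FP32 throughput scales with shader count x clock speed.
SHADER_RATIO = 2.4        # rumoured shader count vs the 6900XT
BASE_CLOCK_GHZ = 2.25     # assumed 6900XT boost clock

for rumoured_clock in (3.0, 3.5):
    clock_bump = rumoured_clock / BASE_CLOCK_GHZ - 1
    tflops_ratio = SHADER_RATIO * (1 + clock_bump)
    print(f"{rumoured_clock:.1f} GHz -> +{clock_bump:.0%} clock, "
          f"~{tflops_ratio:.1f}x the 6900XT's Tflops")
# 3.0 GHz -> +33% clock, ~3.2x the Tflops
# 3.5 GHz -> +56% clock, ~3.7x the Tflops (i.e. "nearly 4x")
```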

Personally I think the 7950XT is going to be 10% faster than the 4090 in pure raster at the low end. It could range to 25-30% quite easily, making it one whole tier higher in performance. Even a cut-down 20GB 7900XT could easily be about on par with, or a bit ahead of, the 4090.

RT is far harder to guess, but Intel did a good job first time around with Arc, so there's no reason why AMD can't catch up.

EDIT: on the RT front, though, if AMD can match Ampere's performance loss while holding a raster lead, they can still do well. RT tends to drop 3090 performance by around 40% depending on the game, so the RT scores would be as follows.

If we normalise to a 3090 in raster and then use the above scaling (RT = 60% of raster performance, and the 4090 = double the 3090 in RT):

card     raster   RT
3090        100    60
4090        180   120
7950XT      200   120

So while technically AMD would be behind, because they lose more performance when turning RT on, the end result is about the same. And this is with just a 10% raster lead over the 4090.
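For anyone who wants to poke at the numbers, the whole table falls out of two assumptions. A quick sketch (Python, purely illustrative; the 7950XT figures are speculation, as above):

```python
# Normalise raster to 3090 = 100 and apply the assumptions stated above.
RT_HIT = 0.60                       # RT retains ~60% of raster perf (~40% drop)

raster_3090 = 100
rt_3090 = raster_3090 * RT_HIT      # 60

raster_4090 = 1.8 * raster_3090     # 180 (~80% faster in raster)
rt_4090 = 2 * rt_3090               # 120 (4090 = double the 3090 in RT)

raster_7950xt = 2.0 * raster_3090   # 200 (speculative 10% raster lead over the 4090)
rt_7950xt = raster_7950xt * RT_HIT  # 120 (same end result despite the bigger RT hit)

for card, (r, rt) in {"3090": (raster_3090, rt_3090),
                      "4090": (raster_4090, rt_4090),
                      "7950XT": (raster_7950xt, rt_7950xt)}.items():
    print(f"{card:7} raster {r:5.0f}  RT {rt:5.0f}")
```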
 
Didn't someone from AMD PR say pretty much that someone decided to turn up the juice? Something along the lines of "we did get our 50% perf/watt improvement, but then lost it all once we saw we could clock the cards way higher and charge more for them"? Although, being PR, they may have phrased it differently :p
 
The 2080 Ti had 4,352 CUDA cores; the 3090 Ti has 10,752. That's nearly 2.5x the cores, yet it's nowhere near 150% faster: according to TPU, the 3090 Ti is 67% faster.

That's because Ampere's CUDA cores don't have the same shader performance as Turing's. Nvidia increased the number of them in a way that amounts to more RT and compute cores while taking some space away from the shader portion of the core; it's a rebalancing of the core to better suit the current direction of what an Nvidia GPU is.

AMD also needs much greater RT performance, and one way to get it is the same way Nvidia did, so that 2.4x core count probably isn't going to scale, just like Nvidia's didn't going from Turing to Ampere.
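To put a number on how badly that scaled (core counts from the spec sheets, the 67% figure from TPU as quoted above; illustrative only):

```python
# More cores != proportionally more fps: Turing -> Ampere as a worked example.
cores_2080ti = 4352
cores_3090ti = 10752        # 84 SMs x 128 FP32 lanes
measured_speedup = 1.67     # TPU: 3090 Ti ~67% faster than the 2080 Ti

core_ratio = cores_3090ti / cores_2080ti      # ~2.47x
efficiency = measured_speedup / core_ratio    # ~0.68
print(f"{core_ratio:.2f}x the CUDA cores, {measured_speedup:.2f}x the performance "
      f"-> ~{efficiency:.0%} per-core scaling")
# ~2.5x the cores but only 1.67x the performance, because Ampere's doubled
# FP32 lanes don't behave like full Turing cores under real workloads.
```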
 
Don't think so. They said it was a >50% perf/watt increase, and that was about it.

They did similar going from RDNA to RDNA 2: they increased TBP by 33% with a claimed 50% perf/watt increase. In the end, the 6900XT has 2x the performance of a 5700XT at 4K, which is exactly what those numbers come out to.

So if they are doing the same for RDNA 2 to RDNA 3 when making the perf/watt claim, it could be very promising: a 420W TBP is a 40% increase, and with the 50% perf/watt lower bound that nets the 7950XT 2.1x the performance of the 6900XT.
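The arithmetic is just (power ratio) x (perf/watt ratio). A quick sketch using the known RDNA -> RDNA 2 TBPs and the rumoured 420W figure:

```python
# Performance scales as (power ratio) x (perf-per-watt ratio).
def projected_perf(tbp_old_w: float, tbp_new_w: float, perf_per_watt_gain: float) -> float:
    return (tbp_new_w / tbp_old_w) * (1 + perf_per_watt_gain)

# RDNA -> RDNA 2: 225W -> 300W at +50% perf/watt -> ~2.0x
# (matches the 5700XT -> 6900XT result at 4K)
print(projected_perf(225, 300, 0.50))   # 2.0

# RDNA 2 -> RDNA 3: 300W -> rumoured 420W with the +50% lower bound -> ~2.1x
print(projected_perf(300, 420, 0.50))   # 2.1
```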
 
AFAIR, the AMD PR person said something more along the lines that, due to competitive pressure, they won't leave performance on the table, which to me implies that they will once again clock most SKUs as high as possible. I'm sure they'll have one SKU where the >50% perf/watt holds true, but the rest will be pushed to the limit.
 