LOL Enermax just trolled the internet.
Gotta say though, the power requirements don't align with what's been leaked on social media. Maybe someone at AMD decided to turn up the juice.
So similar to the 3090 vs 6900XT situation. Personally, if AMD could match the top-tier product for less, even with lower RT performance, I might be interested.
So similar to the 3090 vs 6900xt situation.
What card are you running again?
“Performance is king,” said Naffziger, “but even if our designs are more power-efficient, that doesn’t mean you don’t push power levels up if the competition is doing the same thing. It’s just that they’ll have to push them a lot higher than we will.”
Well said @HRL, especially the first sentence.
I think DLSS 3/frame generation will also throw a spanner in the works for AMD, just as having no DLSS competitor for nearly 2 years did (only counting DLSS 2+ and FSR 2, as v1 was ****). Believe it or not, people don't want to be waiting months, let alone years, for tech like that, and some will be quite happy to pay a small premium if it means getting it "now". FSR 2.1 has nearly caught up with DLSS now, having tested it for myself in Spider-Man over the weekend, but it still doesn't beat DLSS on temporal stability, shimmering and "overall" reconstruction, especially in the lower presets, i.e. the balanced and performance modes.
People will say "fake frames" when it comes to frame generation/DLSS 3, but at the end of the day, how do you measure performance? FPS and frame latency? Which is what DLSS 3/FG improves... The only con of DLSS 3/FG in "normal" gameplay will possibly be latency, but that will come down entirely to the type of game and k+m vs controller use. In terms of artifacts, there don't look to be any noticeable issues outside of slowing the footage right down and capturing the "fake frame". If what DF/Nvidia have said is accurate and Nvidia have been working on this for the past 6-7 years, then with AMD's lack of focus on ML/AI so far, how long are we going to be waiting for them to deliver this feature?
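To put the FPS-vs-latency point into rough numbers (the 60 fps baseline is just an assumed example, and this deliberately ignores Reflex and frame pacing):

```python
# Toy numbers only: why frame generation can double displayed FPS while input
# is still sampled at the native render rate. Real pipelines are more involved,
# so treat this purely as an illustration.

native_fps = 60
rendered_frame_time_ms = 1000 / native_fps        # ~16.7 ms per rendered frame

displayed_fps = native_fps * 2                    # one generated frame per rendered frame
displayed_frame_time_ms = 1000 / displayed_fps    # ~8.3 ms between displayed frames

print(f"rendered:  {native_fps} fps ({rendered_frame_time_ms:.1f} ms/frame)")
print(f"displayed: {displayed_fps} fps ({displayed_frame_time_ms:.1f} ms/frame)")

# Input is still only sampled once per rendered frame, so responsiveness tracks
# the ~16.7 ms figure (plus whatever delay comes from holding a frame to
# interpolate), even though the on-screen frame rate reads 120 fps.
```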
AMD was 2 years late to the DLSS party, but honestly, as an Nvidia user I really didn't notice. DLSS 2.0 adoption was painfully slow. On the other hand, FSR adoption is much faster.
The thing is, DLSS 3 is frame interpolation, which is nothing new; others have done it successfully in the past (see the toy sketch below). I expect AMD to have an answer to that very fast. AMD probably thought PC gamers would not stand for cheap tricks like frame interpolation, but seeing Nvidia do it and market it as something "amazing" will force them to implement it quickly.
Say, in a year's time when I'm looking to upgrade, if performance is similar between Nvidia and AMD and FSR has implemented frame generation, I will go for the AMD GPU, as FSR adoption is outpacing DLSS quite noticeably and no one wants an Nvidia monopoly.
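On the "frame interpolation is nothing new" point, here's a toy sketch of the most basic version of the idea, a straight blend of two frames (numpy assumed). It's obviously nothing like Nvidia's optical-flow approach, just the decades-old trick TVs and video tools have used:

```python
import numpy as np

# Crude midpoint blend between two frames: the simplest form of frame
# interpolation. This is NOT how DLSS 3 works; it only shows that basic
# interpolation itself is old tech.

def blend_midpoint(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    """Return a synthetic in-between frame by averaging two RGB frames."""
    mixed = (frame_a.astype(np.float32) + frame_b.astype(np.float32)) / 2
    return mixed.astype(np.uint8)

# Dummy 1080p frames just to show the call
a = np.zeros((1080, 1920, 3), dtype=np.uint8)       # all-black frame
b = np.full((1080, 1920, 3), 255, dtype=np.uint8)   # all-white frame
mid = blend_midpoint(a, b)                           # uniform grey in this toy case
```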
Hol' up!
There are currently 106 FSR-supported games according to AMD's page. There are 192 games supporting DLSS (edit: removed apps from the original 204 count), and that's purely DLSS, disregarding the AI-enhanced games.
The only reason it appears that FSR adoption has been really quick is because it literally had to be. Nvidia already had a long list of DLSS-supported games, and devs had already implemented this form of upscaling, so incorporating AMD's variant wasn't exactly going to be an extra time-consuming task either.
Frame gen requires specific hardware, and so far we have no idea what the new AMD cards have onboard. If the hardware doesn't exist, then how are they going to compete with frame gen to expand FSR?
I don't like Nvidia's practices at all, but let's not beat around the bush: AMD was behind with this tech, still is, and it isn't unreasonable to expect they will remain behind for at least another gen.
That is the question. Yeah, it can be done in software, but just like RT, doing it in software is not a good experience at all. Many of us questioned why Nvidia didn't enable FG on the 20 or 30 series, as they have the hardware to support it, but multiple Nvidia engineers have stated that the efficiency of that generation's hardware would make the experience a poor one and people would complain, which is fair enough.
So if last gen's hardware isn't efficient enough to support FG, then there's no way a pure software approach is going to produce anything better either, really.
The bolded bit (that Nvidia already had a long list of DLSS-supported games): I'm pretty certain that wasn't true prior to the release of the 3000 series. I think there were only a few games that supported DLSS before the 3000 series came out. The quick adoption of DLSS has only been in the last 2 years, which is the same time frame that FSR has been around.
MLID said the 4090 is 60 to 80% faster than the 3090, not the 3090Ti. That prediction initially came from AMD insiders, who also said it's what they are aiming against with RDNA 3 and what they can work with; what that means exactly, I don't know.
Reviewers he's talking to have confirmed that is bang on. Nvidia's own slides say it's on average 65% faster than the 3090Ti, and those same reviewers also say it's about 100% faster than the 3090 in RT.
I'm saying 50% faster than the 3090 because I think AMD can achieve that at least. I also think the RT performance is still going to lag some way behind the 4090, but it will be much better than the 3090.
IMO a lot of reviewers will see that RT performance as not good enough and declare it a damp squib. With that, I don't think AMD can charge much more than $1,000, and even at that price the market share will remain unchanged. The 4090 will be a runaway success, and so the 5090 will go up to $1,800 if not $2,000, with everything below it being pulled up too.
They will all say it's all AMD's fault, nothing to do with us, and we need Raja to make a Mk3 Vega.
Didn't someone from AMD PR say pretty much that? Something along the lines of "we did get our 50% perf/watt improvement, but then lost it all because we saw we could clock the cards way higher and charge more for them"? Although, being PR, they may have phrased it differently.
So 50% faster than a 3090 ~= 50% faster than the 6950XT.
For a GPU with 2.4x the shaders, around 50% more of everything else, and what looks to be a 30%+ clock speed bump, giving around 3.1x the TFLOPS, just 50% more performance is terrible scaling. If the 3.5 GHz ballpark is even close to real, then we are actually talking a >50% clock speed bump and nearly 4x the TFLOPS of the 6900XT.
The 6900XT, with 2x everything and clocks increased by 18%, managed 2x the performance of the 5700XT.
I think AMD getting 2x on average vs the 6900XT is within the realms of possibility, and it could be higher if the boost clock is closer to 3.5 GHz than 3 GHz like some are saying (others are saying as high as 3.8 GHz, which is just nuts).
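Putting rough numbers on that, using only the rumoured multipliers above (none of these inputs are confirmed specs):

```python
# Back-of-the-envelope TFLOPS scaling from the rumoured multipliers above.
# None of these inputs were confirmed specs at the time of posting.

shader_ratio = 2.4        # rumoured shader count vs the 6900XT
clock_low = 1.30          # "30%+ clock speed bump"
clock_high = 1.55         # ~3.5 GHz vs the 6900XT's ~2.25 GHz boost

print(f"low-clock estimate:  ~{shader_ratio * clock_low:.1f}x the 6900XT's TFLOPS")   # ~3.1x
print(f"high-clock estimate: ~{shader_ratio * clock_high:.1f}x the 6900XT's TFLOPS")  # ~3.7x
```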
Personally, I think the 7950XT is going to be 10% faster than the 4090 in pure raster at the low end. It could quite easily range to 25-30%, making it one whole tier higher in performance. Even a cut-down 20GB 7900XT could easily be about on par with, or a bit ahead of, the 4090.
RT is far harder to guess, though. Intel did a good job first time around with Arc, so there's no reason why AMD can't catch up.
EDIT: on the RT front, though, if AMD can match Ampere's performance loss while holding a raster lead, they can still do well. RT tends to drop 3090 performance by around 40% depending on the game.
If we normalise the 3090 to 100 in raster and use the above scaling (RT = 60% of raster performance for the 3090 and the hypothetical 7950XT, with the 4090 ~80% faster in raster and double the 3090 in RT), the scores would be:
card - raster -- RT
3090 - 100 -- 60
4090 - 180 -- 120
7950XT - 200 -- 120
So while technically AMD would be behind, because they lose more performance when turning RT on, the end result is about the same. And this is with just a 10% raster lead over the 4090.
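For clarity, the table above is just the stated assumptions run through some arithmetic; the "7950XT" row is entirely hypothetical and none of these figures are benchmarks:

```python
# The normalised table above, computed from the assumptions in the post:
#   - 3090 = 100 in raster, losing ~40% with RT on
#   - 4090 ~80% faster than the 3090 in raster, ~2x the 3090 in RT
#   - hypothetical "7950XT" ~2x the 6900XT/3090 in raster (roughly a 10% lead
#     over the 4090), with the same ~40% RT hit as Ampere
# Every figure here is speculation, not a measurement.

raster_3090 = 100
rt_3090 = raster_3090 * 0.60          # 60

raster_4090 = raster_3090 * 1.80      # 180
rt_4090 = rt_3090 * 2.0               # 120

raster_7950xt = raster_3090 * 2.0     # 200
rt_7950xt = raster_7950xt * 0.60      # 120

for card, raster, rt in [("3090", raster_3090, rt_3090),
                         ("4090", raster_4090, rt_4090),
                         ("7950XT", raster_7950xt, rt_7950xt)]:
    print(f"{card:7s} raster {raster:5.0f}  RT {rt:5.0f}")
```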