Not much excitement for RDNA3, compared to the RTX 4000 series

If AMD can do a 6700 XT successor with RX 6950 XT-like performance and improved RT for £400-£500, I think they will be winning pretty hard. In reality, prices will likely sit above the typical MSRP (unless AMD actually decides to sell a lot more reference models!).

This card in particular, I think, still offers very good 1080p performance and reasonable value (at least it does now).

AMD, like Nvidia, has an unfortunate habit of releasing far too many similar products (with different variations like RAM sizes and types), which tends to push up prices for consumers even more. This is likely to happen with the successors to the 6800, 6800 XT and 6900 XT.
 
Poor Volta?

As mentioned, AMD have been happy selling consoles as fast as they can make them. What few RDNA2 cards they did make also sold out at higher-than-normal prices.

I don't think RDNA3 will be back to the days when they couldn't compete at the high end, but I can't see them beating the 4090. If they can compete in the $500-$1000 range, I think that is a good outcome.

Also, whilst people pretend they don't care, RT does matter, and FSR needs to improve to DLSS 2 levels. I am happy they ditched FSR 1, which was ridiculous and little more than a sharpening filter.

I completely agree about RT. For me, FSR and DLSS are both great; obviously DLSS is more mature, but I expect things to converge in that space over the next few years: there is only so much information you can get out for the information you put in, and both approaches face the same fundamental limitations.

IMO, this generation of NV and AMD GPUs is going to be the most interesting in ages.

NVidia have thrown 3x the transistors for a <2x raster performance increase, largely due to dedicated RT, tensor hardware & more cache.
AMD are going for generic compute, albeit not as generic as the Vega days, so their raster performance should have closer to linear scaling vs transistors.
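
A rough back-of-the-envelope on that, in Python. The figures are the ballpark ones from this post (~3x transistors, <2x raster), not measured data, so treat it as illustrative only:

```python
# Rough perf-per-transistor comparison using the ballpark figures above.
# All numbers are illustrative estimates from the post, not measurements.

def perf_per_transistor(perf_gain, transistor_gain):
    """Relative performance-per-transistor vs. the previous generation."""
    return perf_gain / transistor_gain

# Nvidia (Ada vs Ampere, per the post): ~3x transistors for <2x raster
nv = perf_per_transistor(perf_gain=1.9, transistor_gain=3.0)

# A hypothetical near-linear case, roughly what the post expects from RDNA3
amd_guess = perf_per_transistor(perf_gain=1.9, transistor_gain=2.0)

print(f"Ada raster perf per transistor vs Ampere:  {nv:.2f}x")        # ~0.63x
print(f"Near-linear scaling example (RDNA3 guess): {amd_guess:.2f}x") # ~0.95x
```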

The ultimate question is: which approach scales best? It was no surprise last gen that NV easily won the RT performance crown, but they have added a LOT of transistors this gen to deliver the required gen-on-gen RT leap.

My complete guess is that RDNA2 has a ton of low-hanging fruit as a uarch and RDNA3 will be a raster beast, with ~2x RT performance. The MCM approach will hopefully keep prices at a more reasonable level, like what we saw with Zen 1, 2 & 3.
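
On the pricing point, here is a minimal sketch of why smaller dies tend to be cheaper per good chip, using the classic Poisson yield model. The defect density and die sizes below are made-up illustrative numbers, not AMD/TSMC figures, and packaging cost is ignored:

```python
import math

def poisson_yield(defect_density, area_mm2):
    """Classic Poisson die-yield model: Y = exp(-D * A)."""
    return math.exp(-defect_density * area_mm2)

def relative_silicon_cost(defect_density, area_mm2):
    """Cost of one *good* die scales with area / yield, since defective
    dies are discarded and the good ones carry their wafer cost."""
    return area_mm2 / poisson_yield(defect_density, area_mm2)

D = 0.001  # defects per mm^2 -- illustrative only, not a real process figure

monolithic = relative_silicon_cost(D, 600)      # one big 600 mm^2 die
chiplets   = 2 * relative_silicon_cost(D, 300)  # two 300 mm^2 dies

print(f"monolithic 600 mm^2:   {monolithic:6.0f} (arbitrary units)")  # ~1093
print(f"2 x 300 mm^2 chiplets: {chiplets:6.0f} (packaging ignored)")  # ~810
```

Smaller dies waste less silicon per defect, which is the Zen-style cost advantage being alluded to here.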
 
If it were not for the unlaunched 4070 12GB edition and the 4080 at £1200, there would be fappers all over the Ada stack already!
 
Even if it were true, people would still find a reason to buy nVidia.

Well if it is anything like rdna 2 vs ampere, it will be for the same reason(s) again:

- better upscaling software in terms of performance and IQ and better uptake in games
- better RT performance (which is even more important now than it was back on ampere release given the uptake it has seen)
- CUDA
- nvenc streaming, although apparently amd are using the new AV1 encoder this time round
- lack of being able to buy at MSRP in the UK
- frame generation, most reports from end users seem to rate it when not slowing footage down and/or picking out the "fake" frames

I'm looking forward to seeing how amd have addressed their RT performance though.
 
Well if it is anything like rdna 2 vs ampere, it will be for the same reason(s) again:

- better upscaling software in terms of performance and IQ and better uptake in games
- better RT performance (which is even more important now than it was back on ampere release given the uptake it has seen)
- CUDA
- nvenc streaming, although apparently amd are using the new AV1 encoder this time round
- lack of being able to buy at MSRP in the UK
- frame generation, most reports from end users seem to rate it when not slowing footage down and/or picking out the "fake" frames

I'm looking forward to seeing how amd have addressed their RT performance though.

"Better" when it comes to IQ is subjective, and if you’ve previously bought nVidia, it’ll probably stay that way.

Again, nVidia’s marketing is genius: convincing people to fork out for expensive hardware just so they get access to software that generates fake frames.

I’m not sure what AMD can do to pull people away from nVidia. I mean, I don’t use nvenc or CUDA or RT, so those don’t really matter to me, which gives me more choice.

You’ll get people citing the list above, who don’t even use those features, as a reason to not switch.
 
"Better" when it comes to IQ is subjective, and if you’ve previously bought nVidia, it’ll probably stay that way.

Again, nVidia’s marketing is genius: convincing people to fork out for expensive hardware just so they get access to software that generates fake frames.

I’m not sure what AMD can do to pull people away from nVidia. I mean, I don’t use nvenc or CUDA or RT, so those don’t really matter to me, which gives me more choice.

You’ll get people citing the list above, who don’t even use those features, as a reason to not switch.

Pretty much every site has agreed that DLSS is better... HUB recently did a video stating the same and raised the exact same strengths and weaknesses of FSR and DLSS as DF, so you can't resort to the "nvidia shills" excuse :p

Why does it matter? How do you measure performance? By FPS and frame latency, which is what frame generation improves.... We have had plenty of end users state that they are impressed with it (when not doing the in-depth testing where footage is slowed down/paused, like DF and HUB did). It's not perfect, but it's a good feature to have when you need more performance, and it will only improve, the same way FSR and DLSS improved from their first versions. I couldn't care less about how they do it as long as the end results during normal gameplay are good, which they appear to be going by "consumers" such as bang4buck. The main thing, again, is the choice to use it.
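
As a purely illustrative model of what that means for the numbers: interpolation-style frame generation inserts one generated frame between each pair of rendered frames, so the displayed frame rate roughly doubles while the game simulation (and input sampling) still runs at the rendered rate. The 60 fps base and the overhead figure below are made up for the example:

```python
def with_frame_generation(rendered_fps, gen_overhead_ms=0.5):
    """Toy model: one interpolated frame between every two rendered frames
    roughly halves the displayed frame time; the simulation rate is unchanged."""
    rendered_frametime = 1000.0 / rendered_fps             # ms per rendered frame
    displayed_frametime = rendered_frametime / 2 + gen_overhead_ms
    return {
        "rendered fps (simulation/input rate)": rendered_fps,
        "displayed fps": round(1000.0 / displayed_frametime),
    }

# Illustrative: a game rendering 60 fps natively, with a small per-frame gen cost
print(with_frame_generation(rendered_fps=60))
# -> displayed fps ~113, while the game still updates 60 times per second
```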

I don't use CUDA either, but people who do use it have literally no other choice.....

That list is just an example of why people most likely stick with nvidia and will continue to do so.

RT - well, the majority of games I have played over the past 2 years have used it to some extent, and the uptake is showing no signs of slowing down. Obviously everyone's game taste and preference of settings/fps etc. is different, so there will be people who don't care at all for this, but I don't think it will be too big an issue with rdna 3 anyway, as so far all the "rumours" seem to point to it at least matching/beating ampere RT perf.

Nvenc/streaming abilities - I don't really care for either. I was able to get good results with amd, although no doubt nvidia does much better, especially if you don't want to faff with settings, but it is a big reason for many gamers; you don't see many frequent twitch/youtubers using amd gpus...

Lastly, availability and pricing: you can have a great card which competes, but if you don't have the ability to buy at MSRP and will end up paying more for a gpu which might not be as good a "package" overall, that's a big problem imo.

As for what amd can do.... imo the answer is simple: amd just need to start innovating/bringing out their version of features sooner rather than later and stop playing catch-up all the time, especially when that catch-up spans several months and/or years..... I used to be the same and loved some "fine wine" back when I used amd cards (especially since I kept gpus for much longer back then than I do now), but now I want to get the best experience possible from day 1, especially with the prices of gpus nowadays, so I'm quite happy to pay a slight premium (within reason) to get that.
 
Has AMD got a proper GAMESTREAM alternative for streaming the PC elsewhere in the house and gaming? If so, CHOO CHOO for me...
 
I wouldn't say they are slow at innovating; I personally think AMD have released some great things recently. I was just on their website, and one year on, FSR is apparently supported in over 110 games. That uptake seems good to me. And it is nice that Nvidia cards can use FSR. AMD also released RSR, which has a big advantage over DLSS (and FSR): it doesn't need game developers to enable support for it. Pretty cool stuff. AMD also beat Nvidia to the punch with SAM. AMD are also at the forefront of chiplet architecture for GPU design, so they certainly are not playing catch-up there. Plus AMD have the one interface for everything, which is nicer to have than Nvidia's split approach.
 
I actually like nvidia's split interface.. it means the bare minimum stuff required to control your hardware always just works out of the box without external dependencies - no need to install VC redistributables and all that jazz.
Regarding MCM, I think even Intel is doing it with their Xe chips for machine learning, and nvidia too will be doing it when they believe the performance/cost tradeoff is worth it.

Also, the reason amd could use an MCM architecture in CPUs so successfully is that Intel is still a few generations behind in fabrication tech. If Intel achieves fab parity, the monolithic design Intel uses would be more efficient and performant than AMD's - there's no way a signal sent outside a chip is going to be more efficient than a signal processed within the chip. There's also a hard limit on how big a die area amd can aim for with its MCM approach.. so right now I feel it's just a big coincidence that the MCM approach is even working. Once Intel figures out its fabs, would amd regress to a monolithic die to maintain performance/efficiency parity? I think they would.
 
I wouldn't say they are slow at innovating; I personally think AMD have released some great things recently. I was just on their website, and one year on, FSR is apparently supported in over 110 games. That uptake seems good to me. And it is nice that Nvidia cards can use FSR. AMD also released RSR, which has a big advantage over DLSS (and FSR): it doesn't need game developers to enable support for it. Pretty cool stuff. AMD also beat Nvidia to the punch with SAM. AMD are also at the forefront of chiplet architecture for GPU design, so they certainly are not playing catch-up there. Plus AMD have the one interface for everything, which is nicer to have than Nvidia's split approach.

The only worthwhile somewhat recent thing I can think of that amd got to first (although not exactly a "new" thing, as it has always been there and enabled on linux) and have done a better job with is SAM/rebar (largely because they have access to the motherboard chipset drivers to do any extra tweaks). But then nvidia released rebar support literally within days too (which does show benefits for supported games and some unsupported games too; the gaps we saw with assassins creed and forza have been considerably closed with the recent dx12 nvidia driver, so I don't think SAM/rebar was the reason for those big gains we saw with amd?). Everything else, like FSR, freesync, RT, streaming/nvenc, rtx voice etc., arrived considerably later and in some aspects still isn't quite on par with nvidia's solutions yet.

BTW that FSR uptake includes FSR 1; uptake of FSR 2, and especially 2.1, is considerably lower.

Nvidia released their RSR competitor, NIS, not long after, although I wouldn't use either of them, especially when games are implementing FSR 2 and DLSS.

That's the difference between them: amd take longer, and when they do deliver, they are usually lacking on the quality front.

Time will tell how the chiplet design plays out. In theory it sounds great, but the one thing I am concerned about is what work game developers need to do to get the best from it. I don't know enough about how it all works, but if it is anything like CF/SLI, then support/optimisation could be poor, especially if developers are reliant on amd to do any optimisations.

Presume you are referring to geforce experience and the control panel vs amd's control panel? Agree, usability-wise amd are far better here and much quicker, but how often do people go into their drivers to change settings? I personally prefer the geforce experience overlay though. The performance overlay is nicer too, although MSI AB + RT beats both.
 
If AMD can do a 6700 XT successor with RX 6950 XT-like performance and improved RT for £400-£500, I think they will be winning pretty hard. In reality, prices will likely sit above the typical MSRP (unless AMD actually decides to sell a lot more reference models!).

This card in particular, I think, still offers very good 1080p performance and reasonable value (at least it does now).

AMD, like Nvidia, has an unfortunate habit of releasing far too many similar products (with different variations like RAM sizes and types), which tends to push up prices for consumers even more. This is likely to happen with the successors to the 6800, 6800 XT and 6900 XT.

Yep, I might actually buy that card rather than skipping this gen altogether.

The 6700 XT is still very good at 1440p, never mind 1080p.
 
Well if it is anything like rdna 2 vs ampere, it will be for the same reason(s) again:

- better upscaling software in terms of performance and IQ and better uptake in games
- better RT performance (which is even more important now than it was back on ampere release given the uptake it has seen)
- CUDA
- nvenc streaming, although apparently amd are using the new AV1 encoder this time round
- lack of being able to buy at MSRP in the UK
- frame generation, most reports from end users seem to rate it when not slowing footage down and/or picking out the "fake" frames

I'm looking forward to seeing how amd have addressed their RT performance though.


I'm looking forward to ******** on AMD's RT performance again though.

I hope you enjoyed your free trial of Nvidian speech translator.

:)
 