AMD RDNA3 unveiling event

Why? Just look at Ampere, where there was about a 10% difference between the non-Ti and Ti cards.

The specs between the 3080 and 3090 were quite close. That is not the case for the 4080 and 4090; the 4080 is significantly cut down compared to the 4090.
 
Why? Just look at Ampere, where there was about a 10% difference between the non-Ti and Ti cards.

That's a poor comparison. The non-Ti to Ti top GPUs had like a 10% difference in core count. The 4090 has over 60% more cores than the 4080!

In any game where the 4080 and 4090 are close, it's the 4090 failing to reach its potential for whatever reason.
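As a rough sanity check on that core-count gap, here is a minimal sketch; the core counts below are the commonly listed figures for each SKU and should be treated as assumptions rather than anything official.

```python
# Rough sanity check on the core-count gap discussed above.
# The CUDA core counts are the commonly listed figures for each SKU and are
# assumptions here; swap in your own numbers if your source differs.
CORES = {
    "RTX 4090": 16384,
    "RTX 4080 16GB": 9728,
    "RTX 3080 Ti": 10240,
    "RTX 3080": 8704,
}

def pct_more_cores(bigger: str, smaller: str) -> float:
    """Return how many percent more cores `bigger` has than `smaller`."""
    return (CORES[bigger] / CORES[smaller] - 1) * 100

print(f"4090 vs 4080 16GB: {pct_more_cores('RTX 4090', 'RTX 4080 16GB'):.0f}% more cores")
print(f"3080 Ti vs 3080:   {pct_more_cores('RTX 3080 Ti', 'RTX 3080'):.0f}% more cores")
```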
 
That's a poor comparison

The cards still existed despite being close in performance to their previous iterations, and they mostly got panned in reviews for being so close to the non-Ti variants. The 12GB 4080 originally had higher clock speeds than the 16GB variant, so maybe that makes up the deficit in certain games?

Also, I love how this is cropping up in the RDNA3 thread.
 
I could understand a CPU bottleneck at 1440p but not at 4K, not unless Nvidia's driver overhead is that bad, in which case the 7900XTX might overtake it anyway...
 
Yeah, but the 4080 is close to the 4090 in certain games because there is no CPU on the market that keeps up with the 4090 in every game (well, it's either the CPU or something else bottlenecking the 4090, but that's the reality: the 4080 is within 10% of the 4090 in some games, whether people like it or not).

Lol the 4090 is only ~12% faster than a 6950XT at 1080p as both are CPU bound. Jump up to 4K and it is ~65% faster because the bottleneck switches to the GPU. So cherry picking some CPU bound game or resolution and saying the 4080 is only 10% slower than a 4090 makes you look desperate, because even a 6950 or 3090Ti is “only” 10% slower than a 4090.

So no, the 4090 is not bottlenecked at 4K, and the 4080 will be about 35% slower given the massively cut-down specs (and based on Nvidia’s own marketing).

That leaves the 7900XTX plenty of room to be ~ 15-20% faster than a 4080 and 15-20% slower than a 4090 in raster at 4K.

At 4K the 4080 is only 25ish% faster than a 3090Ti.

I will caveat this with “based on the Nvidia and AMD marketing slides”.
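Since the thread keeps flipping between "X% faster" and "Y% slower", here is a minimal sketch of how the two convert; the example percentages are just the marketing-derived estimates quoted above, used as assumptions rather than benchmark results.

```python
# Convert between "A is p% faster than B" and "B is q% slower than A".
# The example figures are the marketing-derived estimates from the post
# above and are assumptions only, not measured results.

def slower_from_faster(pct_faster: float) -> float:
    """If A is pct_faster% faster than B, return how many % slower B is than A."""
    return pct_faster / (100 + pct_faster) * 100

def faster_from_slower(pct_slower: float) -> float:
    """If B is pct_slower% slower than A, return how many % faster A is than B."""
    return pct_slower / (100 - pct_slower) * 100

# "~65% faster than a 6950XT at 4K" -> the 6950XT is ~39% slower than the 4090.
print(f"{slower_from_faster(65):.0f}% slower")

# "the 4080 will be about 35% slower" -> the 4090 would be ~54% faster than the 4080.
print(f"{faster_from_slower(35):.0f}% faster")
```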
 
Yeah, but the 4080 is close to the 4090 in certain games because there is no CPU on the market that keeps up with the 4090 in every game (well, it's either the CPU or something else bottlenecking the 4090, but that's the reality: the 4080 is within 10% of the 4090 in some games, whether people like it or not).
I'd wait for proper benchmarks rather than some random guy on a Chinese site. How does he even have drivers when the card's not released yet and still under NDA for another week, so not even the press drivers would be available?
 
Really? I always use it when it's available on a 3080: more frames is better, or higher settings at the same frames. The main pull for Nvidia here is that you can use DLSS/FSR/XeSS, but if you go AMD you instantly lose any DLSS-only game improvements, so AMD needs to bring some beefier chops to the table to make up for that before I would jump ship.
True - and with DLSS your card uses a lot less power and runs cooler. I play at 4K and DLSS is a godsend.

I wouldn't be surprised if Nvidia actually pays game developers to not put AMD FSR in their games.
 
I don't use DLSS because the games I want to play with eye candy are single player and don't require massive FPS; I prefer picture clarity over DLSS.
As for competitive games, CSGO doesn't even have DLSS, so...
 
Lol the 4090 is only ~12% faster than a 6950XT at 1080p as both are CPU bound. Jump up to 4K and it is ~65% faster because the bottleneck switches to the GPU. So cherry picking some CPU bound game or resolution and saying the 4080 is only 10% slower than a 4090 makes you look desperate, because even a 6950 or 3090Ti is “only” 10% slower than a 4090.

So no, the 4090 is not bottlenecked at 4K, and the 4080 will be about 35% slower given the massively cut-down specs (and based on Nvidia’s own marketing).

That leaves the 7900XTX plenty of room to be ~ 15-20% faster than a 4080 and 15-20% slower than a 4090 in raster at 4K.

At 4K the 4080 is only 25ish% faster than a 3090Ti.

I will caveat this with “based on the Nvidia and AMD marketing slides”.
Also note the AMD slides state it's up to 1.5x-1.7x, not an average. The Nvidia card faces more driver overhead, so max FPS is impacted as a result.
 
Also note the AMD slides state it's up to 1.5x-1.7x, not an average. The Nvidia card faces more driver overhead, so max FPS is impacted as a result.
Performance will be a bit all over the board with RDNA3 due to the organization of the ALUs/SPs. 1.5-1.7x is likely an average across AAA titles. There will be games that see a bigger delta, and others a smaller one. It's going to be interesting to see direct game-for-game performance comparisons, and similarly, how AMD marketing positions those comparisons. It will also be interesting, assuming they stick with this implementation, to see whether software development evolves to optimize for it -- if the cart will drag the horse, so to speak -- across generations. My guess is that in console-originated games, it will.
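To illustrate why "up to 1.5x-1.7x" and "1.5x-1.7x on average" can diverge, here is a minimal sketch with made-up per-game uplift factors; the numbers are purely hypothetical, not leaked benchmarks.

```python
# Illustrative only: hypothetical per-game uplift factors of a new card over
# its predecessor, showing how an "up to" figure (the best case) relates to
# an average (geometric mean is the usual choice for performance ratios).
import math

# Hypothetical uplift factors, one per game -- NOT real benchmark data.
uplifts = [1.35, 1.45, 1.55, 1.60, 1.70]

best_case = max(uplifts)
geomean = math.prod(uplifts) ** (1 / len(uplifts))

print(f"up to:   {best_case:.2f}x")  # what an "up to" slide can legitimately claim
print(f"average: {geomean:.2f}x")    # what a review-style average would look like
```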
 
Isn't the game count pretty similar between the two?

Yeah, I mean, statements like that were valid two years ago. Currently it's just misinformation.

It seems to be sliding into this trap again. Anti-AMD posters drag the RDNA comparison to the 4090 and just want to talk about Ada cards. The two cards being released are in the thousand-pound segment; they won't be beating it in very many situations, but they'll be thereabouts. Where they do beat the competition so far is where it matters: on price! That's your option.
 
Performance will be a bit all over the board with RDNA3 due to the organization of the ALUs/SPs. 1.5-1.7x is likely an average across AAA titles. There will be games that see a bigger delta, and others a smaller one. It's going to be interesting to see direct game-for-game performance comparisons, and similarly, how AMD marketing positions those comparisons. It will also be interesting, assuming they stick with this implementation, to see whether software development evolves to optimize for it -- if the cart will drag the horse, so to speak -- across generations. My guess is that in console-originated games, it will.


The question becomes: will the PS5 Pro console use an RDNA4 multi-chiplet GCD?
 
Performance will be a bit all over the board with RDNA3 due to the organization of the ALUs/SPs. 1.5-1.7x is likely an average across AAA titles. There will be games that see a bigger delta, and others a smaller one. It's going to be interesting to see direct game-for-game performance comparisons, and similarly, how AMD marketing positions those comparisons. It will also be interesting, assuming they stick with this implementation, to see whether software development evolves to optimize for it -- if the cart will drag the horse, so to speak -- across generations. My guess is that in console-originated games, it will.

They also had charts with "up to" on RDNA2, which ended up being averages.

Some in the Nvidia camp really don't want it to be true, which I don't understand, because we should all want them to do well; it will only help prices in the long run.
 
They also had charts with "up to" on RDNA2, which ended up being averages.

Some in the Nvidia camp really don't want it to be true, which I don't understand, because we should all want them to do well; it will only help prices in the long run.
With RDNA 2 we could see their confidence, as they outright compared the 6900XT with the 3090 across a much wider variety of games. This time it's smoke and mirrors, with some numbers showing FSR, some without it, etc.

Not sure where the part about us wanting AMD to fail comes from. They need to catch up with Nvidia on the feature set, at least with the halo card.
 
Isn't the game count pretty similar between the two?
You can mod FSR 2.0 into most games that have DLSS. The problem isn't the game count but rather that it's worse than DLSS when the game is in motion.

In Cyberpunk, for instance, DLSS starts having issues with shimmering on neon signs in the distance. You can see a faint white border around the signs that flickers and disappears the moment you get close. FSR has the same flaw, except you also see more jagged edges and more ghosting in motion.
 