
FidelityFX Super Resolution in 2021

Performance has hardly moved in terms of price points for three years now: a 3060 is about as fast as a 2070 and costs the same, a 3060 Ti is slightly faster than a 2080 for the same money, and a 3070 is slightly slower than a 2080 Ti, with 3GB less VRAM, while coming in 100 quid cheaper. The 3090 is about 40% faster than a 2080 Ti but is over 100% more expensive, etc.
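
Back-of-the-envelope on what that does to value (a sketch that just restates the rough ratios above, normalised to the 2080 Ti; not measured data):

```python
# Quick value check on the claim above: a card that is ~40% faster but
# costs 100%+ more is worse value in perf-per-pound terms.
# The ratios are the rough figures from this post, not benchmarks.
baseline_perf, baseline_price = 1.00, 1.00  # e.g. a 2080 Ti, normalised
faster_perf, faster_price = 1.40, 2.00      # "40% faster, over 100% more expensive"

print(f"baseline: {baseline_perf / baseline_price:.2f} perf per pound")  # 1.00
print(f"3090-ish: {faster_perf / faster_price:.2f} perf per pound")      # 0.70
```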

Maybe they are preparing us plebs for our game streaming overlords!
 
Meanwhile, fine wine is real, and soon Big Navi will not need upscaling anyway. :)
https://twitter.com/3DCenter_org/status/1408771777962995718

Note the lack of RT performance charts. AMD's approach to DXR is 30-50% slower than Nvidia's, and no driver update will fix it. Given that DXR is the future of gaming, there is no real fine wine.

Also, if you read your own source, remember these are 4K cards.

Radeon RX 6800 XT vs. GeForce RTX 3080: +6.7% NV ➔ +2.3% NV
Radeon RX 6900 XT vs. GeForce RTX 3090: +10.9% NV ➔ +8.0% NV

So by your own source, Nvidia is faster at raster as well, even after the fine wine argument.

The source states: "The Radeon RX 6800 XT beats the GeForce RTX 3080 at Full HD and WQHD; only at 4K does the nVidia card remain (slightly) ahead." This was the conclusion of many websites in reviews at the time of the AMD 6000 series launch: the 3080 ahead at 4K only. Given these are priced as 4K cards, who do you think wins?
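
To put those deltas in perspective, here is a minimal sketch of how much ground the AMD cards actually made up, assuming the quoted figures are Nvidia's lead at launch and in the mid-2021 retest. A few percent of "wine", but Nvidia stays ahead either way:

```python
# How much the AMD card gained on its Nvidia rival between launch and
# the retest, assuming the quoted figures are Nvidia's lead each time.
def amd_relative_gain(nv_lead_launch: float, nv_lead_now: float) -> float:
    """Fractional gain of the AMD card relative to the Nvidia card."""
    return (1 + nv_lead_launch) / (1 + nv_lead_now) - 1

print(f"6800 XT vs 3080: {amd_relative_gain(0.067, 0.023):+.1%}")  # ~ +4.3%
print(f"6900 XT vs 3090: {amd_relative_gain(0.109, 0.080):+.1%}")  # ~ +2.7%
```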

Let's use a second source, TechPowerUp. We can take the 3080 Ti review, tested with the latest drivers.

Top cards from Nvidia and AMD, average FPS: https://www.techpowerup.com/review/evga-geforce-rtx-3080-ti-ftw3-ultra/27.html
The ASUS GeForce RTX 3080 Ti STRIX LC Liquid Cooled stands in for a custom 3080 Ti overclocked under a waterblock.

1080p
6800 XT - 202.8
6900 XT - 211.6

3080 FE - 194.6
3080 Ti - 205.3
EVGA GeForce RTX 3080 Ti FTW3 Ultra - 208.6
ASUS GeForce RTX 3080 Ti STRIX LC Liquid Cooled - 213.7
3090 - 208.2

1440p
6800 XT - 163.6
6900 XT - 171.3

3080 FE - 159.6
3080 Ti - 170.9
EVGA GeForce RTX 3080 Ti FTW3 Ultra - 174.4
ASUS GeForce RTX 3080 Ti STRIX LC Liquid Cooled - 178.9
3090 - 173.7

4K
6800 XT - 98.7
6900 XT - 105.8

3080 FE - 100.0
3080 Ti - 110.9
EVGA GeForce RTX 3080 Ti FTW3 Ultra - 114.8
ASUS GeForce RTX 3080 Ti STRIX LC Liquid Cooled - 118.1
3090 - 112.5

The 3080 Ti is faster than the 6900 XT, and the 3080 is faster than the 6800 XT. Remember, these are high-resolution gaming cards, aimed at 4K. The average FPS differences at lower resolutions are not enough to justify picking an AMD card for raster performance, and you never pick a GPU based on one game's performance anyway. You can get near-3080 Ti raster performance out of a decent overclocked 3080, and near-6900 XT raster performance out of an overclocked 6800 XT.

The RTX 3080 Ti is the perfect choice for 4K gaming at 60 FPS and above. It's probably the only resolution you should consider for this beast, because even at 1440p some titles were CPU-limited; for 1080p, it's definitely overkill.

So where is the fine wine? If you move on to current RT games, you find a 30-50% performance lead for Nvidia. Given how close the 3080 is to the 6800 XT in raster, and how far ahead it is of the 6900 XT in RT, there is no reason to buy a 6800 XT over an RTX 3080. Among the top cards, there is no reason to pick the 3090 or the 6900 XT over the 3080 Ti; the only reason to get a 3090 is 8K gaming, in games like Death Stranding.
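
Turning the 4K averages listed above into the percentage leads this argument rests on (a minimal sketch; the numbers are copied straight from the list):

```python
# Percentage leads computed from the TechPowerUp 4K averages above.
fps_4k = {
    "6800 XT": 98.7, "6900 XT": 105.8,
    "3080 FE": 100.0, "3080 Ti": 110.9, "3090": 112.5,
}

def lead(a: str, b: str) -> float:
    """Fractional lead of card a over card b at 4K."""
    return fps_4k[a] / fps_4k[b] - 1

print(f"3080 FE over 6800 XT: {lead('3080 FE', '6800 XT'):+.1%}")  # ~ +1.3%
print(f"3080 Ti over 6900 XT: {lead('3080 Ti', '6900 XT'):+.1%}")  # ~ +4.8%
```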
 
True, fine wine (when AMD fixes its stupid drivers) is gonna make Big Navi 100% faster in ray tracing so they can avoid upscaling. !RemindMe

There is no such thing as fine wine.

All that is really happening is optimisation of the GPUs over time. AMD learned a lot with GCN and past architectures about reducing rendering load.

I predict that over the lifetime of the next-generation consoles you will see more performance, once developers make better use of RDNA and Ryzen in their engines and games.
 
Isn't that exactly the point? "Fine wine" or driver optimisation, call it whatever we want, but we have seen in the past that AMD GPU performance increased further after launch than competing Nvidia products did. We need to wait around six more months before concluding that the same situation exists for the current generation as well. But there is one incontrovertible fact: at 1440p or below, AMD has the fastest card (reference vs. reference) on the market.
 
30-50% is a big range. What is the actual ray tracing advantage of Nvidia over AMD after taking into account RTX-specific ray tracing, Nvidia-sponsored games, etc.? I have seen it to be around ~30% on average.
Also, at 1440p the RX 6900 XT is faster than the 3080 Ti to a noticeable degree (not by much, but still).
https://youtu.be/n4_4SKtq_Gs?t=542

But I do agree with your point that the 6900 XT, 3090, and 3080 Ti are not to be considered from a price/performance perspective. If you want the fastest raster card at 1440p, though, you have to go with the 6900 XT.
 

I have been watching all the tech videos for Unreal Engine 5, one of the engines backing the consoles. Lumen provides global illumination for faster, better lighting, and Nanite handles geometry. Most of it is software-based and rasterised; the engine only uses the RT hardware when the hardware path is faster, and from what Epic are stating, software is far faster for most tasks. They also use a temporal upscaling method, which looks better than FSR. This is why Nvidia is pushing better-quality RT and better-quality DLSS upscaling: why get a PC with an expensive GPU if it offers little or nothing more than a console? This is why games like Control or Metro Exodus Enhanced Edition are important. Console games are going to look unbelievable, and with the better tools available they will be faster to develop: huge open-world games with detailed geometry, cities full of people.

PC gaming has to look far better and richer; that's why Nvidia is pushing RT and DLSS. Otherwise, why get a PC and spend all that extra money? This is where the AMD 6000 series is bad for PC gaming: it just offers faster-than-console gaming but not a lot more feature-wise, so PC gaming would be only a little better than console yet far more expensive. PC hardware must push better RT and higher resolutions with something like DLSS.
 
Is this a serious comment? I'm sure that if either of them could conjure up a GPU that performed an order of magnitude better at the same cost, they would do it. Added to that, game developers also want to appeal and sell to as wide an audience as possible, not only to those with state-of-the-art GPUs. This leads to techniques being developed that allow compromises to be chosen and the game to run as widely as possible.
Yes, why wouldn't it be?
They do just enough work to make the sales they want. See the RTX 2000 series.

If they push better performance at the top, the lower end should also increase. Everyone wins.
 
30-50% is a big range. What is the actual ray tracing advantage of Nvidia over AMD after taking into account RTX-specific ray tracing, Nvidia-sponsored games, etc.? I have seen it to be around ~30% on average.

It depends on the game, but for Metro Exodus it's approximately 30% for the 3080 over the 6900 XT and 50% for the 3090. Other reviews put the values between 30% and 50%. The overall conclusion is that AMD's DXR performance cannot match Nvidia's, and that the difference is in the hardware. If games use lighter RT features and limit their use of RT, AMD can come out on top; if a game uses RT heavily, Nvidia will win.

Unreal Engine 5 is more software-based: Lumen is fast and good quality, and Nanite provides massive detail on objects. You can improve the rendering by using RT, but it's slower. Without heavy RT use and powerful RT hardware, how does PC gaming justify its existence in the face of consoles with jaw-dropping graphics? If PCs are just a little faster than a console and the in-game experience is the same, why not just get a console?

This is why DLSS/FSR and RT are important to PC gaming in the long run. PCs can't just be faster consoles, as AMD seems to want; there has to be something that differentiates and defines PC gaming as something more, something worth the extra cost. That's what Nvidia is trying to provide, because its whole business depends on it.
 
A fair way off topic at this point, but there might be a new and different "fine wine" by the time we get to the end of this generation, given the big difference in VRAM between AMD and Nvidia. I suspect that in a couple of years we may see the 3070, and maybe even the 3080, struggling in some of the most demanding games more than the 6800 and 6800 XT.
 
It depends on the game, but for Metro Exodus it's approximately 30% for the 3080 over the 6900 XT and 50% for the 3090.
Actually, it's all over the place depending on which game you choose and how you limit the test scenario. I have seen everything from the RX 6900 XT being 19% slower than the 3080 to the 3090 being 70% faster than the 6900 XT in Metro EE; we can't draw any concrete conclusion from this except that Nvidia is faster in Metro EE. Also, what do we mean by "heavy RT"? There is an RT benchmark where both Nvidia and AMD get single-digit framerates but Nvidia wins: should that be taken as a heavy RT scenario? And on the limited use of RT, should RE Village be taken as an example of low RT use? There is no definite answer to this.

About the justification of PC gaming, I agree with you, but I will add that RT has done jack **** to advance PC gaming since its hardware debut ~3 years ago. It's only now that consoles have picked up RT that we have something being done on PC as well.

Note: Just to be clear I am not in any way suggesting AMD has better Ray Tracing than Nvidia atm.
 
"build better gpus"

Easier said than done.....
Nobody cares how hard it is to do. People on forums go crazy when one GPU is better than another, and nobody cares how hard the teams worked to design them. See this thread, with people panning FSR; I'm sure AMD worked hard to make it.

Thankfully, DLSS/FSR lets us enjoy ray-tracing-intensive games at 60+ FPS at 1440p or 4K now (and Nvidia users have been able to enjoy that for, what, the last 10 months, since games actually started to use ray tracing better), as opposed to having to wait potentially another 2-3+ years to get to the stage we are at now.
Intensive RT is not the same as good graphics. Good graphics also need good design, art direction, models, and textures. RT is technically optional because you can achieve good results with prebaked lights.

But hey, if all you care about is pretty reflections, then good on you.
 
Opened the AMD subreddit and noticed a curious thread: FSR is working on GPUs that are older than the official list.

Much older.

Seems all you need is a GPU that supports DX11.

https://old.reddit.com/r/Amd/comments/o7zt8h/despite_having_no_official_support_fidelityfx/

Keep expectations in check, of course; more FPS on a much older card might still be low FPS in a modern game that uses FSR :p

People seem to be confusing "officially supported" with "it won't work on other products, for sure". Supporting that many products (which means testing and optimising your software on them) would be very costly without much benefit, hence AMD officially supports only reasonably new GPUs. But they didn't block it from working on anything else, including Intel iGPUs.
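
That tracks with what FSR 1.0 is: a spatial upscaling shader pass, so the only real requirement is a GPU that can run the shader after rendering at a reduced resolution. A minimal sketch using the per-axis scale factors AMD published for the FSR 1.0 quality modes (values as I recall them from the documentation):

```python
# Input resolutions for AMD's published FSR 1.0 quality modes
# (per-axis scale factors; 4K output used as the example).
FSR_SCALE = {
    "Ultra Quality": 1.3,
    "Quality": 1.5,
    "Balanced": 1.7,
    "Performance": 2.0,
}

def fsr_input_resolution(out_w: int, out_h: int, mode: str) -> tuple[int, int]:
    """Resolution the game renders at before the FSR upscale pass."""
    scale = FSR_SCALE[mode]
    return round(out_w / scale), round(out_h / scale)

for mode in FSR_SCALE:
    w, h = fsr_input_resolution(3840, 2160, mode)
    print(f"{mode:>13}: {w}x{h}")
# Performance mode at 4K renders at 1920x1080, which is why even much
# older DX11-class cards can run it -- the pass itself is just a shader.
```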
 
Intensive RT is not the same as good graphics. Good graphics also need good design, art direction, models, and textures. RT is technically optional because you can achieve good results with prebaked lights.

What NVIDIA has said quite a few times in its marketing is that the main advantage of RTX is that "it just works". They did not mean it would be SO MUCH better, but that it would make devs' lives easier, and therefore lower production costs. Has it, really, though? I do not believe so. For one, GPUs that can use RT are still a minuscule part of the gaming market, and implementing good RT (as you said, with proper art direction, design, etc.) costs money, as devs still need to provide the older (prebaked) rendering for all other GPUs anyway. Effectively, they are doing double the work, so the cost is much higher. Hence, unless NVIDIA wants to do it themselves (send engineers to implement it) or sponsor the game (pay for it), most devs still aren't really that interested in RTX.

In the case of the new consoles, though, it's a huge market and actually worth implementing tech that works on them. In effect, we will most likely get mostly console titles with RT, which will work very well on both AMD and NVIDIA GPUs: DXR, not RTX. We might also see a very similar thing with FSR vs. DLSS: the former is pretty much free to implement (it takes almost no time and works on everything), the latter works on a tiny number of GPUs and is much more costly (time-consuming) to implement.

Time will show how the market really evolves, but history shows that, regardless of quality, the cheaper solution gains mass-market adoption.
 

To be honest, I see most of this generation struggling in the longer run: the AMD cards lack the performance for when games start to make more extensive use of ray-traced GI, etc., and the nVidia cards, 3090 aside, lack the VRAM for next-generation titles with more advanced effects.

One of the reasons I find the prices ludicrous: unless games hold back on progress, people are going to regret the kind of money they've spent on these GPUs, IMO.
 

This was almost always the case with GPUs and future games, especially if you bought top of the line (and a very expensive one): they age just as fast as the tier below, and for the price difference you could usually get a new, faster generation. Hence, I never buy computer parts for the future, only for here and now. GPUs age very quickly, CPUs too but less so. If a worthy replacement shows up (usually the next gen, 1.5-2 years later), I sell the current one, add a bit more money, and buy that next-gen one. That way I stay up to date as generations come, adding very little money each time, and generally get a better experience than owners of previous-gen top-of-the-line GPUs, even though I never buy top models. And in the current market I was able to sell my old 2070S for more than I bought it for, so the upgrade to my current RX 6800 cost me £0, even though I had to pay £80 above MSRP for it.

To summarise: if you buy smartly, not chasing the top, very expensive GPUs, you will always have good FPS in the games current at the time, while spending less money overall. Problems with "not good performance in the longer run" just do not apply with such an approach.
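
As a toy illustration of that sell-and-top-up approach (all figures are hypothetical round numbers, not real prices):

```python
# Toy comparison: buying flagships outright vs buying upper-midrange and
# selling the old card at each upgrade. All prices are hypothetical.
flagship_buys = [1100, 1400]   # top card each generation
midrange_buys = [500, 600]     # upper-midrange card each generation
midrange_resale = [400]        # old card sold when upgrading

print("flagship spend over two gens:", sum(flagship_buys))                   # 2500
print("sell-and-top-up spend:", sum(midrange_buys) - sum(midrange_resale))   # 700
```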
 

Which is why I've generally tried to go for the cards which are close to the top end but a lot less money, such as the 780 GHz Edition. There are exceptions to that, though: early adopters of cards like (specifically) the 980 Ti, 1080 Ti, or 2080 Ti would have got a decent run for their money; those who bought those cards well into the lifecycle, not so much.
 
What is the actual ray tracing advantage of Nvidia over AMD after taking into account RTX-specific ray tracing, Nvidia-sponsored games, etc.? I have seen it to be around ~30% on average.

For actual raw throughput in path-traced GI, etc., the AMD cards are a full 50% slower than Ampere; whether we will see games push those features to that extent within the useful lifecycle of these cards is another matter.

nVidia also seem to have an internal DLSS model trained for ray tracing (i.e. for games which are fully path traced, not just stuff like CP2077) which is twice as efficient as the publicly released model so far. Aside from some licensing issues, I'm not sure why they've not made more of an effort to implement that in Quake 2 RTX - though there are compatibility problems between the way Quake 2's source is licensed and the way DLSS is implemented and licensed, which would require nVidia to release more of the source than it seems they want to.
 