
FidelityFX Super Resolution in 2021

Which is why I've generally tried to go for cards that are close to the top end but cost a lot less, such as the 780 GHz edition. There are exceptions to that though - early adopters of cards like the 980 Ti, 1080 Ti or 2080 Ti specifically would have got a decent run for their money - those that bought those cards well into the lifecycle, though, not so much.

The 2080 Ti? Not so sure it was a great choice, considering how much more expensive it was compared to the 1080 Ti, or the 2080/2080S. I moved from a 1080 Ti (which I bought quite late, and it was still awesome for a long time) to a 2070S (just to try RT and DLSS - a big mistake of mine, totally not worth it for just those two things) with £0 added. Then from the 2070S to a 6800 (I wanted the XT but it wasn't available, so I got what I could) late last year, also for £0 added. Not a bad experience overall, considering the cost was pretty much £0. But yes, I tend to go for the x80 series when I can, occasionally the x70 if they're good enough (from both NVIDIA and AMD - I don't care much about the vendor). Ti models tend to be very silly priced when new on the market.
 

If you play games like CP2077 with ray tracing, the 2080 Ti comes in around the 3070 - the 1080 Ti falls a long way short. If you don't play games like that, then it's another matter.

EDIT: Only really applies to people who bought the card on release and had a good run out of it - later adopters paid a lot of money for a card which relatively quickly became midrange.
 

I don't consider RT in current games to be worth the hassle - it's still not true full RT but usually just overblown reflections without proper physics (like the water puddles in Watch Dogs etc.), where everything is tuned to be super polished and super reflective. From an artistic point of view it makes things look worse - because of said changes by artists, making everything super reflective just to show off RT effects, as otherwise most people wouldn't even notice them. That is NOT how we get more real-looking games; it's the opposite in my eyes. But we've seen that in the past with other effects - at first always super overblown and unrealistic, just to show off the tech, but over a few years they became more tamed, used more properly and just better. The same will happen with RT, but by then we'll have the next gen of GPUs and things will look different. Also, by then the current Ampere or AMD 6000 series won't be able to run them at sensible FPS, I reckon.

Though, when I look at Riftbreaker, they used RT in a very nice way, actually improving the experience with dynamic light, shadows, GI etc. But they also used lots of other tech from current DX12 Ultimate and posted a few nice blog entries explaining everything - how it works, why they implemented things this or that way, etc. Lots of good stuff to read and learn about the proper use of these technologies. :) Also, FPS is really fine with full RT and without the need to use FSR.
 
I'm still trying to grasp this tech and DLSS. If you only game at 1080p, what does this allow? Or is it essentially pointless at that res?

At 1080p it blurs quite a bit and, to be honest, AMD will need to work on improvements for lower resolutions. I would say 1080p FSR is only useful for those on a low-end GPU/APU where this is the only way to get the game playing at a decent FPS.

If/When to enable FSR is going to be on a case by case basis though, even at 4K.
 
For actual raw throughput in path traced GI, etc. the AMD cards are a full 50% slower than Ampere - whether we will see games push those features to that extent within the useful lifecycle of these cards is another matter.

nVidia also seem to have an internal DLSS model trained on ray tracing (i.e. for games which are fully path traced - not just stuff like CP2077) which is twice as efficient as the publicly released model so far. Aside from some licensing issues, I'm not sure why they've not made more of an effort to implement that in Quake 2 RTX - though there are some compatibility problems between the way Quake 2's source is licensed and the way DLSS is implemented and licensed, which would require nVidia to release more of the source than it seems they want to.


https://www.hardwareluxx.de/index.p...-journey-mit-raytracing-und-dlss-im-test.html

6800/6900 about 60-70% slower than 3080/Ti at max settings with RT.


I think in a pure path tracer like Quake 2 RTX you would see Nvidia GPUs pull 150-250% ahead.
The problem with testing RTX in most games is that you hit other bottlenecks that hide the performance differences. Geometry and rasterization differences become more apparent. As with all software optimization, if you make the slowest computation 2-3x faster then the next slowest becomes the bottleneck and the overall speed-up is only 20-30%.
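That last point is essentially Amdahl's law. A quick worked example (the stage fractions below are made-up numbers for illustration, not measurements from any game):

```python
def overall_speedup(stage_fraction, stage_speedup):
    # Amdahl's law: only one stage of the frame gets faster,
    # the rest of the frame time stays the same.
    return 1.0 / ((1.0 - stage_fraction) + stage_fraction / stage_speedup)

# An RT pass assumed to take 40% of the frame, made 3x faster:
print(overall_speedup(0.40, 3.0))  # ~1.36x overall (about 36% more FPS)

# A stage taking 35% of the frame, made 2x faster:
print(overall_speedup(0.35, 2.0))  # ~1.21x overall (about 21% more FPS)
```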
 
https://imgsli.com/NTg5MzU/0/1
This is the disappointing thing with FSR and the ridiculous hype levels around it. Since it is largely just your standard bilinear/bicubic upscaler with some good sharpening on top - the main difference being better geometric edge detail extracted using the depth map - you can just compare FSR to existing in-game linear scalers and get essentially the same results. Sure, if you look at 100% crops around certain objects FSR will look better, but we are all told that you shouldn't be looking at small image crops to judge image quality...
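To illustrate what "a standard linear upscaler with sharpening on top" means in practice, here is a toy numpy sketch of that baseline - my own minimal illustration, to be clear, not AMD's actual EASU/RCAS shaders:

```python
import numpy as np

def bilinear_upscale(img, out_h, out_w):
    # Plain bilinear resize of a 2D (grayscale, 0..1) image.
    h, w = img.shape
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

def unsharp_mask(img, amount=0.6):
    # Box-blur the image and add back the difference: a basic sharpener.
    h, w = img.shape
    pad = np.pad(img, 1, mode="edge")
    blur = sum(pad[dy:dy + h, dx:dx + w]
               for dy in range(3) for dx in range(3)) / 9.0
    return np.clip(img + amount * (img - blur), 0.0, 1.0)

# Baseline to compare FSR against: 720p -> 1080p upscale, then sharpen.
frame = np.random.rand(720, 1280)
out = unsharp_mask(bilinear_upscale(frame, 1080, 1920))
```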


Edit:
Or see this DOTA review - essentially, since FSR is very similar to a standard upscaler with a good sharpener, the results don't differ much. In this sense, Nvidia already offers FSR for all GPUs:
https://www.youtube.com/watch?v=8s4Hc1URB50

I do think FSR will be a little better - some of the geometric edge details look really good. Perhaps Nvidia will just release an improved sharpener to quieten the FSR crowd.
 
Mostly disappointing only to you, Rroff and people that have no intention of using AMD hardware or FSR.

The FSR image also looks better. What were the FPS in that scene?
Someone wanted some videos of the temporal instability of FSR:
https://www.youtube.com/watch?v=UiiykNB1TdM&t=47s
I see that temporal instability is also happening at native 1080p with FSR disabled in that game.
 
For actual raw throughput in path traced GI, etc. the AMD cards are a full 50% slower than Ampere - whether we will see games push those features to that extent within the useful lifecycle of these cards is another matter.

nVidia also seem to have an internal DLSS model trained on ray tracing (i.e. for games which are fully path traced - not just stuff like CP2077) which is twice as efficient as the publicly released model so far. Aside from some licensing issues, I'm not sure why they've not made more of an effort to implement that in Quake 2 RTX - though there are some compatibility problems between the way Quake 2's source is licensed and the way DLSS is implemented and licensed, which would require nVidia to release more of the source than it seems they want to.
Are you going by 3DMark RT tests for those numbers? Also, on the DLSS point - do they really have something better than DLSS 2.2 in terms of performance? It seems a bad time to be keeping their cards close to their chest, seeing as AMD is going to hog all the limelight for the next few months.
 
Nobody cares how hard it is to do. People on forums go crazy when one GPU is better than the other, and nobody cares how hard the teams worked to design their GPUs. See this thread with people panning FSR - I'm sure AMD worked hard to make it.


Intensive RT is not the same as good graphics. Good graphics also needs good design, art direction, models and textures. RT is technically optional because you can achieve good results with prebaked lighting.

But hey if all you care about is pretty reflections then good on you.

Bit of a difference though if our era lacks the required hardware capability, which is what stops us from getting that extra 30/40+ FPS across the board... There is only so much that software features and optimisation can do.

Also, I wasn't just referring to ray tracing but also to other extra graphical effects, which could also be pushed further now because of FSR/DLSS.

"pretty reflections".... yes because that is all ray tracing is for, right :rolleyes:
 
We need a card to do for RT what the 9700 Pro did for 16xAF and MSAA.

We do not have such a card yet, so while RT is a nice tech that will be the future, that future is not today - and no card available today is going to be usable in it.
 
FSR kills off DLSS because game developers want FSR.

If you set up two computers, one running the game at native resolution and the other with FSR, and ask people to just play the game on both without telling them about FSR, then ask them which one is native and which is FSR - they'd be unable to say.

It's due to how your eyes and brain see things: when you play a game you're not watching pixels, you're processing information - you watch what's in the centre of your vision while motion is processed outside the centre, which is one reason dynamic resolution scaling has been tried as a way to increase FPS.

Your brain processes information differently under different circumstances.
FSR, and soon the next generation adding temporal upscaling (UE5), is simply the better choice for upscaling in, for example, ray traced games.

DLSS is already dead, you just haven't figured that out yet.
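On the dynamic resolution scaling mentioned above: the basic idea is a feedback loop on frame time, dropping render resolution when frames run over budget and raising it back when there is headroom. A minimal sketch with made-up thresholds, not any particular engine's implementation:

```python
TARGET_MS = 16.7  # frame budget for 60 FPS

def update_render_scale(scale, last_frame_ms,
                        min_scale=0.5, max_scale=1.0, step=0.05):
    if last_frame_ms > TARGET_MS * 1.05:    # over budget: render fewer pixels
        return max(min_scale, scale - step)
    if last_frame_ms < TARGET_MS * 0.90:    # headroom: claw quality back
        return min(max_scale, scale + step)
    return scale
```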
 
I think in a pure path tracer like Quake 2 RTX you would see Nvidia GPUs pull 150-250% ahead.
The problem with testing RTX in most games is that you hit other bottlenecks that hide the performance differences. Geometry and rasterization differences become more apparent. As with all software optimization, if you make the slowest computation 2-3x faster then the next slowest becomes the bottleneck and the overall speed-up is only 20-30%.
Exactly - RT does not mean that raster is going away. You have to use various effects, not just RT, to build a scene in-game, at least for now and the foreseeable future.
 
https://www.hardwareluxx.de/index.p...-journey-mit-raytracing-und-dlss-im-test.html

6800/6900 about 60-70% slower than 3080/Ti at max settings with RT.


I think in a pure path tracer like Quake 2 RTX you would see Nvidia GPUs pull 150-250% ahead.
The problem with testing RTX in most games is that you hit other bottlenecks that hide the performance differences. Geometry and rasterization differences become more apparent. As with all software optimization, if you make the slowest computation 2-3x faster then the next slowest becomes the bottleneck and the overall speed-up is only 20-30%.

It's actually not that hard to find statements from game developers claiming what AMD also doesn't hide in their materials - RT for NVIDIA and RT for AMD require completely different approaches from a game's engine. If you just take RTX code and run it on AMD it will run, but really slowly (which is what you see in many NVIDIA-optimised games). If you optimise the code for AMD, SOME RT operations will run faster on AMD than on NVIDIA, though in the general outlook AMD will still be slower. Not by as much as most current games show, though. We'll see more AMD-optimised games with time, as consoles push for RT as well, I reckon. The gap should be much smaller then.
A good current example of a game well optimised for both platforms seems to be Riftbreaker - from the benchmarks I've seen, the 6800 is just behind the 3070 in RT at 1440p (9% difference) and the 6800 XT not far behind the 3080 at 1440p (19% difference). There's still a difference, but it's not as big as some people claim it to be.

FSR kills off DLSS because game developers want FSR.

If you set up two computers, one running the game at native resolution and the other with FSR, and ask people to just play the game on both without telling them about FSR, then ask them which one is native and which is FSR - they'd be unable to say.

It's due to how your eyes and brain see things: when you play a game you're not watching pixels, you're processing information - you watch what's in the centre of your vision while motion is processed outside the centre, which is one reason dynamic resolution scaling has been tried as a way to increase FPS.

Your brain processes information differently under different circumstances.
FSR, and soon the next generation adding temporal upscaling (UE5), is simply the better choice for upscaling in, for example, ray traced games.

DLSS is already dead, you just haven't figured that out yet.

Fully agreed with regard to how our eyes and brain perceive moving images in games - most people wouldn't even be able to spot any difference between 120Hz and 240Hz, nor between RT on or off (as Linus and others proved in both cases). Upscaling is also pretty hard to spot unless seen literally side by side, so both can be seen at once - in that case our brains will see the difference. That has also been shown and proven a few times. This is why nobody on consoles complains about upscaling, even though checkerboarding can produce very visible artefacts - devs add lots of motion blur to mask it, the TV is usually far from the player, and console games aren't as crisp anyway.
It's the same even with audio - most people can't spot the difference between an OK MP3 file and the same file encoded in a lossless format, unless they take a long time to analyse them side by side on superb audio equipment.

However, I wouldn't underestimate the money NVIDIA is constantly pouring into the market to make sure their tech is present in as many games as possible. Only when that stops might devs think twice about whether to waste time and money developing for only a small percentage of the market, or use free tech that costs them almost nothing, gives good enough results and works on everything. So far, history has shown that people will choose the cheaper, widely spread solution even if it produces inferior results - because most people DO NOT care one bit.

I'm still trying to grasp this tech and DLSS. If you only game at 1080p, what does this allow? Or is it essentially pointless at that res?

It works at 1080p, but all tests so far have shown that even on weaker GPUs (like APUs) it's better to just lower the details in games than to use FSR OR DLSS - both of these technologies work better the higher the source resolution they get to work with. Ergo, 1440p output seems to be the absolute minimum for good results, and 4K (and higher) seems ideal. At 1080p you'd get a very blurred image, with artefacts, even on the highest quality mode - as Hardware Unboxed showed recently in their video. In that case FSR is still better than just playing at 720p, but only if you've already put the game on its lowest settings and it's still too slow. Which is a rare case.
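To put numbers on that: FSR 1.0's published per-axis scale factors mean the actual render resolution drops off fast at lower output resolutions (a quick sanity check; the factors are the ones AMD documents for FSR 1.0):

```python
# FSR 1.0 per-axis scale factors for each quality mode.
FSR_MODES = {"Ultra Quality": 1.3, "Quality": 1.5,
             "Balanced": 1.7, "Performance": 2.0}

for target_w, target_h in [(1920, 1080), (2560, 1440), (3840, 2160)]:
    for mode, s in FSR_MODES.items():
        print(f"{target_w}x{target_h} {mode}: "
              f"renders at {round(target_w / s)}x{round(target_h / s)}")

# 1080p Quality mode renders at just 1280x720, while 4K Quality mode
# still has a 2560x1440 source - far more detail for the upscaler to keep.
```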

https://imgsli.com/NTg5MzU/0/1
This is the disappointing thing with FSR and the ridiculous hype levels around it. Since it is largely just your standard bilinear/bicubic upscaler with some good sharpening on top - the main difference being better geometric edge detail extracted using the depth map - you can just compare FSR to existing in-game linear scalers and get essentially the same results. Sure, if you look at 100% crops around certain objects FSR will look better, but we are all told that you shouldn't be looking at small image crops to judge image quality...

That is simply not true - a lot of reviewers did side-by-side comparisons, with HUB even trying Photoshop filters (various kinds of sharpening, among others) on a standard upscaled image to make it look as good as the FSR image, and that failed. FSR is a much better upscaling algorithm than simple bilinear plus sharpening; it's not even close. Also, if you haven't noticed, DOTA uses very simple graphics which just don't have that many details to compare - it looks very good even on the lowest quality settings, and upscaling it doesn't make much sense even on the slowest GPUs on the market (as HUB showed in another test).
In the end, FSR is still just a spatial upscaler, and I'm not sure why people expect it to work miracles.

That aside, most games don't even offer render scaling in their own options, and if you just drop the overall display resolution you also degrade GUI quality considerably - compared to FSR, which works inside the engine and does NOT touch GUI elements.
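The GUI point comes down to where the upscale sits in the frame: in-engine FSR runs before the UI pass. A rough sketch of the ordering (hypothetical engine API, purely illustrative):

```python
def render_frame(engine, scale=1.5):
    # The 3D scene is rendered below native resolution...
    low = engine.render_scene(int(engine.width / scale),
                              int(engine.height / scale))
    # ...then upscaled to native (EASU) and sharpened (RCAS)...
    frame = engine.fsr_upscale(low, engine.width, engine.height)
    # ...and only then is the HUD/UI drawn, at full native resolution.
    engine.draw_ui(frame)
    engine.present(frame)

# Simply lowering the display resolution instead would rasterize the UI
# at the low resolution too, which is why text and HUD elements degrade.
```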

Someone wanted some videos of the temporal instability of FSR:
https://www.youtube.com/watch?v=UiiykNB1TdM&t=47s

This is 1080p, and even AMD themselves say FSR is NOT a good solution for 1080p. Both DLSS and FSR require, for really sensible quality, a minimum 1440p output resolution and a minimum 1080p source resolution. And both work best at 4K and higher, with a minimum 1440p source resolution. This is just the way upscalers work (and always have), hence I don't know what your point is.
 