https://www.hardwareluxx.de/index.p...-journey-mit-raytracing-und-dlss-im-test.html
6800/6900 about 60-70% slower than 3080/Ti at max settings with RT.
I think in a pure path tracer like Quake 2 RTX you would see Nvidia GPUs pull 150-250% ahead.
The problem with testing RTX in most games is that you hit other bottlenecks that hide the performance differences. Geometry and rasterization differences become more apparent. As with all software optimization, if you make the slowest computation 2-3x faster, the next slowest becomes the bottleneck and the overall speedup is only 20-30%.
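That last point is basically Amdahl's law applied to a frame. A quick back-of-the-envelope sketch (the per-stage frame times below are made up purely for illustration, not measured from any real game):

```python
def overall_speedup(stage_times, stage, factor):
    """Total frame speedup when one stage of the frame gets `factor`x faster.

    stage_times: dict of hypothetical per-frame costs in milliseconds.
    """
    before = sum(stage_times.values())
    after = before - stage_times[stage] + stage_times[stage] / factor
    return before / after

# Hypothetical per-frame costs (ms) for a hybrid raster + RT game:
frame = {"raster": 8.0, "rt": 6.0, "shading": 4.0}

# Making only the RT stage 3x faster: 18ms -> 14ms, i.e. ~1.29x overall,
# even though the RT hardware itself got 3x faster.
print(overall_speedup(frame, "rt", 3.0))
```

So a GPU with 3x faster RT hardware can still show only a 20-30% lead in a hybrid game, which is exactly why pure path tracers expose the gap better.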
It's actually not that hard to find statements from game developers confirming what AMD also doesn't hide in their materials - RT on NVIDIA and on AMD require completely different approaches from the game engine. If you just take RTX code and run it on AMD it will run, but really slowly (which is what you see in many NVIDIA-optimised games). If you optimise the code for AMD, SOME RT operations will run faster on AMD than on NVIDIA, though overall AMD will still be slower. Not by as much as most current games show, though. We'll see more AMD-optimised games with time, as consoles push for RT as well, I reckon. The gap should be much smaller then.
A good current example of a game well optimised for both platforms seems to be Riftbreaker - from the benchmarks I've seen, the 6800 is just behind the 3070 in RT at 1440p (9% difference) and the 6800XT not far behind the 3080 at 1440p (19% difference). There's still a gap, but it's not as big as some people claim it to be.
FSR will kill off DLSS because game developers want FSR.
If you set up two computers, run the game natively on one and with FSR on the other, and ask people to just play on both without telling them which is which,
then ask them which one is native and which is FSR, they'd be unable to say.
It's down to how your eyes and brain see things: when you play a game you're not watching pixels, you're processing information. You watch what's in the center of your vision, while motion is processed outside the center - which is one reason dynamic resolution scaling has been tried as a way to increase fps.
Your brain processes information differently under different circumstances.
FSR, and soon the next generation adding temporal upscaling (UE5), is simply the better choice for upscaling in, for example, ray-traced games.
DLSS is already dead, you just haven't figured that out yet.
Fully agreed regarding how our eyes and brain perceive moving images in games - most people wouldn't even be able to spot the difference between 120Hz and 240Hz, nor between RT on or off (as Linus and others have proven in both cases). Upscaling is also pretty hard to spot unless seen literally side by side, so both can be compared at once - in that case our brains will see the difference. Which has also been shown and proven a few times. This is why nobody on consoles complains about upscaling, even though checkerboarding can produce very visible artefacts - devs add lots of motion blur to mask it, the TV is usually far from the player, and console games aren't that crisp anyway.
It's the same even with audio - most people can't spot the difference between a decent MP3 and the same file encoded in a lossless format, unless they take a long time to analyse them side by side, on superb audio equipment.
However, I wouldn't underestimate the money NVIDIA is constantly pouring into the market to get their tech into as many games as possible. Only when that stops might devs think twice about whether they want to waste time and money developing tech for only a small percentage of the market, or use free tech that costs them almost nothing, gives good enough results, and works on everything. So far, history has shown that people will choose the cheaper and more widespread solution even if it produces inferior results - because most people DO NOT care one bit.
I'm still trying to grasp this tech and DLSS. If you only game at 1080p, what does this allow? Or is it essentially pointless at that res?
It works at 1080p, but all tests so far have shown that even on weaker GPUs (like APUs) it's better to just lower details in games than use FSR or DLSS - both technologies work better the higher the source resolution they get to work on. Ergo, 1440p seems to be the absolute minimum for good results and 4k (and higher) seems ideal. At 1080p you'd get a very blurred image with artefacts, even on the highest quality setting - as Hardware Unboxed showed recently in their video. In that case FSR is still better than just playing at 720p, but only if the game is already on lowest settings and is still too slow. Which is a rare case.
https://imgsli.com/NTg5MzU/0/1
This is the disappointing thing with FSR and the ridiculous hype levels around it. Since it is largely just your standard bilinear/bicubic upscaler with some good sharpening on top, the main difference being better geometric edge detail extracted using the depth-map, you can just compare FSR to existing in-game linear scalers and get essentially the exact same results. Sure, if you were to look at 100% crops around certain objects FSR will look better but we are all told that you shouldn't be looking at small image crops to judge image quality....
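For what it's worth, the "standard bilinear upscaler with sharpening on top" baseline being compared to here can be sketched in a few lines of pure Python. To be clear, this is NOT FSR's actual EASU/RCAS passes (those are edge-adaptive and more sophisticated); it's only the generic two-step pipeline - upscale, then unsharp-mask - that in-game linear scalers plus a sharpen filter give you:

```python
# Generic spatial upscale + sharpen sketch, operating on a 2D grayscale
# image stored as a list of lists of floats. Illustrative only.

def bilinear_upscale(img, scale):
    """Upscale by `scale` using bilinear interpolation."""
    h, w = len(img), len(img[0])
    out_h, out_w = int(h * scale), int(w * scale)
    out = []
    for y in range(out_h):
        sy = min(y / scale, h - 1)          # source y coordinate
        y0, fy = int(sy), sy - int(sy)
        y1 = min(y0 + 1, h - 1)
        row = []
        for x in range(out_w):
            sx = min(x / scale, w - 1)      # source x coordinate
            x0, fx = int(sx), sx - int(sx)
            x1 = min(x0 + 1, w - 1)
            # Blend the four surrounding source pixels.
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

def unsharp_mask(img, amount=0.5):
    """Sharpen by adding back the difference from a 3x3 box blur."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, n = 0.0, 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += img[yy][xx]
                        n += 1
            blur = acc / n
            out[y][x] = img[y][x] + amount * (img[y][x] - blur)
    return out

# Usage: a tiny 2x2 checker pattern upscaled to 4x4, then sharpened.
small = [[0.0, 1.0], [1.0, 0.0]]
big = unsharp_mask(bilinear_upscale(small, 2.0))
```

FSR's real edge-adaptive pass is what recovers the better geometric edges mentioned above; everything else in the pipeline is conceptually this.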
That is simply not true - a lot of reviewers did side-by-side comparisons, with HUB even trying to use Photoshop filters (various kinds of sharpening, among others) on a standard upscaled image to make it look as good as the FSR image, and failing. FSR is a much better upscaling algorithm than simple bilinear with sharpening; it's not even close. Also, if you haven't noticed, DOTA uses very simple graphics that just don't have many details to compare - it looks very good even on the lowest quality settings, and upscaling it doesn't make any sense even on the slowest GPUs on the market (as HUB showed in another test).
In the end, FSR is still just a spatial upscaler, and I'm not sure why people expect it to work miracles.
That aside, most games don't even offer render scaling in their options, and if you just lower the overall resolution you also significantly degrade GUI quality compared to FSR, which works inside the engine and does NOT touch GUI elements.
Someone wanted some videos of the temporal instability of FSR:
https://www.youtube.com/watch?v=UiiykNB1TdM&t=47s
This is 1080p, and even AMD themselves say FSR is NOT a good solution for 1080p. For really sensible quality, both DLSS and FSR require a final resolution of at least 1440p and a source resolution of at least 1080p. And both work best at 4k and higher, with a source resolution of at least 1440p. This is just the way upscalers work (and always have), so I don't know what your point is?