What do gamers actually think about Ray-Tracing?

I am not talking about the video Calin Banc commented on at all; I haven't even linked it. I am talking about this channel: https://www.youtube.com/@ThreatInteractive - it's not a typical YouTuber, running a YouTube channel isn't their main job. It's a founding member of a development group working on actual games, and also a very active member of the Unreal Engine community (easy to check) for quite a while now. The guy seems to know very well what he is talking about, and his YouTube videos are just him showing in practice, with real examples and proper tools, the frames generated by games in, for example, UE - not on screenshots or video but live in the editor - to show exactly what the engine is doing at any given moment and what it does to the visuals. It's not marketing talk for any given vendor either.

They themselves are working on a realistic-looking FPS game and chose UE as it's the most popular engine currently, then noticed how badly a lot of things are implemented in it. One example from UE5.5: it processes complex clouds over 800% slower than UE5.4, and even though it was reported to Epic in beta, they never fixed it. Loads of things like that kill FPS in modern games for no visual reason - just for lazy reasons. But people like DF, who have NO clue about actual game development, only see the visuals and assume that's just how it has to be, completely ignoring that it's a bad development choice, not a necessity.

In other words, again, let's not confuse modern good visuals with lazy development, bugs in UE code and bad (from a performance standpoint) design and implementation in games. These are very different things.
I guess you're talking about this video.

I'm no game developer, but I'd love to see some of them sitting around a round table talking about what works and what doesn't for them, especially across different engines.

With that said, if I get it right, he's proposing a 1080p image (leaving aside that he's talking about performance scaling with resolution and frame rate purely on TFLOPs, which don't really scale linearly), with what he calls "good" AA (but doesn't show it) and let the monitor/TV use its own upscaler (probably not the greatest) instead of something like DLSS which can be very good at upscaling from 1080p to 4k... Of course, he's advocating for consoles where you'll normally get FSR, but still, talking down DLSS, AI and hardware upgrades (PS5 Pro) - as in, you don't need them - while leaving out the Series S and ignoring stronger hardware is a bit amusing to me. But hey, his game, his target - that just doesn't have to be the same for everyone!

Dismissing "stronger" cards like the 4070ti simply because he's focusing on consoles and GPUs around that power doesn't say much. By the time their game will see the light of day (if it will), 4070ti will probably fall into x60 series performance of that "current" gen or below... Sure, will probably be good for old consoles hw, but do we really care around here? And even console players seem to be "fine" by default with 30fps and whatever resolution the box can output...

Moving on, he talks about some techniques used in a few games, where you can see in Quantum Break how the AO artefacts appear around the character's head. One solution would be/was HBAO+... a closed-source technique, aka NVIDIA's tech if I'm not mistaken :D MXAO, on the other hand, seems like a nice option, and Bayer in general for Lumen.
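For anyone wondering what these AO techniques actually compute, here's a rough toy sketch of the general screen-space AO idea (my own made-up Python, not HBAO+ or MXAO themselves): for each pixel, sample nearby points in the depth buffer and darken the pixel based on how many of them sit in front of it. The halo around the character's head is basically this test firing on depth discontinuities.

```python
import numpy as np

def simple_ssao(depth, radius=4, samples=16, bias=0.02, rng=None):
    """Toy screen-space ambient occlusion on a depth buffer.

    depth: 2D array, smaller = closer to the camera.
    Returns an occlusion map in [0, 1]; 1 = fully lit, 0 = fully occluded.
    """
    rng = rng or np.random.default_rng(0)
    h, w = depth.shape
    ao = np.ones_like(depth, dtype=np.float32)

    # Random 2D offsets inside a disc of `radius` pixels.
    angles = rng.uniform(0.0, 2.0 * np.pi, samples)
    radii = radius * np.sqrt(rng.uniform(0.0, 1.0, samples))
    offsets = np.stack([radii * np.cos(angles), radii * np.sin(angles)], axis=1)

    for y in range(h):
        for x in range(w):
            occluded = 0
            for dx, dy in offsets:
                sx = min(max(int(x + dx), 0), w - 1)
                sy = min(max(int(y + dy), 0), h - 1)
                # A neighbour noticeably closer to the camera than this pixel
                # counts as an occluder - this is where the dark halo around
                # a character's head comes from.
                if depth[sy, sx] < depth[y, x] - bias:
                    occluded += 1
            ao[y, x] = 1.0 - occluded / samples
    return ao

# Tiny synthetic scene: a flat wall with a box floating in front of it.
depth = np.full((64, 64), 1.0, dtype=np.float32)
depth[20:44, 20:44] = 0.5
print(simple_ssao(depth).round(2)[30:34, 16:24])
```

The real techniques differ in how cleverly they pick and weight those samples, but the basic depth-comparison idea is the same.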

The Lumen talk... well, in software it won't be great/ideal for the scene he's highlighting (or, it seems only logical, it would have been done already by others), and he also doesn't test it in HW. Playing Immortals of Aveum (which apparently has it), I'm failing to see all the issues he's talking about, outside some minor noise on a very specific material and level. Ah, and let's not forget he's disabling other effects (TAA?), which makes it look that bad :))

Pointing to The Division as an example isn't ideal either, I'd say: it looks decent if you look at it broadly, but it still has the same issues - missing shadows where they should be, poor reflections where the reflected image actually changes a second or two after you start looking at it, and it's not even remotely accurate...

The 1st and 2nd pics show the shortcomings of the GI


Another fail from the GI, where it illuminates the scene way, way too much... https://postimg.cc/G8dWtwNq

1st below is the original reflection,
2nd is the reflection it turns into after a while,
3rd is what's actually behind the character and what should actually have been reflected


Radiance Cascades seems possibly fine, but since he says it was usually done in 2D and has only recently seen some 3D advancements, of course it hasn't been talked about before.

Then in closing, he shows Stalker 2 (via a Reddit post) running and looking pretty bad without any AA (TAA, DLSS, DLAA, FSR). Well, that's on AMD's side to improve its FSR. Looks great with DLSS!

"These problems won't be focused on or fixed by meanstream industry, because there is no market incetinve"... I mean, Sony launched PS5 Pro with AI upscaling...
 
@Tinek ,
I do get his points. Yes, you can get plenty of performance using other known or more experimental raster features. That's a given. But you're also limited by how raster works. Also, funny how he NEVER mentions Metro Exodus while he goes on and on about other games...
Worth mentioning that staff able to squeeze the last bit of performance out of your engine aren't easy to find. Going for UE5 could also be a move to find talent more easily - people with experience. Nanite and RT/PT help streamline the game-making process in their own way, with their own pluses and minuses. Personally, I do like and appreciate them.

What I hated about UE was more the stutter and poor GPU usage at times - which he never addresses, so I guess those are fixable by devs :))
 
Yes, I am specifically on about UE5 (or UE in general), as it has never-ending flaws which typically only devs and Epic themselves can do anything about, but the point is that games like Indiana Jones on id Tech, and a bunch of others on in-house engines, have demonstrated that ray tracing and path tracing can be high quality and high performance at the same time.
The last game I've seen with full RT and performance good enough to justify it, along with no accompanying issues (lagging GI, noise all over the place, a very blurry image caused by TAA on most effects etc.), was Metro LL - that IS the benchmark for me of what's possible. More modern games just turn on RT/PT and that's it. Are they technically better? No, in most cases it's a regression in my eyes - there's no better physics, no better AI, no more realistic RT effects, there are just much higher system requirements. Did RT suddenly become more computationally expensive, or is it a case of lazy devs using off-the-shelf solutions without actually doing any work themselves?

From the NVIDIA article about said Metro: "Ordinarily, this level of detail would require gigabytes of system memory and GPU VRAM, but thanks to a highly efficient streaming system Last Light’s world uses less than 4GB of memory, and less than 2GB of VRAM, even at 2560x1440 with every setting enabled and maxed out." I don't think I need to add more here.

We're not moving forward with graphical fidelity, we're moving forward with dev laziness and people like DF excusing it instead of pushing for better development, all while blaming gamers for having too weak hardware. The audacity of that, when we already know current hardware is fast enough to handle it, just horribly utilised. Then the 5000 series arrives and what will we get? Even less FPS in the next games, with the same overall fidelity, I can bet. Because 30FPS is more cinematic? :)

The alternative is to allow devs to remain lazy and not enforce something like RTGI, which then results in the stutterfest and/or high latency we see in every single UE5-powered game currently out on launch day, with months of patching needed to maybe fix some of the technical issues. Only about 3 games in memory didn't launch with massive issues, and those were indie titles as well, versus the mess from AAA publishers/devs that we often see at release.
This is exactly why the devs I linked are working on their own UE5 version (let's call it a fork), which is said to fix all kinds of such issues, along with increasing performance and giving devs proper tools to optimise their games. They even explain why UE5 is so badly done - it's designed for Fortnite, and it's very visible; hence it works great there, on that type of gameplay and graphics. It's also designed for Hollywood, where performance doesn't matter as much as fidelity in specific use cases. Any time you put it to use on a different type of game, it starts showing all kinds of issues. However, it's also a very popular, very well documented engine, hence most devs are moving to it and we will see more and more games on it. This can't be avoided, but the issues with it can be fixed, which some games already do (not many yet).

RT is not the problem here, the engine and time dedicated to fixing the issues in the engine by a dev is.
The thing is, RT is great, but using it to render ALL light in the whole game is just a very inefficient thing to do. Especially in static games, with no moving lights, no destruction, no physics - you can achieve very good quality GI with many other methods (many based on voxels and previously pushed by NVIDIA, but then ditched because RTX happened and the 2000 series wasn't selling). None of them mean baked-in lights; it just means you don't need full RT to do the same job, when much more efficient methods with near identical results have been available for a while now. The problem is they're not handed to devs on a plate like Lumen is - so devs just turn on Lumen and don't bother with anything else. And Lumen is often far from ideal, especially the software version.
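To give an idea of what I mean by voxel-based GI (the VXGI-style approach NVIDIA used to push), here's a very rough sketch of the core idea, with made-up data and function names, nothing close to any real implementation: voxelise the scene's lighting into a 3D grid, then from each shaded point step along a few wide cones, accumulating radiance until the cone gets blocked. Real implementations sample a mip pyramid so wider parts of the cone read coarser voxels; this sketch just grows the step size instead.

```python
import numpy as np

def cone_trace(radiance, opacity, origin, direction, max_dist=32.0, aperture=0.5):
    """Toy voxel cone trace through a 3D grid.

    radiance: (N, N, N, 3) emitted/bounced light per voxel.
    opacity:  (N, N, N) how much each voxel blocks the cone (0..1).
    """
    n = opacity.shape[0]
    direction = direction / np.linalg.norm(direction)
    colour = np.zeros(3)
    transmittance = 1.0  # how much of the cone is still unblocked
    dist = 1.0
    while dist < max_dist and transmittance > 0.05:
        p = origin + direction * dist
        i, j, k = (int(np.clip(c, 0, n - 1)) for c in p)
        a = opacity[i, j, k]
        colour += transmittance * a * radiance[i, j, k]
        transmittance *= (1.0 - a)
        dist += 1.0 + aperture * dist  # the cone widens, so step further each time
    return colour

# Tiny made-up scene: one reddish emissive block in a 16^3 grid.
n = 16
radiance = np.zeros((n, n, n, 3))
opacity = np.zeros((n, n, n))
radiance[10:13, 8, 8] = [1.0, 0.2, 0.2]
opacity[10:13, 8, 8] = 0.8

# Gather indirect light at a surface point by tracing a few cones over its hemisphere.
point = np.array([4.0, 8.0, 8.0])
cones = [np.array(d, dtype=float) for d in ([1, 0, 0], [1, 1, 0], [1, -1, 0], [1, 0, 1], [1, 0, -1])]
gi = sum(cone_trace(radiance, opacity, point, d) for d in cones) / len(cones)
print("approx indirect light at point:", gi.round(3))
```

A handful of cones per pixel replaces the hundreds of rays plus denoising that full RTGI needs, which is why it ran on much weaker hardware.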

I have not seen the game Threat Interactive are actually working on; they talk a big game, but what have they demonstrated other than breaking down already existing games? Their channel has nothing other than pointing out flaws in Unreal, which is fine, but if they are working on some super new FPS game, then where are the improvements they claim to have, and why aren't they being shown off?
The channel is made for laymen; all the technical bits and bobs they post on the UE forums, where they often push Epic for changes or file bug reports etc. They're also not the only group putting forward the things I mentioned, but they do show in detail what can be done in UE if you dive a bit deeper as a dev instead of just clicking Lumen ON. In their videos you can find examples of specific effects and algorithms that are good for specific uses. And they did not reinvent the wheel here - this is all knowledge that devs in the past had and used, like in Metro LL which I mentioned. And then cost cutting happened and it stopped.

They don't even have a website, just a wordpress landing page.
It's irrelevant to the topic, though, isn't it? :) Again, there's no theory talk here; they have exact examples, with exact names of the algorithms used in each case, compared against a PT render of the same scene as the "true" image, to make sure there's little to no visual difference. They've shown very well not just UE's flaws but what devs do as shortcuts in all kinds of games, and why there's, for example, no real transparency used in games anymore but instead cheap effects with horrible artefacts masked by ever-present TAA, which you can't turn off and which makes the whole image very fuzzy/blurry. It's very visible in a lot of games these days. They've even shown side by side how you can tweak TAA in UE and get better quality AA (especially in movement) than with DLAA. Ergo, it's visual evidence of what can be achieved if one tries, versus the marketing BS you are talking about. :)
I will continue to stand by devs like Machine Games, Remedy, CDPR etc who clearly know what they are doing with modern tech like RT/PT until others demonstrate that what they have is superior.
That's your choice, but you're comparing unaffiliated devs showing detailed things in game engines versus marketing talks from corporations. I know whom I trust more, especially with all the visual, detailed evidence. Also, don't forget about the Metro LL devs, whom I quoted at the top of this post. Can you imagine a modern game looking like Metro LL running on 2GB of vRAM? :)
 
It's not irrelevant, because all they have done is make claims and sound convincing to some, yet they have actually shown no project demonstrating what they claim to be true.

Tag me when this proof emerges please. Until then I buy not a single word of it.

We have proof in titles that RT and PT are great, they have been shown in a bunch of games to date, and now we are seeing RT being the default for GI at the baseline, and that's not even in UE powered engines. Yet we still await this magic raster rendering method that outsmarts RT/PT, instead we have pockets of the online community who make claims and don't really show anything to back them up.

Like I said, show the proof, and the followers will come flocking, until then, it's just eFodder.
 
@Tinek ,
I do get his points. Yes, you can get plenty of performance using other known or more experimental raster features.
Eh, define "raster"? A lot of the times raster is understood by common person as baked in lights and the likes. Often times he's talking more or less about RT hybrid approach - use bits of RT and mix with other algorithms. I wouldn't call voxel based GI as "raster" - not that long ago NVIDIA was pushing it as their own great solution to super realistic lighting in games, but then decided hardware RT will sell their GPUs better (as competition couldn't run it at that point), so focused on that instead. Both methods use RT at their core, just with different approach. There's plenty of other examples like that. The main difference is performance - the RT pushed by NVIDIA is horribly inefficient and won't run well on hardware like 4060 for example, whereas the alternative will without issues.

That's a given. But you're also limited by how raster works.
You're not. You're limited by the performance available to the average gamer. Raster, RT or PT is just math - the faster you can compute, the fancier the way of calculating light you can use. That's all there is to it. Raster, at its core, is already a very simplified form of RT, and there's nothing in the GPU itself that prefers one over the other - it's all just math.
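To illustrate the "it's all just math" point with a toy example (my own snippet, not tied to any engine): rasterisation projects the triangle's vertices onto the screen and asks which pixels land inside, while ray tracing fires a ray per pixel and asks whether it hits the triangle. Same geometry, same kind of arithmetic, just looped in a different order.

```python
import numpy as np

# One triangle in camera space (camera at the origin, looking down -z).
tri = np.array([[-1.0, -1.0, -5.0],
                [ 1.0, -1.0, -5.0],
                [ 0.0,  1.0, -5.0]])

def rasterise_vertex(v, focal=1.0):
    """Rasterisation direction: project a 3D vertex onto the image plane."""
    return np.array([focal * v[0] / -v[2], focal * v[1] / -v[2]])

def ray_hits_triangle(origin, direction, tri, eps=1e-8):
    """Ray tracing direction: Moller-Trumbore ray/triangle intersection."""
    e1, e2 = tri[1] - tri[0], tri[2] - tri[0]
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:
        return False          # ray parallel to the triangle
    inv = 1.0 / det
    s = origin - tri[0]
    u = np.dot(s, p) * inv
    if u < 0.0 or u > 1.0:
        return False
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv
    if v < 0.0 or u + v > 1.0:
        return False
    return np.dot(e2, q) * inv > eps  # hit must be in front of the ray origin

# Same triangle, both ways round.
print("projected vertices:", [rasterise_vertex(v) for v in tri])
ray_dir = np.array([0.0, 0.0, -1.0])   # pixel in the middle of the screen
print("centre ray hits triangle:", ray_hits_triangle(np.zeros(3), ray_dir, tri))
```

Hardware RT units essentially accelerate the second loop (ray/triangle and BVH tests); the rest is the same arithmetic either way.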

Also, funny how he NEVER mentions Metro Exodus while he goes on and on about other games...
Because Metro is a very well optimised game, where the devs really cared about what they were doing and how. It uses very little RAM and vRAM and runs on a large variety of hardware. This is the benchmark of what is possible - modern games are WAY worse in that regard.

Worth mentioning that staff able to squeeze the last bit of performance out of your engine aren't easy to find. Going for UE5 could also be a move to find talent more easily - people with experience. Nanite and RT/PT help streamline the game-making process in their own way, with their own pluses and minuses. Personally, I do like and appreciate them.
That's his point too - these are tools that come off the shelf in UE, hence everyone just turns them on and that's it. No tweaking done, no settings adjusted, everything at defaults in pretty much all UE5 games. Nanite is much better than just slapping crazy high-polygon meshes on the screen, but it's much worse than properly optimised meshes. Lumen "just works", but that also sounds exactly like NVIDIA's selling point for high-end GPUs - it just works, the more you buy... :) So who is it really serving, the players OR the lazy devs? Games cost more and more, yet less and less time goes into actually using the engine properly, optimising, etc. We get games that most gamers can't even play properly anymore, and consoles are back in the 30FPS world - total regression. And don't tell me games at 30FPS are of higher fidelity. :P

What I hated about UE was more the stutter and poor GPU usage at times - which he never addresses, so I guess those are fixable by devs :))
He does address it in his videos; he even shows in places why it happens. And yes, it's all the fault of devs just using stock settings as they come. He also underlines a huge issue (huge to me as well) with the blurry image caused by ever-present TAA on all effects, which is independent of DLSS/DLAA and used to mask all kinds of issues, and then not even implemented properly (again, just off the shelf).
 
It's not irrelevant, because all they have done is make claims and sound convincing to some, yet they have actually shown no project demonstrating what they claim to be true.

Tag me when this proof emerges please. Until then I buy not a single word of it.
Or you could just actually watch their videos and see for yourself, as you clearly didn't - aside from their message, it's actually very interesting to see how much time each effect takes within each frame etc. But that might be just me; I like the technical side, not just the blinky lights. :) That said, this is just one group, one example. Another example, with actually released games, is EXOR Studios with their game The Riftbreaker and a bunch of dev blogs showing in detail all kinds of optimisations they did and why they never used RT for everything, just where it made sense - so their game still performs very well on a variety of hardware. There are many more such examples, but if you prefer PR videos over reality, sure, your choice.
We have proof in titles that RT and PT are great
I don't agree. It considerably lowers image clarity (sharpness) and often makes the image less realistic. Lagging GI (completely unrealistic, very visible in CP2077 for example), horrible noise in any darker areas, blurry reflections (unless you use AI to fix them, which is still very rare in games), unrealistic water (caustics are near impossible even in Hollywood) and a generally very fuzzy/blurry image (not because of upscaling). It all becomes very unappealing to me the moment I look a bit closer - it pretends to be realistic, and that works great on screenshots. In movement, with the ghosting and other artefacts added by TAA, denoising and DLSS slapped on top, it's even worse. The more games with RT I see, the more I feel like we took a bad turn well before the hardware was ready. It's far from perfect, it's just not great, it's far from CGI in movies, and it's generally not that well liked by gamers, as various surveys show over and over again - it's simply form over function.

they have been shown in a bunch of games to date
And how many of these games have actually been successful and made good money? All I see is AAA losing money left and right (for various reasons). CP2077 is a good game, irrespective of graphics. Most others aren't.

, and now we are seeing RT being the default for GI at the baseline, and that's not even in UE powered engines.
Can't wait for the sales numbers :) Even though it carries the well-known Bethesda name, the player count on Steam doesn't look stellar, and on Xbox Series X it also sits way below Hogwarts currently (and many other games), etc. It might well turn out to be a total flop financially, like many other AAA games these days.

Yet we still await this magic raster rendering method that outsmarts RT/PT, instead we have pockets of the online community who make claims and not really show anything to back up the claims.
So, the voxel-based GI heavily promoted by NVIDIA is just raster? Or perhaps just a different approach to RT? I could swear NVIDIA said their VXGI engine can pass the Cornell Box test (I am kidding, of course they said that, it's still on their website, as I just checked to confirm), which is a direct comparison against a real photo of the same scene. Ergo, NVIDIA claim it's already about as close to real-world GI as it gets. Suddenly it can't be used anymore? Wait, maybe because it could run on a GTX 970 and that wouldn't be a good selling point for the RTX series? :O I could give more examples like that, but by now you should get the point. Again, RT/PT is the lazy way for devs to cut costs instead of actually working for good performance - as NVIDIA themselves showed a bunch of years ago, before RTX became their main selling point and everything else got forgotten. Just wait for them to go full AI for gaming and you'll see how quickly RT/PT gets forgotten as the "old, inefficient tech".
Like I said, show the proof, and the followers will come flocking, until then, it's just eFodder.
NVIDIA.com :) They did it 10 years ago. People just forgot, and the new thing is always better... but better for whom exactly? :)
 
 
This would all be solved if both companies hadn't done shrinkflation this generation. What we should have had at launch, by pushing both ranges one tier downwards:
1.)RTX4070TI 12GB = RTX4070TI 12GB for £550 to £600
2.)RX7900XT 20GB = RX7800XT 20GB for around £550
3.)RTX4070 Super 12GB = RTX4070 12GB for £480 to £500
4.)RX7900GRE 16GB = RX7800 16GB for around £480
5.)RTX4070 12GB = RTX4060TI 12GB for £380 to £400
6.)RX7800XT 16GB = RX7700XT 16GB for £380 to £400
7.)RTX4060TI 16GB = RTX4060 16GB for £280 to £300
8.)RX7700XT 12GB = RX7600XT 12GB for £280 to £300
9.)RTX4060 8GB = RTX4050TI 8GB for £200 to £220
10.)RX7600 8GB = RX7500XT 8GB for £200

Generational uplift over previous generation dGPUs (TPU and TH at 1080p):
1.)46% Rasterised and 57% RT improvement over the RTX3070TI and 25% more VRAM
2.)51.6% Rasterised and 60% RT improvement over the RX6800 and 25% more VRAM
3.)44.4% Rasterised and 54.7% RT improvement over the RTX3070 and 25% more VRAM
4.)59% Rasterised and 69% RT improvement over an RX6700XT and 33% more VRAM
5.)46.4% Rasterised and 55.7% RT improvement over an RTX3060TI and 50% more VRAM
6.)45.4% Rasterised and 54.1% RT improvement over an RX6700XT and 33% more VRAM
7.)47.4% Rasterised and 48.5% RT improvement over the RTX3060 and 33% more VRAM
8.)48.6% Rasterised and 61.1% RT improvement over the RX6700 and 20% more VRAM.

You need at least a 40% improvement each generation, at an average generation life of two to two and a half years, to get a doubling of performance every 5 years. Ada Lovelace was the biggest generational improvement for Nvidia since Pascal. AMD was not as impressive, but it was still OK.
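The 40% figure is just compounding; here's a quick sanity check of the maths (my own numbers, assuming a 2 to 2.5 year cadence):

```python
# Compound per-generation uplift over a 5-year window.
for uplift in (0.30, 0.40, 0.46, 0.50):
    for cadence in (2.0, 2.5):                  # years per generation
        gens = 5.0 / cadence                    # generations in 5 years
        total = (1.0 + uplift) ** gens          # cumulative speed-up
        print(f"{uplift:.0%}/gen, {cadence}y cadence -> {total:.2f}x after 5 years")
```

At a 2.5 year cadence, 40% per generation lands almost exactly on 2x after 5 years, which is where that threshold comes from.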

Even the above list could have had more cut-down models substituted in and still hit around a 40% improvement. This generation was a good performance improvement overall, but the awful pricing made it terrible.
 
I guess you're talking about this video.
Nah, actually all of them - each shows various things relevant to the conversation, like the total lack of optimisation in many games these days (often actually easy to fix), the complete butchering of TAA settings, the lack of real transparency in games, a bunch of cheap effects masked by TAA making the image blurry, etc.

I'm no game developer, but I'd love to see some of them sitting around a round table talking about what works and what doesn't for them, especially across different engines.
That would be great, and way more interesting than other non-devs like DF and the like. It's fine to look at the final product and say "I like this or that better", compare screenshots etc., but I don't understand why some people treat them as experts in the field, when in reality they seem to be, at best, at the level of average enthusiast gamers.

With that said, if I get it right, he's proposing a 1080p image (leaving aside that he's talking about performance scaling with resolution and frame rate purely on TFLOPs, which don't really scale linearly), with what he calls "good" AA (but doesn't show it)
He does show the good AA, in other videos, with live examples in movement. It's actually just tweaked TAA, which always had bad press caused mostly by the really poor implementation in UE4, not by the tech itself. That said, DLSS and the like are really also TAA with "AI" slapped on top (more on that later). Also, he clearly put on the graph "AT LEAST 1080p, better quality AA etc." as the base resolution to aim for, instead of upscaling from 840p+ like one often gets with DLSS these days - that was the point, not to limit visuals to 1080p. You'd also get better upscaling from good 1080p than from crappy 864p. In other words, the opposite of what you thought he proposed.

and let the monitor/TV use its own upscaler (probably not the greatest) instead of something like DLSS which can be very good at upscaling from 1080p to 4k...
I am not sure where you got that idea from; this isn't what he said at all, and it wasn't even his point. He was just arguing that native 4K isn't necessary for playing on a TV, for various reasons (same as with 4K movies - most people can't even see the difference, because their TV isn't big enough or close enough to the sofa to actually show it), when you can get a very similar level of detail from 1080p with proper AA, upscaled to 4K as needed. Again, this is in comparison to upscaling from 864p, like a lot of games do (especially on consoles) these days.
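Just to put numbers on that resolution gap (rough pixel maths on my part; the 864p figure is only the commonly quoted console internal resolution, nothing exact):

```python
# Pixel counts and upscale factors to a 3840x2160 target.
target = 3840 * 2160
for name, (w, h) in {"864p": (1536, 864), "1080p": (1920, 1080), "1440p": (2560, 1440)}.items():
    pixels = w * h
    print(f"{name}: {pixels/1e6:.2f} MP, {pixels/target:.0%} of 4K pixels, "
          f"{2160/h:.2f}x scale per axis")
```

1080p is a clean 2x per axis to 4K, while 864p only supplies about 16% of the target pixels, so the upscaler has far more to invent.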

Of course, he's advocating for consoles where you'll normally get FSR, but still, talking down DLSS, AI and hardware upgrades (PS5 Pro) - as in, you don't need them - while leaving out the Series S and ignoring stronger hardware is a bit amusing to me. But hey, his game, his target - that just doesn't have to be the same for everyone!
Again, not what he was talking about, as I described above. When you have proper FPS at 1080p, with good AA and a good amount of detail, then you can do whatever you want with the image - leave it as is, upscale it with various methods, etc. But the goal should be to make the source image as good and as fluid as possible first, and only then worry about the fluff.

Dismissing "stronger" cards like the 4070ti simply because he's focusing on consoles and GPUs around that power doesn't say much.
Consoles and xx60-series cards are by far the majority of the market. The 4070 Ti and up is a tiny fraction in comparison, simply because of the pricing. That said, they test everything on xx60-series cards as that's about console speed, and if you can make the game work well for this largest chunk of the market, higher-end GPUs will simply give you more FPS and higher resolution. Sounds logical to me. You can later add more fluff on top for the higher-end users, but that's an extra. However, when you aim at the 4090 as your main target and the game then runs really badly on xx60 cards, you have failed as a dev to target the majority of the market. And that seems to be the point here. Also, don't forget what happened to the 3080 10GB - it already suffers in games where a weaker GPU with more vRAM gets better results at the same settings; that wouldn't be a problem if games were better optimised.

By the time their game sees the light of day (if it ever does), the 4070 Ti will probably have fallen to x60-class performance of that "current" gen or below... Sure, it will probably run well on old console hardware, but do we really care around here? And even console players seem to be "fine" by default with 30fps and whatever resolution the box can output...
I do play on a console at times, and my wife mostly plays on the console. Neither of us would touch 30FPS with a ten-foot pole... It's just a horrible experience that makes me physically sick. That publishers seem to be pushing 30FPS as the new meta (total regression) doesn't mean it's good for players or that they like it. Games usually give the option of more bling at 30FPS or proper 60FPS with worse lighting - all the stats I've seen show that the vast majority of players go for 60FPS and ignore the bling.

Moving on, he talks about some techniques used in a few games, where you can see in Quantum Break how the AO artefacts appear around the character's head. One solution would be/was HBAO+... a closed-source technique, aka NVIDIA's tech if I'm not mistaken :D MXAO, on the other hand, seems like a nice option, and Bayer in general for Lumen.
AMD came up with very good AO methods, along with a bunch of other things - all open tech, available for years now. There's a reason many indie devs go for AMD tech for SSR, AO, upscaling etc. - it's simply easy for them to tweak and implement as desired, instead of NVIDIA's black-box approach.

The Lumen talk... well, in software it won't be great/ideal for the scene he's highlighting (or, it seems only logical, it would have been done already by others), and he also doesn't test it in HW.
Again, performance matters - the HW version on a 3060 would be completely unplayable, so it might as well not exist at all.

Playing Immortals of Aveum (which apparently has it), I'm failing to see all the issues he's talking about, outside some minor noise on a very specific material and level. Ah, and let's not forget he's disabling other effects (TAA?), which makes it look that bad :))
He's showing that TAA is everywhere in games, applied to all kinds of effects, masking artefacts. You can't disable it in the settings, and it makes the whole image very blurry before one even adds upscaling (DLSS and others) and/or AA. Before his vids I wasn't even aware TAA is so widespread in games, in places where we wouldn't expect it (i.e. not just doing actual AA). It also explained to me why everything in games is so blurry these days - I thought it was DLAA or DLSS, but turning them off changed nothing; now I know why. And it's not just my old eyes. :D

Pointing to The Division as an example isn't ideal either, I'd say: it looks decent if you look at it broadly, but it still has the same issues - missing shadows where they should be, poor reflections where the reflected image actually changes a second or two after you start looking at it, and it's not even remotely accurate...
Shadows would be rendered using a different method; this is just about GI. Its lagging isn't any different from (it's actually often better than) the GI in CP2077 with PT - that one can lag like hell without using AI, and even with AI it's far from perfect. Very disturbing to look at, as my brain knows it's just wrong. :)

The 1st and 2nd pics show the shortcomings of the GI


Another fail from the GI, where it illuminates the scene way, way too much... https://postimg.cc/G8dWtwNq
It's also a relatively old game (8 years and counting), but the whole point is that it was very cheap performance-wise and still looked great for the hardware level of that time. That doesn't mean it can't be improved or tweaked further - hardware has certainly got faster since then.

Radiance Cascades seems possibly fine, but since he says it was usually done in 2D and has only recently seen some 3D advancements, of course it hasn't been talked about before.
And it likely won't be in the AAA world, as that would require them to do something other than enabling Lumen - it costs money, so it won't be done. Unless something finally gives in that world, which doesn't seem far off considering their games are flopping left and right financially these days.

Then in closing, he shows Stalker 2 (via a Reddit post) running and looking pretty bad without any AA (TAA, DLSS, DLAA, FSR). Well, that's on AMD's side to improve its FSR. Looks great with DLSS!
I don't believe that was his point at all :) Also, this whole vid was more of a quick set of bullet points, without many details. The comments underneath seem to be full of devs, by the way; quite a few said they had just learned about methods they'd never heard of before - the AAA dev world is just like any other IT world I work with daily: people simply don't research and don't realise there's more than one way of doing things. I see it daily, and it makes me sad at times how closed-minded many devs and other IT people are. It usually takes someone unusual, a genius, to push things forward - like J. Carmack back in the day with the Doom engine, built in very clever ways using old math that most people didn't even know about.

"These problems won't be focused on or fixed by meanstream industry, because there is no market incetinve"... I mean, Sony launched PS5 Pro with AI upscaling...
This whole AI upscaling is very misleading though, isn't it? It's really just TAA with an AI convolution used for better frame matching and for removing temporal artefacts (which it still fails to do now and then, with ghosting and other artefacts visible in places even in the newest version). People imagine the AI in it is filling in blanks, adding missing details etc. - but that is not what it's doing there at all, as per NVIDIA's own papers. Which is why it barely uses the tensor cores even on a 3060, because it has very little to do in the whole process. Ergo, it's marketing more than anything of real value.
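To make the "it's really just TAA underneath" point concrete, here's a bare-bones sketch of temporal accumulation (toy code and made-up names, not DLSS or any vendor's actual pipeline): blend the current frame with the reprojected history, and clamp the history to the current frame's local neighbourhood - which is exactly the step that trades ghosting against blur and shimmer.

```python
import numpy as np

def taa_accumulate(current, history, alpha=0.1, clamp_radius=1):
    """One step of toy temporal anti-aliasing on greyscale frames.

    current: this frame's (noisy/aliased) image.
    history: the accumulated result from previous frames, assumed already
             reprojected with motion vectors (skipped here - static camera).
    alpha:   how much of the new frame leaks in per step; smaller alpha means
             smoother results but more ghosting when things move.
    """
    h, w = current.shape
    out = np.empty_like(current)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(y - clamp_radius, 0), min(y + clamp_radius + 1, h)
            x0, x1 = max(x - clamp_radius, 0), min(x + clamp_radius + 1, w)
            neighbourhood = current[y0:y1, x0:x1]
            # Clamp history to the local min/max of the current frame: rejects
            # stale history (less ghosting) at the cost of letting noise back in.
            clamped = np.clip(history[y, x], neighbourhood.min(), neighbourhood.max())
            out[y, x] = alpha * current[y, x] + (1.0 - alpha) * clamped
    return out

# Feed a noisy version of the same image in a few times and watch the noise settle.
rng = np.random.default_rng(1)
clean = np.tile(np.linspace(0.0, 1.0, 16), (16, 1))
history = clean + rng.normal(0, 0.2, clean.shape)
for frame in range(8):
    noisy = clean + rng.normal(0, 0.2, clean.shape)
    history = taa_accumulate(noisy, history)
print("noise std after accumulation:", round(float(np.std(history - clean)), 3))
```

The "AI" in DLSS-style upscalers essentially replaces those hand-tuned blend/clamp heuristics with a learned one - the accumulation structure itself doesn't change, which is the point I'm making above.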
 
Just a quick note: I don't believe I can keep up this many words per post in the future, I just had a very boring day or two recently. :D Don't expect it to be the norm! :P
 
Just a quick note: I don't believe I can keep up this many words per post in the future, I just had a very boring day or two recently. :D Don't expect it to be the norm! :P

There used to be a user named drunkenmaster. He would post walls of text all the time. You, sir, give him a run for his money.
 