What do gamers actually think about Ray-Tracing?

As far as I'm aware, exact mirrors are much easier to calculate in RT (very simple angles, all the same, relatively easy to simplify, etc.), hence devs use them to recover some FPS, and people like the bling. When you get to rough surfaces with reflectivity, the amount of work needed to calculate them is much higher - the opposite of what people imagine. It's a similar story with moving water: current algorithms rely on temporal accumulation to deal with noise, and a surface that is moving all the time breaks that accumulation. Hence you either get bad quality or the water is just a flat, unmoving surface (puddles, etc.), as that is again much faster to calculate.
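To make the mirror-vs-rough cost difference concrete, here's a toy Python sketch (my own illustration, not code from any real renderer): a perfect mirror needs a single reflected ray per pixel, while a rough surface needs many jittered rays, and their average still carries noise - which is exactly what the temporal accumulation then has to clean up.

```python
import math
import random

def reflect(d, n):
    # Perfect mirror: one reflected direction of d about unit normal n.
    dot = sum(a * b for a, b in zip(d, n))
    return tuple(a - 2 * dot * b for a, b in zip(d, n))

def glossy_samples(d, n, roughness, count, rng):
    # Rough surface: jitter the mirror direction; many samples per pixel
    # are needed before the average stops looking noisy.
    base = reflect(d, n)
    samples = []
    for _ in range(count):
        jittered = tuple(c + rng.uniform(-roughness, roughness) for c in base)
        length = math.sqrt(sum(c * c for c in jittered)) or 1.0
        samples.append(tuple(c / length for c in jittered))
    return samples

rng = random.Random(42)
mirror_dir = reflect((0.0, -1.0, 0.0), (0.0, 1.0, 0.0))
rough_dirs = glossy_samples((0.0, -1.0, 0.0), (0.0, 1.0, 0.0), 0.3, 64, rng)
print(mirror_dir)       # (0.0, 1.0, 0.0) - a single ray suffices
print(len(rough_dirs))  # 64 rays for the same pixel, and it's still noisy
```

One ray versus 64 per pixel is the whole story: the "cheap bling" mirrors cost a fraction of what believable rough reflections do.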

In the end, Jensen himself repeats in most interviews that GPUs are too slow to calculate all the fancy graphics and only AI can help - like with the reflections and GI in CP2077. Barely any games use that approach, though, hence you can see loads of shortcuts taken.
You have waves and water reacting to you in CP2077, and somewhat in TW3 with RT. The puddles not reacting is just the devs not giving enough attention to that.
 
Eh, define "raster"? A lot of the time, "raster" is understood by the common person as baked-in lights and the like. Often he's talking more or less about a hybrid RT approach - use bits of RT and mix them with other algorithms. I wouldn't call voxel-based GI "raster" - not that long ago NVIDIA was pushing it as their own great solution for super-realistic lighting in games, but then decided hardware RT would sell their GPUs better (as the competition couldn't run it at that point), so they focused on that instead. Both methods use RT at their core, just with a different approach. There are plenty of other examples like that. The main difference is performance - the RT pushed by NVIDIA is horribly inefficient and won't run well on hardware like a 4060, for example, whereas the alternative will without issues.
You're not. You're limited by the performance of what is available to the common gamer. Raster, RT or PT is just math - the faster you can do the computation, the fancier the way you can calculate light. That's all there is to it. Raster, at its core, is already a very simplified form of RT, and there's nothing in the GPU itself that prefers one over the other - it's all just math.
Because Metro is a very well optimised game where the devs really cared about what they did and how they did it. It uses very little RAM and vRAM and works on a large variety of hardware. It's a benchmark of what is possible - modern games are WAY worse in that regard.
That's his point too - these are tools that come off the shelf in UE, hence everyone just turns them on and that's it. No tweaking done, no settings adjusted even; it's all defaults as-is in pretty much every UE5 game. Nanite is much better than just slapping crazy high-polygon meshes on the screen, but it's much worse than properly optimised meshes. Lumen "just works", but that sounds exactly like NVIDIA's selling point for high-end GPUs - it just works, the more you buy... :) So who is it really serving, the players OR the lazy devs? Games cost more and more, but studios spend less and less time actually using the engine properly, optimising the game, etc. We get games that most gamers can't even play properly anymore, and consoles are back to the 30FPS world - total regression. And don't tell me games at 30FPS are of higher fidelity. :P

He does address it in his videos; he even shows in places why this happens. And yes, it's all the fault of the devs for just using stock settings as they come. He also underlines a huge issue (huge to me as well) with the blurry image caused by ever-present TAA on all effects, which is independent of DLSS/DLAA, used to mask all kinds of issues, and then not even implemented properly (again, just off the shelf).

In RT/PT you trace rays from the camera into the scene, or from a light source, and bounce them off the scene until they reach the camera (the ones that do). What falls outside of that is "raster", or not RT/PT.
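That camera-to-light loop can be sketched in a few lines (a toy illustration only - `toy_scene` is a made-up stand-in for real ray/scene intersection, not anything from an actual renderer):

```python
import random

def trace_path(camera_ray, scene_hit, rng, max_bounces=4):
    # Minimal path-tracing control flow: start at the camera, bounce through
    # the scene, stop when a light is hit or the bounce budget runs out.
    ray, bounces = camera_ray, 0
    while bounces < max_bounces:
        kind, next_ray = scene_hit(ray, rng)
        if kind == "light":
            return bounces  # this path connected the camera to a light
        ray, bounces = next_ray, bounces + 1
    return None  # path terminated without reaching a light (wasted work)

# Toy "scene": any bounce has a 30% chance of hitting a light.
def toy_scene(ray, rng):
    return ("light", None) if rng.random() < 0.3 else ("surface", ray)

print(trace_path("camera", toy_scene, random.Random(1)))
```

The paths that never reach a light within the budget are where a lot of the cost (and the noise that denoisers fight) comes from.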

It is still math, but simplified, with less accurate results. You can't get proper reflections without RT (the only alternative I'm aware of is RTT - render to texture - which basically means rendering the scene a second time with a mirrored camera). Plenty of shadows are off, not all lights cast shadows, etc.
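The RTT trick mentioned above boils down to mirroring the camera across the reflective plane and rendering the whole scene a second time from there. A minimal sketch of just the mirroring math (my own illustration; the helper name is made up):

```python
def mirror_point(p, plane_point, plane_normal):
    # Reflect a point across a plane (unit normal). This is the core of the
    # "render the scene twice" planar-reflection trick: the second pass uses
    # a camera mirrored across the reflective surface.
    d = sum((a - b) * n for a, b, n in zip(p, plane_point, plane_normal))
    return tuple(a - 2 * d * n for a, n in zip(p, plane_normal))

# Camera 3 units above a horizontal mirror at y = 0:
camera = (0.0, 3.0, -5.0)
mirrored = mirror_point(camera, (0.0, 0.0, 0.0), (0.0, 1.0, 0.0))
print(mirrored)  # (0.0, -3.0, -5.0): render the scene again from below the plane
```

It only works for flat mirrors (one extra full scene pass per reflective plane), which is why it never scaled to the rough, curved surfaces RT handles.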

Yes, because Metro IS very optimised in its approach; it SHOULD be something to start from, not to ignore. It should be the Bible, and he ignores it.

NVIDIA is not necessarily wrong with "it just works". What is wrong is pricing the hardware very high while offering quite little on the low end. The devs are at fault for not making the best implementation that current tech allows. It's a mix.
 
You have waves and water reacting to you in CP2077, and somewhat in TW3 with RT. The puddles not reacting is just the devs not giving enough attention to that.
TW3 water is much smoother than the raster one, though. From what I've read devs saying on one of the Polish forums, it was done for that specific reason - waves that were too big were breaking the RT.
 
That would be great, and way more interesting than non-dev people like DF and the like. It's fine to look at the final product and say "I like this or that better", compare screenshots, etc., but I don't understand why some people consider them experts in the field, when in reality they seem to be at best on the level of average enthusiast gamers.
He does show the good AA in other videos, with live examples in motion. It's actually just tweaked TAA, which always had bad press that was caused mostly by the really bad implementation in the UE4 engine, not by the tech itself. That said, DLSS and the like are really also TAA with "AI" slapped on top (more on that later). Also, he clearly put on the graph "AT LEAST 1080p, better quality AA, etc." as the base resolution to aim at, instead of upscaling from 864p+ like one often gets with DLSS these days - that was the point, not to limit visuals to 1080p. You would get better upscaling from good 1080p than from crap 864p - that's the point too. As in, the opposite of what you thought he proposed.
I am not sure where you got that idea from; that isn't what he said at all, and it wasn't even his point. He was just arguing that native 4K isn't necessary for playing on a TV, for various reasons (same as with 4K movies: most people can't even see any difference, as they don't have a big enough TV, close enough to their sofa, to actually be able to see it), when you can get a very similar level of detail from 1080p with proper AA, upscaled to 4K as needed. Again, this is in comparison to upscaling from 864p, like a lot of games (especially on consoles) do these days.
Again, not what he was talking about, as I described above. When you have proper FPS at 1080p, with good AA and a good amount of detail, then you can do whatever you want with the image - leave it as-is, upscale it with various methods, etc. But the goal should be to make the source image as good and fluid as possible first, and worry about the fluff later.
Consoles and xx60-series cards are by far the huge majority of the market. 4070 Ti+ is a tiny fraction of the market in comparison, simply because of the pricing. That said, they test everything on xx60-series cards, as that's about console speed, and if you can make the game work well for this largest chunk of the market, higher-end GPUs will give you even more FPS and higher resolution. Sounds logical to me. You can later add more fluff on top for higher-end users, but that's an extra. However, when you aim at the 4090 as your main target and the game then works really badly on xx60 cards, you have failed as a dev to target the majority of the market. And that seems to be the point here. Also, don't forget what happened to the 3080 10GB: it already suffers in games where a weaker GPU with more vRAM gets better results at the same settings - it wouldn't be a problem if games were better optimised.
I do play on a console at times; my wife mostly plays on the console. Neither of us would touch 30FPS with a ten-foot pole... It's just a horrible experience, one that makes me physically sick. That publishers seem to be pushing 30FPS as the new meta (total regression) doesn't mean it's good for players, nor that they like it. Usually games give the option of more bling at 30FPS or proper 60FPS with worse lighting - all the stats I've seen show that the huge majority of players go for 60FPS and ignore the bling.
AMD came up with very good AO methods, along with a bunch of other things - all open tech, available for years now. There's a reason many indie devs go for AMD tech like SSR, AO, upscaling, etc. - it's just easily available for them to tweak and implement as desired, instead of NVIDIA's black-box approach.
Again, performance matters - the HW one on a 3060 would be completely unplayable, so it might as well not exist at all.
He's showing that TAA is everywhere in games, on all kinds of effects, masking artefacts. You can't disable it in the settings, and it makes the whole image very blurry - and that's before one even adds upscaling (DLSS and others) and/or AA. Before his vids I wasn't even aware TAA was so widespread in games, in places we wouldn't expect it (i.e. not doing actual AA). It also explained to me why everything is so blurry these days in games - I thought it was DLAA or DLSS, but turning them off changed nothing; now I know why. And it's not just my old eyes. :D
Shadows would be rendered using a different method; this is just about GI. Its lagging isn't any different from (it's actually often better than) the GI in CP2077 with PT - that one can lag like hell without using AI, and even with AI it's far from perfect. Very disturbing to look at, as my brain knows it's just wrong. :)
It's also a relatively old game (8 years and counting), but the whole point is that it was very cheap performance-wise and still looked great for the hardware of that time. That doesn't mean it can't be improved or tweaked further - hardware has certainly got faster since then.
And likely won't be in the AAA world, as that would require them to do something other than enabling Lumen - that costs money, so it won't be done. Unless something finally gives in that world, which doesn't seem far off, considering their games are flopping left and right these days, financially.
I don't believe this was his point at all. :) Also, this whole vid was more of a set of quick bullet points, without many details. The comments underneath seem to be full of devs, by the way; quite a few said they just learned something new about methods they had never heard of before. The AAA dev world is just like any other IT world I work with daily - people simply don't research and don't realise there is more than one way of doing things. I see it daily; it makes me sad at times how closed-minded many devs and other IT people are. It usually takes someone unusual, a genius, to push things forward - like J. Carmack in the past with the Doom engine, created in very clever ways using old math equations most people didn't even know about.
This whole AI upscaling is very misleading though, isn't it? It's really just TAA with an AI convolution used for better frame matching and removing temporal artefacts (which it still fails to do now and then - ghosting and other artefacts are still visible in places even in the newest version). People imagine the AI in it is filling in blanks, adding missing details, etc. - but that is not what it's doing there at all, per NVIDIA's own papers. Which is why it barely uses the tensor cores, even on a 3060, as it has very little to do in the whole process. Ergo, it's marketing more than anything of real value.
DF, I believe, tend to speak about what's in use now and focus less on what "could" be. Frankly, it would be nice in principle, but it's just an imagination exercise, since devs don't implement what's available and doable at the moment - regardless of whether it's RT/PT-related or not.

Tweaking TAA, to me, seems to be akin to choosing different setups for DLSS - not readily available to the user unless you go with a separate program like DLSS Tweaks. The real test will be whether it holds up just as well as DLSS when you set the resolution to 50% or thereabouts, instead of a "clean" native resolution or a relatively modest upscaling ratio (like 0.8-0.9 or so). In terms of upscaling, if you use something like FSR at a lower resolution or with a more aggressive setting, it will suck. Anyway, if he has a general fix for TAA, then it can also be used to fix the other areas where TAA is applied (on other effects), and perhaps he can make a fortune by selling the idea to AMD (pretty cool to be better than their entire FSR/graphics software team!) :)

There's no "going back to 30fps" - that was pretty standard for the last gen, and the current gen will fall in line as it gets closer to the end of its life cycle. It would have happened sooner if studios actually pushed something new instead of remakes and games made for the old gen too.

Snowdrop from The Division evolved into a full RT engine (the Avatar game). If you separate shadows and GI, you still end up short at some point.

To me he just seems ****** at Epic. If it is THAT bad, just drop it and go for CryEngine, Unity, etc. Why waste precious time on a fundamentally broken engine?

With that said, I'm waiting to see what he's actually offering - besides words.
 
TW3 water is much smoother than the raster one, though. From what I've read devs saying on one of the Polish forums, it was done for that specific reason - waves that were too big were breaking the RT.
Considering the implementation is rather superficial in that game, I'm not surprised. Nevertheless, it's still not "flat". :)
 
DF, I believe, tend to speak about what's in use now and focus less on what "could" be. Frankly, it would be nice in principle, but it's just an imagination exercise, since devs don't implement what's available and doable at the moment - regardless of whether it's RT/PT-related or not.
That is the issue though, isn't it? We KNOW things can be done differently, often much more efficiently, but we have one big corporation that decided this is the best way to sell hardware and just pushed it forward without much consideration for the cost to gamers. Then we get results like those shown by HUB (they very nicely demonstrated all the points I brought up about noise, blur, etc.). And we KNOW hardware can't get fast enough to push the current RT/PT methods much faster, so something has to give.

Tweaking TAA, to me, seems to be akin to choosing different setups for DLSS - not readily available to the user unless you go with a separate program like DLSS Tweaks.
But that is not for the end user to tweak - that was their point too. It's for the devs to do, not just clicking "Enable" in the editor. The editors even have a few presets, but most devs seem not to bother with them. That is pure laziness and/or cost cutting stemming from the pencil-pushers.

The real test will be whether it holds up just as well as DLSS when you set the resolution to 50% or thereabouts, instead of a "clean" native resolution or a relatively modest upscaling ratio (like 0.8-0.9 or so). In terms of upscaling, if you use something like FSR at a lower resolution or with a more aggressive setting, it will suck. Anyway, if he has a general fix for TAA, then it can also be used to fix the other areas where TAA is applied (on other effects), and perhaps he can make a fortune by selling the idea to AMD (pretty cool to be better than their entire FSR/graphics software team!) :)
His point seems to be more this: use TAA for what it was created to do, which is AA. It can be set up to work very well at that, using just 2 frames. But that's not how it is used these days - they use it to mask issues they themselves created in raster, where instead of AA it is turned into more of a general temporal-accumulation algorithm. That creates a blurry image and adds to the ghosting, and it wasn't necessary in the first place. RT is a different beast altogether here.
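The 2-frame-AA versus long-accumulation difference can be shown with a toy sketch (an illustration under my own assumptions, not code from any engine): the core of TAA is an exponential blend between the accumulated history and the current frame.

```python
def taa_resolve(history, current, alpha=0.5):
    # Exponential history blend at the heart of TAA: keep (1 - alpha) of the
    # accumulated history and mix in alpha of the new frame. alpha = 0.5 over
    # two jittered frames is plain 2-frame AA; a small alpha means a long
    # history, which is where the blur and ghosting come from.
    return [(1 - alpha) * h + alpha * c for h, c in zip(history, current)]

# Two frames of a 4-pixel row, sampled at alternating subpixel offsets:
frame_a = [0.0, 1.0, 0.0, 1.0]
frame_b = [1.0, 0.0, 1.0, 0.0]
print(taa_resolve(frame_a, frame_b))  # [0.5, 0.5, 0.5, 0.5] - the edge averages out
```

With `alpha=0.5` the two jittered frames simply average into a clean anti-aliased result; drop alpha to 0.1 or lower and the history dominates, which is the general temporal-accumulation mode that smears detail and ghosts on motion.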

There's no "going back to 30fps" - that was pretty standard for the last gen, and the current gen will fall in line as it gets closer to the end of its life cycle. It would have happened sooner if studios actually pushed something new instead of remakes and games made for the old gen too.
For a few console generations the big marketing thing was "60FPS!", then "4K!" just after, but then the RT craziness came, which consoles are way too underpowered for (as even much faster GPUs are), and we got regression. Gladly, in most games on consoles one can choose 30FPS with more bling, or 60FPS with slightly worse graphics - I've not found a single game on my Xbox where 30FPS was better, so far. It's more marketing rubbish than something actually liked by players, I would say.

Snowdrop from The Division evolved into a full RT engine (the Avatar game). If you separate shadows and GI, you still end up short at some point.
Most people seem to notice the RT reflections first (as they're the easiest to spot), then they move on to GI (most things in real life aren't very reflective anyway), as that is the main advantage of RT, whilst hardly anyone seems to notice or care about shadows. Personally I'm all about GI, care little for reflections, and shadows don't matter at all (raster can do them well enough as-is). But that's me. :) Sadly, GI also currently reveals a huge amount of noise/lag issues.

To me he just seems ****** at Epic. If it is THAT bad, just drop it and go for CryEngine, Unity, etc. Why waste precious time on a fundamentally broken engine?
He explained why it has to be UE for them, and why it is/will be for most studios - all the 3rd-party engines have been getting ditched one by one for a while now, as studios move to UE5+. It's very well documented, it's what graduates (new junior employees) know how to use, and it has a huge amount of assets, guides, tutorials online, etc. There's just no better choice by far - all the other engines are either very limited or nowhere near as popular and supported as UE5. One could say Epic is rapidly becoming a monopoly with their engine.

With that said, I'm waiting to see what he's actually offering - besides words.
That would be nice, yes. Though he (or his whole group) seems to be one of the bigger UE engine contributors currently, judging by the UE forums, etc.
 
Hardware RT performance still needs to catch up though,
Arguably it never needed to catch up, as evidenced by Indiana Jones: with hardware ray tracing as the only RT method out of the box, it flies through frames. It's only path tracing that tanks the FPS.

I think people have become so used to other game engines doing RT so poorly that this is the masses' view of HWRT.
 
I don't pixel peep. I talk about the need to zoom in 400% to see something, yes, but I don't base my gaming experience on pixel peeping like they do. Again, I follow Alex's take on how good RT/game rendering is, based on several technical factors which I also talk about. If people misunderstand that simple concept, then that's an issyou, not an issme :p
 
I talk about the need to zoom in 400% to see something, yes
;) :D
 