The RT Related Games, Benchmarks, Software, Etc Thread.

And it pretty much hit a brick wall. As John Carmack himself said in regard to realistic VR (RT being a part of it) - we need at least a 50x speed increase to make these things look actually real. But we can't get that, as Moore's Law is dead and we are already at the edge of what we can achieve on silicon. We can get a few times faster at most and that's it. New tech will have to come, possibly not based on silicon - such tech doesn't even exist yet, and it will take at least a decade to push something into mass production. Possibly longer. This is one of the reasons NVIDIA pushes DLSS 3 so hard - they see it as the only way forward to increase FPS, because they can't do it with brute force (a pure hardware solution) anymore.

In what way? In terms of hardware advancement, yes, that is true, as you said - hence the need for machine learning alternatives such as upscaling and now frame generation.

From an adoption POV, the ball is rolling and picking up speed. Not only are the current RT effects being used more often, in both new games and old games getting RT added in, but a ton of new RT graphical effects have just been announced/released alongside Nvidia's 40xx announcement, the Cyberpunk Overdrive RT mode and Portal RTX. Like BF5's RT reflections, those will eventually trickle down into other games (and be better and more widely implemented), along with optimisations for the 40xx hardware - supposedly a 40% improvement in efficiency for RT workloads. Time will tell how that goes, and whether those improvements also turn out to be beneficial/applicable to hardware other than the 40xx.

AMD are also improving their RT performance:

RADV Radeon Vulkan Driver Continues To Improve Ray Tracing Performance For AMD GPUs


Sony and Microsoft are pushing for it in their consoles/games as well.

The main problem/brick wall we still have is the number of people on non-RT-capable hardware, and games primarily still being made for PS4/Xbox One. Once the latter stops and most of the market has RT-capable hardware, there is no reason we couldn't move to having more titles like Metro EE.
 
This is all good, but I'll repeat Carmack's words - to have actually real-looking images in games (ones you would have a really hard time distinguishing from reality - which is the ultimate goal of RT), we need at least 50x the current performance. Neither hardware on silicon nor software can give us that (the 40 series gives up to 4 times the speed-up, with all the trickery of DLSS 3, in the best possible situation). That is not even in the same order of magnitude. And they have already implemented into the GPU all the newest tech taken from CPUs (out-of-order processing, big caches, etc.). There's not much left to optimise; it will have to involve either brand new maths (unlikely but not impossible) or a very different technology altogether. Is it possible? Sure, but not in a few years - these things take a long time to develop, test and implement, and therefore cost a lot. DLSS 3 took NVIDIA (per DF's claim) about 7 years of constant development to make viable for release - it actually predates DLSS 2, it just took that long to develop.
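To put that gap in rough numbers, here's a quick back-of-the-envelope sketch (the per-generation multipliers and the 2-year cadence are my own assumptions for illustration; only the 50x target comes from the posts above):

```python
import math

# How many GPU generations until performance grows ~50x?
# Both per-generation multipliers below are illustrative assumptions.
TARGET = 50.0

for per_gen, label in [(4.0, "4x per gen (best case, DLSS 3 trickery included)"),
                       (1.5, "1.5x per gen (more typical raw uplift)")]:
    gens = math.log(TARGET) / math.log(per_gen)
    years = gens * 2  # assuming ~2 years per GPU generation
    print(f"{label}: ~{gens:.1f} generations, ~{years:.0f} years")

# 4x per gen:   ~2.8 generations, ~6 years
# 1.5x per gen: ~9.6 generations, ~19 years
```

Even the optimistic case is several generations out, and the 4x figure was a one-off best case with frame generation, not a sustainable per-generation rate - at raw silicon rates it's roughly two decades.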
 
This is all good, but I'll repeat Carmack's words - we need at least 50x the current performance.

Obviously that is the end goal, which is a VERY long way off - probably fits with your 10-20 year timeline (and even then, that's still a bit too long given the UE5 demos we have seen, which I would argue are very close to photorealism). The point is that even with the very limited RT used in games we have "now", it looks better in most cases, and Metro EE, an RT-only title, proves current hardware is already capable of running RT-only rendering to some extent:

[Image: Metro Exodus EE benchmark chart]

That's without using upscaling tech @ 4K, and without sacrificing settings either (except for HairWorks, since it tanked AMD perf).

Then compare the console versions.

The biggest problem is developers having to develop for both old gen and new gen at once.

Disagree on the software front. As posted, new tech requires a learning period to get the best from it - hence why Nvidia have found other ways to improve their performance through software tweaks, the same way AMD are improving their RT performance on existing GPUs, the same way games are able to get more from it, and the same way Sony have found ways to get the most from RT, e.g. Ratchet & Clank. Just look at BF5's RT: it was awful, a big performance hit for VERY limited RT reflections, compared to the RT reflections we see in games now, where it looks better, is used more often AND runs much better. It's still very much a learning process, as Microsoft have also stated.

Throwing more brute-force hardware power at a problem is not always the answer - hence the machine learning/AI boom we are seeing in various industries, and even that alone is still very much just the tip of the iceberg of what is possible.
 
AMD have been behind in RT hardware, so they have a clear path to improve it - easy gains, you could say. And by extension, the consoles are in a similar situation. After that, I am curious what they will come up with. RT is very difficult to accelerate more than we already have in hardware (short of pumping up the clocks of each core, which can be difficult and power hungry). What you're saying is that using machine learning and other trickery we might overcome the hardware issue - which might well happen, but again, that requires many years of development and lots of money. AMD have only recently had enough money for serious R&D, hence they likely did not do much before. NVIDIA - maybe; they hid DLSS 3 very well over so many years, as even rumours about it did not really show up before the presentation.

However, software has limitations too and machine learning is not a solution to all problems. DLSS 3 has drawbacks, and I have a feeling independent tests will not be as kind to it as DF was. Even if/when AI can assist with things, it can't do it as fast as you imagine. Just today I read about an AI devising a new way of multiplying matrices (one that could potentially be used in GPUs, by the way), better than human-designed algorithms. Not even the devs and mathematicians who designed the AI understand what it did, or how and why this new way is better, as the method is very convoluted and counter-intuitive for humans. The article claimed it will take a few years to figure out how it works and how to apply it in actual products like GPUs, server farms, etc. And yet this new algorithm, the state-of-the-art, best-of-the-best AI creation so far, is still only about 20% faster than what we have had for many generations now.
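For anyone wondering what "a new way of multiplying matrices" looks like in practice (the article is presumably about DeepMind's AlphaTensor): the classic human-found example is Strassen's 1969 algorithm, which multiplies 2x2 (block) matrices with 7 multiplications instead of the naive 8 - the AI-found schemes push the same idea further for larger block sizes. A minimal sketch:

```python
import numpy as np

def strassen_2x2(A, B):
    """Strassen's trick: 7 multiplications instead of 8 for a 2x2
    (block) matrix product, at the cost of extra additions. Applied
    recursively to blocks, fewer multiplications means a lower
    asymptotic cost than the naive algorithm."""
    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    e, f, g, h = B[0, 0], B[0, 1], B[1, 0], B[1, 1]

    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)

    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4,           m1 - m2 + m3 + m6]])

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
assert (strassen_2x2(A, B) == A @ B).all()  # matches the naive product
```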

In summary, I am all for RT and wide adoption, and that will happen, but the effects won't be what you imagine. Likely a few sprinkled RT effects added on top of good old raster, with a bunch of trickery on top. However, before GPUs evolve enough to make the image truly realistic, a completely different technology might gain the upper hand for gaming, and GPUs as we know them (in PC gaming especially) might not even be a thing anymore. Looking back at the history of technology, the next big thing is usually not what we imagined but something completely different altogether - for example, direct brain stimulation, using our own memories to create images for the game instead of GPUs and RT and the like.
 
It will ultimately come down to what the next gen consoles offer - they would be what, another 5 years away? I would fully expect devs to drop the last gen consoles entirely then, which means no need to support rasterisation-only paths any longer, and I suspect the majority of PC gamers will also be on RT-capable hardware by then too.

Sony have found a way to get more RT and applied for a patent too, so it will be interesting to see what they do:

 
we need at least a 50x speed increase to make these things look actually real.
That would be a fundamental mistake of orientation. The right goal is for things to feel (not look) real, and let the brain fill in the blanks - especially for VR. Besides, VR has way bigger problems than hardware power right now; photorealism is not why adoption is still slow. They have to take Quest 2 and repeat the process for another 10 years first, then they can worry about photorealism and the like (cake before the cherry!).

Regardless, we're very far from reaching hardware limits. Hell, one of the biggest easy wins right now for RT is just pumping up the memory bandwidth like mad (we had HBM2 on consumer GPUs 4 years ago - it only went away because they wanted to sell less for more, but that option is always there when needed). Besides that, Imagination Tech has a nice little checklist of RT tiers where we can see how GPUs can evolve (and hell, Intel just demonstrated how simple it is to drop some potent hardware RT in there, even with an otherwise crappy product).
It's just that all the vendors are happy chasing margins and drip-feeding improvements gen on gen. That's the real problem if we're talking about progress. Technologically there's a LOT left on the table that they can add at any time if we really needed it, but the truth is we just don't, as the market is still very price-sensitive and bound by the 2020 consoles anyway. RT is a nice luxury on desktop for high-end buyers, but it otherwise isn't a compelling enough feature to prompt the majority to chase more RT performance given current trends.
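On the bandwidth point, the headroom is easy to see from simple interface maths (peak bandwidth = bus width x per-pin data rate / 8; the card specs below are public retail figures, used purely as illustration):

```python
def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    """Peak theoretical bandwidth in GB/s for a GDDR/HBM interface."""
    return bus_width_bits * data_rate_gbps / 8

# Radeon VII (2019): 4096-bit HBM2 at 2 Gbps per pin -> ~1024 GB/s
print(bandwidth_gb_s(4096, 2.0))   # 1024.0

# RTX 4090 (2022): 384-bit GDDR6X at 21 Gbps per pin -> ~1008 GB/s
print(bandwidth_gb_s(384, 21.0))   # 1008.0
```

A wide HBM2 stack from 2019 already matched what a 2022 flagship's GDDR6X bus delivers, which is the point: the option was shelved, not exhausted.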
 
Sony have found a way to get more RT and applied for a patent too, so it will be interesting to see what they do:


This is the stuff I like to see. I'm all for RT, but studios, devs, etc. need to find ways to optimise it and not simply count on the brute-force approach of "MORE HARDWARE!!!!" :D
 
This is one of the reasons NVIDIA pushes DLSS 3 so hard - they see it as the only way forward to increase FPS, because they can't do it with brute force anymore. They also use that (as one of the reasons) as an excuse for hiking up prices. The only other way forward would be even bigger chips - bigger GPUs in effect - and way more power use, and consumers will not buy that. I highly doubt the 40 series will sell well at the current prices, at least.

Let's see the studios bother to create next-gen games first, then worry about the hardware side of things. There's so much cool stuff that can be done and feel "real". So far all we get are remakes and the same ol' style of games, but "newer".
 
That's with DLSS off, remember, and 1080p is a CPU-centric res too. I expect 1440p to be much more favourable to the GPU, and once DLSS is on we're talking over 60fps anyway at either res.
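For reference on why DLSS frees up that much headroom: it renders internally at a lower resolution and upscales. A quick sketch using the commonly cited per-axis scale factors (actual factors can vary per title and DLSS version):

```python
# Commonly cited DLSS per-axis render scale factors (illustrative).
SCALE = {"Quality": 2 / 3, "Balanced": 0.58, "Performance": 0.5}

def render_res(out_w, out_h, mode):
    """Internal render resolution for a given output res and DLSS mode."""
    s = SCALE[mode]
    return round(out_w * s), round(out_h * s)

for mode in SCALE:
    print(mode, "@ 1440p ->", render_res(2560, 1440, mode))

# Quality     @ 1440p -> (1707, 960)
# Balanced    @ 1440p -> (1485, 835)
# Performance @ 1440p -> (1280, 720)
```

So at 1440p even Quality mode only shades roughly 44% of the output pixels, which is where the FPS uplift comes from.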

Does look great though.
 
A Plague Tale: Requiem asks for an RTX 3070 at 1080p 60fps
I think that is potentially without any ray tracing? Look at the additional notes:

[Image: A Plague Tale: Requiem system requirements notes]

That, and given it's Nvidia sponsored with all the RT effects - I think they would have put a 6900 XT in there instead, like other games have done when it came to RT requirements.
 
Remember this is a current-gen game, no PS4 etc. included, so their FPS target is either a constant 30 or 60 FPS. Given that, and given that RTGI alone adds a big overhead without DLSS enabled (which is what these specs refer to), it makes sense.

Those on a 3070 Ti and above will no doubt get 60fps+ with DLSS enabled at 1440p, I would hazard a guess, going by past experience with RTX games.

Also keep in mind that this game looks incredible - more like an Unreal Engine 5 game in many ways, even though it is using a proprietary engine.
 