
Ratchet and Clank: Rift Apart RDNA 2 Ray Tracing

Yeah, I touched on that in the same post.

Although this is true, it's (sadly) irrelevant in terms of devs making games.

For many, yes, and that's the sad part.
Look at Crysis 3, at the levels with grass (which had a physics simulation applied to it): it managed to show the power of multicore processors. Even the old FX processors were as good as the i7s of their time, crushing CPUs such as the i3s that beat them in "normal" scenarios.

https://gamegpu.com/action-/-fps-/-tps/crysis-3-2013-retro-test-gpu-cpu
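To make the point concrete, here's a minimal sketch (made-up names, not Crytek's actual code) of the kind of per-blade update that scales across cores, since each blade of grass can be simulated independently:

```cpp
// grass_sim.cpp - hypothetical sketch of a multithreaded grass update,
// illustrating how a per-blade physics pass scales with core count.
// Build (gcc): g++ -std=c++17 -O2 grass_sim.cpp -ltbb
#include <algorithm>
#include <cmath>
#include <execution>
#include <vector>

struct GrassBlade {
    float bend;      // current bend angle
    float velocity;  // angular velocity of the blade
};

// One simulation step: every blade reacts to wind and springs back.
// Blades are independent, so the loop is trivially parallel.
void simulate(std::vector<GrassBlade>& blades, float wind, float dt) {
    std::for_each(std::execution::par_unseq, blades.begin(), blades.end(),
                  [=](GrassBlade& b) {
                      const float stiffness = 4.0f;
                      const float damping   = 0.5f;
                      float accel = wind - stiffness * b.bend - damping * b.velocity;
                      b.velocity += accel * dt;
                      b.bend     += b.velocity * dt;
                  });
}

int main() {
    std::vector<GrassBlade> field(1'000'000, GrassBlade{0.0f, 0.0f});
    for (int frame = 0; frame < 600; ++frame)
        simulate(field, std::sin(frame * 0.016f), 0.016f);
}
```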

With that said, since the foundation of game development has stayed constant for (too) many years, it ends up hindering the development of games. Does this apply to the current gen (Xbox Series X and PS5)? To a smaller degree than the previous gen, yes. Theoretically speaking, of course.

Anyway, RT aside, I can't say I've seen games so far that actually push things further. If the same old gameplay mechanics and AI are still used, just with some shiny stuff on top, then yeah, even older machines could have done OK as they were - just upgrade the storage.

PS: https://www.youtube.com/watch?v=gT_45RFFTx8
You could do thousands of AI agents in real time on the ancient HD 4xxx series. More than a decade later and we still have only a few (stupid) NPCs here and there. Unity vs Valhalla is a joke when it comes to this.
 
16,384 AI agents (in that video) calculate their paths towards their objectives, avoiding each other, dynamically placed obstacles, and congestion, all while executing tasks. New goals make them update their paths while still avoiding obstacles of different sorts.
Seems good enough
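For scale, the standard way to handle crowds of that size (and my guess, not confirmed, at roughly what that demo is doing) is a shared flow field: one breadth-first pass from the goal over the navigation grid, then every agent just reads the stored direction, so per-frame cost barely grows with agent count. Local avoidance and congestion handling are layered on top and omitted here. A rough sketch:

```cpp
// flow_field.cpp - hypothetical sketch of flow-field pathfinding for large crowds.
// One BFS from the goal produces a "next step" grid that any number of agents share.
#include <array>
#include <cstdint>
#include <queue>
#include <utility>
#include <vector>

constexpr int W = 256, H = 256;

struct Cell { int16_t dx = 0, dy = 0; };   // direction toward the goal

// blocked[y*W+x] == true marks an obstacle; re-run this when obstacles change.
std::vector<Cell> buildFlowField(const std::vector<bool>& blocked, int goalX, int goalY) {
    std::vector<Cell> field(W * H);
    std::vector<bool> visited(W * H, false);
    std::queue<std::pair<int,int>> frontier;
    frontier.push({goalX, goalY});
    visited[goalY * W + goalX] = true;

    const std::array<std::pair<int,int>, 4> dirs{{{1,0},{-1,0},{0,1},{0,-1}}};
    while (!frontier.empty()) {
        auto [x, y] = frontier.front(); frontier.pop();
        for (auto [dx, dy] : dirs) {
            int nx = x + dx, ny = y + dy;
            if (nx < 0 || ny < 0 || nx >= W || ny >= H) continue;
            int idx = ny * W + nx;
            if (visited[idx] || blocked[idx]) continue;
            visited[idx] = true;
            field[idx] = {int16_t(-dx), int16_t(-dy)};  // point back toward the goal
            frontier.push({nx, ny});
        }
    }
    return field;
}

// Each agent per frame just reads one cell; 16k agents is a trivial loop
// (and maps naturally onto a GPU compute pass, one thread per agent).
struct Agent { int x, y; };
void step(std::vector<Agent>& agents, const std::vector<Cell>& field) {
    for (auto& a : agents) {
        const Cell& c = field[a.y * W + a.x];
        a.x += c.dx;
        a.y += c.dy;
    }
}
```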
 
I wouldn't take CP2077 as a neutral or even a somewhat OK example of RT for anything other than Nvidia cards. RT for that game was probably in development when RDNA2 was not even on paper (at least for us). Something like Metro EE would be an OK comparison for RDNA2 RT vs Ampere RT, even though it is an Nvidia-sponsored game.
About RDNA2 being slower for RT, yeah, that's true, and I am all for Nvidia cards being able to do more in RT titles, but right now we don't have many examples where it is being given a fair chance (whatever the reasons might be). The one thing I have noticed is that raster is not going away, and when the scene demands both RT and raster, AMD fares a little better than its lower RT performance would suggest. It's a situation like HairWorks or tessellation: sure, they look good, but pushing HairWorks to 64x or tessellation to max (you can test this in Unigine) does not improve IQ, it just puts Nvidia in a better light.

It's tricky because DXR is the standard you can fall back to in order to do "fair" ray tracing in these games, while specific support for things like RTX, which comes with some optimizations, is going to provide better performance/experience. AMD doesn't really have an equivalent of this yet, so fair testing in this sense would demand that game developers artificially limit Nvidia cards. The sporadic use of things like RTX, and whatever AMD's equivalent would be, just makes benchmarking in a meaningful way really hard; even if an engine/game isn't sponsored, that's just inherent to having different systems.
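For what it's worth, DXR itself is vendor-neutral at the API level: a game can query whether the installed hardware/driver exposes it at all without caring whose silicon is underneath. A minimal sketch of that capability check (standard D3D12, nothing vendor-specific, Windows-only):

```cpp
// dxr_check.cpp - sketch of the vendor-agnostic DXR capability query in D3D12.
// Any IHV (Nvidia, AMD, Intel) that implements DXR reports a raytracing tier here.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0,
                                 IID_PPV_ARGS(&device)))) {
        std::puts("No D3D12 device available.");
        return 1;
    }

    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                &options5, sizeof(options5));

    if (options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0)
        std::puts("DXR supported: hardware/driver exposes raytracing tier 1.0+.");
    else
        std::puts("DXR not supported: fall back to raster-only effects.");
}
```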

It also heavily depends on the workload, as you say, and the relative balance of raster/RT demand at any one time. To benchmark just the RT portion of the card you kind of need something that's RT-only, like Minecraft, and that's when you see a really raw RT test and big differences. Because those RT cores are fixed-function and there's a limited number of them, as you lower the RT load in games the benefit of having more RT power starts to go away, which is why you see smaller deltas between the cards in some games than in others. Ideally what I'd like to see is more fine-grained control over RT quality in games rather than just low/med/high, for example. Many of the RT effects can scale in a very granular way: how many samples you use for reflections, how many rays you cast per pixel for diffuse lighting. Simply exposing these to the user would allow hardware of different speeds to fully make use of the power, if you have it.
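As a concrete illustration of what "more granular than low/med/high" could look like, here's a hypothetical settings block where presets are just starting points and every knob can be scaled independently (all names made up, not from any particular engine):

```cpp
// rt_settings.cpp - hypothetical example of granular RT quality knobs,
// instead of a single low/med/high toggle. Field names are illustrative only.
#include <cstdio>

struct RayTracingSettings {
    float reflectionResolutionScale = 1.0f;  // render reflections at a fraction of screen res
    int   reflectionSamplesPerPixel = 1;     // samples for glossy/sharp reflections
    int   diffuseRaysPerPixel       = 1;     // rays for RT GI / diffuse lighting
    int   maxBounces                = 1;     // path length before falling back to raster probes
    float maxRayDistance            = 100.f; // metres; shorter rays = cheaper traversal
};

// A preset is just a starting point; the user (or a per-GPU autodetect)
// can still scale each field to match however much RT throughput the card has.
RayTracingSettings presetMedium() {
    return {0.75f, 1, 1, 2, 150.f};
}

int main() {
    RayTracingSettings s = presetMedium();
    s.diffuseRaysPerPixel = 4;           // a faster RT card cranks GI quality up...
    s.reflectionResolutionScale = 0.5f;  // ...while trading away reflection resolution
    std::printf("GI rays/pixel: %d, reflection scale: %.2f\n",
                s.diffuseRaysPerPixel, s.reflectionResolutionScale);
}
```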

We're probably going to have reviewers adapt their benchmarking approaches to this stuff to incorporate the idea that cards might be getting bottlenecked due to RT loads, in sort of the same way they'd consider CPU bottlenecking.
 
You could make the argument that not using DXR is limiting AMD cards, but I get the point that devs would want to work with what is easy for them. Nvidia's developer support is high quality, and they directly help devs implement their tech in games; AMD has to do something similar. Just a bit of info on how bad AMD still is at working with devs: I had a conversation with a dev for one of the most famous benchmarks, and they didn't have a 6900XT for testing even a month after launch and had only just acquired a 6800XT a few days earlier (at that time). I can't disclose more than that, but that's just a shame. You need to provide your hardware to dev teams in plenty, even if it's for free, so that they can optimize for your stuff.
I think Metro EE already allows decent control over its RT effects, for lighting at least. I am still waiting for a proper DXR game that showcases the max capabilities of both cards in a neutral and realistic way. That way we would know where AMD and Nvidia actually stand with regards to RT.
 
I have the game, and while I would say it isn’t the best game I’ve ever seen, it still looks and plays pretty amazingly for a £500 console; very fun game.

Shame it’s only 30 fps with ray tracing, though.
 