
AMD Navi 23 ‘NVIDIA Killer’ GPU Rumored to Support Hardware Ray Tracing, Coming Next Year

Status
Not open for further replies.
At the end of the day you will only notice low-res RT if it's on a static feature that is in view, such as a car interior in GT7, whereas in R&C you will only notice if you pause and pixel-peep, since it is a fluid, fast-moving game.

I find this whole thing a bit funny - it is like 60 vs 120+Hz all over again - once people get used to games which use a fully RT implementation of GI, etc., games that use older or hybrid techniques will look so dated in comparison.
 
At the end of the day you will only notice low-res RT if it's on a static feature that is in view, such as a car interior in GT7, whereas in R&C you will only notice if you pause and pixel-peep, since it is a fluid, fast-moving game.

Resolution maybe; missing objects from reflections, on the other hand... Either way I don't care since, like I said, I'm getting a PS5 day one just for the exclusives. I mean, I got the PS4 mostly for Bloodborne anyway.

But again, my posts weren't related to the PS5 as much as pointing out that RT performance for RDNA2 (at least the console chips) isn't even close to what some people have been saying (based on questionable sources). From all the gameplay shown so far, one can easily tell that even though the console chips will reach 2070/2070 Super performance in rasterisation, their RT will be a tier lower (2060 range), even with the optimisation and first-party studios (which have bigger budgets for a single game than any PC game in development - except maybe Star Citizen, but that doesn't really count IMO).
 
I find this whole thing a bit funny - it is like 60 vs 120+Hz all over again - once people get used to games which use a fully RT implementation of GI, etc., games that use older or hybrid techniques will look so dated in comparison.

So can you honestly say that, while playing a game, you can analyse the screen in real time and tell me which techniques were used? When playing a game I couldn't tell an SSR reflection from an RT one; pause and analyse, maybe.
 
So can you honestly say that, while playing a game, you can analyse the screen in real time and tell me which techniques were used? When playing a game I couldn't tell an SSR reflection from an RT one; pause and analyse, maybe.

Not in that manner - but when you have that level of implementation there is a perceptual difference to the whole scene, just in the way light is captured and the subconscious registering of how elements of the scene move in reflections relative to how they should. After playing around with custom maps (built for the RT feature) in Quake 2 RTX for a while, I find it very noticeable going to other games that might be far more advanced in terms of visual complexity but use older techniques for lighting, reflections, etc.
 
So can you honestly say that, while playing a game, you can analyse the screen in real time and tell me which techniques were used? When playing a game I couldn't tell an SSR reflection from an RT one; pause and analyse, maybe.

For fast-paced games, most competitive players drop graphical settings down anyway. All the extra clutter and effects actually get in the way of a stable high FPS, and obscure people's vision when firing.

For an open-world game, which is slower paced, I don't really care about 120 FPS, as 60 FPS is perfectly fine.
 
If anything they should be wishing the consoles are really powerful and produce nice-looking games, as that will force AMD/Nvidia to price their GPUs better.
Yes, you'd think that PCMR'ers would love it if consoles pushed the lowest common standard upwards, unless they are more concerned with how the price of entry into PC gaming defines them as 'elite'.
However, nice-looking doesn't mean good!
I am far more concerned about gameplay and AI. (Although multiplayer sort of side-steps the need for AI by having human players.)
I find it strange, in these days of most phone SoCs having AI and NPU parts (often taking up a large percentage of the die), that neither of the new consoles has anything similar.
Now, I know AMD doesn't have an NPU to offer as part of their semi-custom designs, but surely Sony or Microsoft could have licensed something.
Or at minimum have a decent framework for AI effects, properly multi-threaded, so the next Bethesda CRPG doesn't try to run all the AI on one thread again.
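To illustrate the multi-threading point, here's a minimal Python sketch of fanning per-NPC AI updates out across a thread pool instead of serialising everything on one thread. All the names here (NPC, think, update_ai) are made up for illustration; this is not any engine's actual API.

```python
from concurrent.futures import ThreadPoolExecutor

class NPC:
    """Toy NPC whose 'AI' is a trivial stand-in for real behaviour logic."""
    def __init__(self, name):
        self.name = name
        self.action = None

    def think(self, world):
        # Stand-in for pathfinding / behaviour-tree work; only reads
        # shared world state, so the updates are safe to run in parallel.
        self.action = "flee" if world["threat_level"] > 5 else "idle"
        return self.action

def update_ai(npcs, world, workers=4):
    # Fan the per-NPC updates out across a small thread pool.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda n: n.think(world), npcs))

npcs = [NPC(f"npc{i}") for i in range(8)]
actions = update_ai(npcs, {"threat_level": 7})
print(actions)  # every NPC decides to "flee" at this threat level
```

The point is only the structure: per-agent decisions that don't write shared state can be dispatched to workers instead of all queuing behind a single thread.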
 
Yes, you'd think that PCMR'ers would love it if consoles pushed the lowest common standard upwards, unless they are more concerned with how the price of entry into PC gaming defines them as 'elite'.
However, nice-looking doesn't mean good!
I am far more concerned about gameplay and AI. (Although multiplayer sort of side-steps the need for AI by having human players.)
I find it strange, in these days of most phone SoCs having AI and NPU parts (often taking up a large percentage of the die), that neither of the new consoles has anything similar.
Now, I know AMD doesn't have an NPU to offer as part of their semi-custom designs, but surely Sony or Microsoft could have licensed something.
Or at minimum have a decent framework for AI effects, properly multi-threaded, so the next Bethesda CRPG doesn't try to run all the AI on one thread again.

I agree - better NPC AI, and more realistic worlds which are not static, are far more immersive than pretty, static worlds. The next Bethesda game is Starfield, and it is running on the Creation Engine, so I might upgrade my CPU before my GPU. My modded Fallout 4 is massively CPU-limited. It's not only AI: another thread is hammered by rendering. That's why Boston Commons has a problem - the rendering threads and AI threads both get hammered.

Also, I think AMD is trying to make a general-purpose GPGPU uarch which can do a bit of everything. It will be much easier and cheaper for them to scale products up and down across multiple areas this way. Samsung is, after all, using RDNA2 in its SoCs.
 
@Calin ah, gotcha. I'll reserve judgement until the final game is released. The omissions could be part of early development, or could be an optimisation choice. Personally I think the visual quality of the reflections is good, and if the devs can maintain that quality by not rendering reflections of objects that disappear behind explosion effects for half a second then I'd say that was a good trade off.

That also only gives you a basic 'mirror-like' reflection, which can look artificial; overall it could be a noticeable difference, and it can also signal a lack of performance for doing more advanced stuff, like reflections, GI and shadows at the same time.

So can you honestly say that, while playing a game, you can analyse the screen in real time and tell me which techniques were used?

Screen-space reflections are easy to spot, as they disappear once the reflected object is at the edge of the screen or outside it. Sure, for limited, scripted scenarios you can bake them in, of sorts, just like global illumination. Once you move to dynamic time of day, objects and perspective, it becomes quite noticeable.
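To illustrate why SSR falls apart at the screen border: the ray march can only sample pixels that exist on screen, so engines typically fade the reflection out as the hit point nears the edge rather than cutting it hard. This is a toy sketch of that fade; the function and the margin value are illustrative assumptions, not any real engine's code.

```python
def edge_fade(u, v, margin=0.1):
    """Fade factor for an SSR hit at screen UV (u, v), both in [0, 1].

    Returns 1.0 well inside the screen, falling linearly to 0.0 at
    the border, so the missing off-screen data is hidden by a soft
    fade instead of a hard cut-off.
    """
    def axis(x):
        d = min(x, 1.0 - x)              # distance to the nearest border
        return max(0.0, min(1.0, d / margin))
    return axis(u) * axis(v)

print(edge_fade(0.5, 0.5))    # 1.0 - centre of the screen, full reflection
print(edge_fade(0.95, 0.5))   # 0.5 - halfway through the fade margin
print(edge_fade(1.0, 0.5))    # 0.0 - hit point on the border, nothing left
```

That fade is exactly the artefact people notice: reflections visibly dissolve as the reflected object slides toward the edge of the frame.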
 
So can you honestly say that, while playing a game, you can analyse the screen in real time and tell me which techniques were used? When playing a game I couldn't tell an SSR reflection from an RT one; pause and analyse, maybe.

Most console gamers wouldn't really notice any difference, but PC players who read into the different graphics settings soon see the pros and cons of each setting. With SSR, once you've seen the con it's really hard not to see it anymore - it's really horrible to look at.

The same goes for shadow settings in GTA 5 on PC: once you've seen the render bubble the player is in, you see the drawbacks of the game.
All around the player is a render circle: anything within it (shadows etc.) is rendered, while anything outside waits in a queue for the camera to get within render distance.
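To illustrate the render-bubble idea: a toy sketch that splits scene objects into a rendered set and a deferred set based on distance from the camera. Purely a hypothetical model of the trade-off, not GTA 5's actual implementation.

```python
def partition_by_distance(objects, camera, radius):
    """Split objects into (rendered, deferred) by distance to the camera.

    Objects inside the 'bubble' get full treatment (shadows etc.);
    everything else is deferred until the camera moves within range.
    """
    def dist2(p):
        # Squared distance avoids a sqrt per object.
        return sum((a - b) ** 2 for a, b in zip(p, camera))
    rendered = [o for o in objects if dist2(o["pos"]) <= radius ** 2]
    deferred = [o for o in objects if dist2(o["pos"]) > radius ** 2]
    return rendered, deferred

objs = [{"id": "tree", "pos": (10, 0, 0)},
        {"id": "car", "pos": (200, 0, 0)}]
near, far = partition_by_distance(objs, camera=(0, 0, 0), radius=50)
print([o["id"] for o in near], [o["id"] for o in far])  # ['tree'] ['car']
```

The pop-in people complain about is just the moment an object crosses from the deferred list into the rendered one.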

Sometimes you just wish you could switch on a game and enjoy it, lol.
This is why I really like ray tracing: it fixes some of the most annoying graphics rendering issues that I hate.
 
Things such as VRS and upscaling are being pushed by AMD and Nvidia too. Both are console-like tricks to reduce image quality and only fully render the stuff in your immediate FOV. So why shouldn't the same apply to things such as RT?
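To illustrate the VRS idea: a toy sketch that picks a coarser shading rate for screen tiles further from the point of focus. The tier names mirror real VRS rates (1x1, 2x2, 4x4), but the function and its thresholds are made-up tuning values for illustration only.

```python
def shading_rate(tile_center, focus, full=0.2, half=0.5):
    """Pick a shading-rate tier from a tile's distance to the focus point.

    Coordinates are in normalised screen units; `full` and `half` are
    hypothetical thresholds, not anything from a real driver.
    """
    d = ((tile_center[0] - focus[0]) ** 2 +
         (tile_center[1] - focus[1]) ** 2) ** 0.5
    if d < full:
        return "1x1"   # full shading rate near the focus
    if d < half:
        return "2x2"   # one shade per 2x2 pixels - a quarter of the work
    return "4x4"       # coarsest rate out in the periphery

print(shading_rate((0.5, 0.5), (0.5, 0.5)))  # 1x1 - dead centre
print(shading_rate((0.9, 0.9), (0.5, 0.5)))  # 4x4 - far corner
```

The same budget logic could in principle drive rays-per-pixel for RT, which is the point being made here.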
 
Things such as VRS and upscaling are being pushed by AMD and Nvidia too. Both are console-like tricks to reduce image quality and only fully render the stuff in your immediate FOV. So why shouldn't the same apply to things such as RT?

From what I understand, ray tracing needs to be running at all times for it to work; this is the reason for its high performance cost.
 
The same goes for shadow settings in GTA 5 on PC: once you've seen the render bubble the player is in, you see the drawbacks of the game.
All around the player is a render circle: anything within it (shadows etc.) is rendered, while anything outside waits in a queue for the camera to get within render distance.

This will not change with ray tracing. The limitation is the same: reduce the amount of rendering in order to hit a performance target. The rays don't cast infinitely.

This happens, for example, with RT shadows in SotTR:
https://www.gdcvault.com/play/1026163/
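To illustrate the "rays don't cast infinitely" point: a toy 1D sketch where a shadow ray is only traced up to a maximum distance (t_max), so occluders beyond the budget simply cast no ray-traced shadow - the same render-distance trade-off as before. Everything here is a hypothetical simplification, not SotTR's actual code.

```python
def shadow_ray_hit(origin, direction, occluders, t_max):
    """Return True if any occluder lies along the ray within t_max.

    1D toy model: each occluder is a point at some position, and t is
    the distance along the ray at which the occluder is reached.
    """
    for occ in occluders:
        t = (occ - origin) / direction   # distance along the ray to occ
        if 0.0 < t <= t_max:
            return True                  # blocked -> pixel is in shadow
    return False                         # nothing within budget -> lit

# Occluder 80 units away: its shadow exists only if the ray budget
# reaches that far. Shrink t_max and the shadow silently disappears.
print(shadow_ray_hit(0.0, 1.0, [80.0], t_max=100.0))  # True
print(shadow_ray_hit(0.0, 1.0, [80.0], t_max=50.0))   # False
```

Real APIs expose exactly this knob (a maximum ray extent), which is why capping it is a standard way to hit a frame-time target.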
 
So Turing's low-quality and half-assed RT being upscaled into a blurry mess with DLSS is fine, but Ratchet and Clank not rendering RT reflections for objects hidden behind explosions is a bad thing, and therefore a 36 CU APU in a £500 console is naturally rubbish and a joke compared to a £1,200 dGPU. You can only see the drop in image quality from DLSS in static screens, so it's fine when in motion, but also only seeing missing items in Ratchet and Clank in static screens is very bad.

Have I got this correct?

And people wonder why I get suspended for swearing at this ludicrous, pathetic idiocy...
 
So Turing's low-quality and half-assed RT being upscaled into a blurry mess with DLSS is fine, but Ratchet and Clank not rendering RT reflections for objects hidden behind explosions is a bad thing, and therefore a 36 CU APU in a £500 console is naturally rubbish and a joke compared to a £1,200 dGPU. You can only see the drop in image quality from DLSS in static screens, so it's fine when in motion, but also only seeing missing items in Ratchet and Clank in static screens is very bad.

Have I got this correct?

And people wonder why I get suspended for swearing at this ludicrous, pathetic idiocy...
Don’t let Grim get to you. Jensen really managed to do a Jedi mind trick on him like no other I have seen when it comes to RTX. When he is not too busy spreading the good word or defending it, he is out slating any other form of it by the competition :p:D
 
So Turing's low-quality and half-assed RT being upscaled into a blurry mess with DLSS is fine, but Ratchet and Clank not rendering RT reflections for objects hidden behind explosions is a bad thing, and therefore a 36 CU APU in a £500 console is naturally rubbish and a joke compared to a £1,200 dGPU. You can only see the drop in image quality from DLSS in static screens, so it's fine when in motion, but also only seeing missing items in Ratchet and Clank in static screens is very bad.

Have I got this correct?

And people wonder why I get suspended for swearing at this ludicrous, pathetic idiocy...

You can run RT without having DLSS on.
 
So Turing's low-quality and half-assed RT being upscaled into a blurry mess with DLSS is fine, but Ratchet and Clank not rendering RT reflections for objects hidden behind explosions is a bad thing, and therefore a 36 CU APU in a £500 console is naturally rubbish and a joke compared to a £1,200 dGPU. You can only see the drop in image quality from DLSS in static screens, so it's fine when in motion, but also only seeing missing items in Ratchet and Clank in static screens is very bad.

Have I got this correct?

And people wonder why I get suspended for swearing at this ludicrous, pathetic idiocy...

If the console is doing it that way, it sounds like the intelligent way to do it. If PCs don't do it that way, it sounds like a way to artificially increase RT load to sell more expensive GPUs, or like lazy developers.

I don't want my GPU rendering effects that are not in my FOV, as it's not giving PC owners any benefit apart from worse performance and the GPU being made to work harder.
 