
RTX performance overhead - 9.2ms (ouch!)

Remedy reveals early performance overhead for real-time ray tracing effects in Northlight Engine.

https://www.dsogaming.com/news/reme...ime-ray-tracing-effects-in-northlight-engine/
As Golem reported, contact and sun shadows – calculated with two rays per pixel and including noise rejection – require 2.3ms per frame, and the reflections require 4.4ms per frame. As for the global illumination denoising, it requires an additional 2.5ms. As such, we are basically looking at a 9.2ms performance overhead for the real-time ray tracing effects that Remedy will implement in – most likely – Control.

I hadn't noticed the breakdown on frame times before. This is with the flagship 2080Ti at 1080p.
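
For anyone who wants to sanity-check the numbers, a quick back-of-the-envelope in Python (my own sketch; it just assumes the per-pass figures add up and that the cost is purely additive):

# Per-pass ray tracing costs reported for the 2080 Ti at 1080p (ms per frame)
shadows = 2.3       # contact + sun shadows, two rays per pixel incl. noise rejection
reflections = 4.4
gi_denoise = 2.5    # global illumination denoising

rt_overhead = shadows + reflections + gi_denoise    # 9.2 ms total

# Effect on a game that would otherwise run at 60 FPS, if the cost is purely additive
base_frame = 1000 / 60                              # ~16.7 ms
print(f"RT overhead: {rt_overhead:.1f} ms")
print(f"60 FPS -> {1000 / (base_frame + rt_overhead):.0f} FPS")  # ~39 FPS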
 
Yeah - puts cards like the 2080 in an odd place, as only the Ti is remotely feasible for using the effects at a level that makes them worthwhile over more traditional techniques.

On the other hand, if they can get the effects up to a level which truly leverages the potential of the tech, 9.2ms is a small price to pay.
 
9.2ms isn't that bad, and I am sure that given time and resources it will get even quicker. New tech takes time.

This ^

Compared to using MSAA, which takes massive GPU resources, 9.2ms is quite small.

Having said that, the article in the link is very poor, as a 2070 and a 2080 Ti will not take the same time per frame; 9.2ms on the 2080 Ti is going to be nearer 18ms on a 2070.
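
Rough maths behind that estimate (my own sketch, assuming RT time scales linearly with RT core count, which is obviously a simplification):

# Published RT core counts: 68 on the 2080 Ti, 36 on the 2070
rt_cores = {"2080 Ti": 68, "2070": 36}

time_2080ti = 9.2  # ms, from the article
time_2070 = time_2080ti * rt_cores["2080 Ti"] / rt_cores["2070"]
print(f"Estimated 2070 RT time: {time_2070:.1f} ms")  # ~17.4 ms, i.e. 'nearer 18ms'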
 


Like with anything, people with slower cards have to turn down options or resolution. When DX9 pixel shaders became a thing, only the high-end cards could really power games properly, and even then it took a generation or two to properly realise the effects. The 2080 Ti is much like the ATI 9700 Pro: a very cool jump in tech, but only just enough.
 
9.2ms doesn't sound like a lot.

Vsync at 60Hz is like an extra 60-70ms and there are plenty of people that say they can't notice it.

Many in-game settings add much more than 9ms.

I could be thinking of the wrong thing here, though.
 
9.2ms is more than half the time available to produce a frame at 60FPS; it's not that bad if you play Minecraft :p

And remember, this is at 1080p on the flagship 2080Ti.

Having said that, the article in the link is very poor, as a 2070 and a 2080 Ti will not take the same time per frame; 9.2ms on the 2080 Ti is going to be nearer 18ms on a 2070.

DLSS is a separate 'issue'; 9.2ms per frame is required to provide the RTX version of ray tracing.

Vsync at 60Hz is like an extra 60-70ms and there are plenty of people that say they can't notice it.

We are talking about frame times; you are confusing this with input lag.
 
The global illumination, lighting and shadowing of modern games can take well over 10ms; RTX replaces all of that.
 
We are talking about frame times; you are confusing this with input lag.

Yeah, I did think after I wrote it that it's probably per frame rather than a delay.

So an extra 9ms per frame will make it choppy?
 
The global illumination, lighting and shadowing of modern games can take well over 10ms; RTX replaces all of that.

This...

I think a lot of people are going to be very pleasantly surprised once this is out in the wild and beginning to mature.
 
DLSS is a separate 'issue'; 9.2ms per frame is required to provide the RTX version of ray tracing.


Not a separate issue when the 2080 Ti has nearly twice the spec of a 2070 for the bits that do the ray tracing. :)
 
This just in guys! Ray tracing reduces frame rates!!!1!

Are they not just saying that the ray tracing effects in their game take an additional 9.2ms to render per frame, so instead of 60FPS you'll get about 40FPS... or am I missing something? I feel like I'm missing something.
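
The maths I'm doing in my head (assuming the 9.2ms just gets added on top of the existing frame time):

rt = 9.2  # ms added per frame, assuming the cost is purely additive
for fps in (60, 100, 144):
    new_fps = 1000 / (1000 / fps + rt)
    print(f"{fps} FPS -> {new_fps:.0f} FPS")
# 60 -> 39, 100 -> 52, 144 -> 62: the faster you were running, the bigger the relative hit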
 

You're not missing anything, but it's quite a bit more complex than that, which the article itself doesn't really explain.

- The 9.2ms is an approximate figure, arrived at by simply adding up rough times for the potentially applied aspects of raytracing: in this case, denoising, global illumination and reflections. Merely summing these averages is going to be wildly inaccurate, as any given game, or even a specific scene, may or may not use all of these techniques, and those that do will use them to varying degrees.

- The article makes no mention of the potential for decoupling the RT resolution from the rasterisation resolution. Simply put, there is nothing to prevent developers choosing to perform all raytracing calculations at 1080p, or even lower, while still rendering the game geometry and traditional rasterisation aspects at full native game resolution, including 4K if selected (see the rough sketch at the end of this post).

- If a game engine is harnessing raytracing, then it will no longer be required to apply baked-in shadows, faked global illumination or a whole host of other techniques that currently get employed in order to fake lighting and reflections. This will free up a significant amount of time each frame and offset at least some of the increased cost of raytracing.

- We are dealing with a single implementation, from a single developer, making the sample size we have to work with extremely small. Think about the performance disparity between similar games from different developers.

In a nutshell, what I'm saying is that the figures in the article aren't particularly useful, but they're interesting and, at least in my view, somewhat encouraging nevertheless.
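
To put the resolution-decoupling point in concrete terms, a rough sketch (entirely my own illustration; the quadratic scaling just assumes RT cost is proportional to the number of primary rays):

# Assumption: RT cost scales with the number of primary rays, i.e. with the RT
# resolution, while rasterisation stays at native resolution and is unaffected.
def rt_cost_ms(rt_scale, full_res_cost=9.2):
    # rt_scale is the RT resolution as a fraction of native, per axis
    return full_res_cost * rt_scale ** 2

for scale in (1.0, 0.75, 0.5):
    print(f"RT at {scale:.0%} of native: ~{rt_cost_ms(scale):.1f} ms")
# 100% -> 9.2 ms, 75% -> ~5.2 ms, 50% -> ~2.3 ms per frame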
 
Are they not just saying that the ray tracing effects in their game take an additional 9.2ms to render per frame, so instead of 60FPS you'll get about 40FPS... or am I missing something? I feel like I'm missing something.

No, you aren't missing anything. It's the same story, but with the added benefit of a single game dev giving out numbers on their engine's use of ray tracing. It's literally not even a story...

If a game engine is harnessing raytracing, then it will no longer be required to apply baked-in shadows, faked global illumination or a whole host of other techniques that currently get employed in order to fake lighting and reflections. This will free up a significant amount of time each frame and offset at least some of the increased cost of raytracing.

Think about it. If they are saying it adds 9.2ms, then obviously this would be without the other stuff you mentioned being there. Why would they put the RTX stuff on top of the other crap and then calculate the timings? Common sense goes a long way.
 
I presume that the 9.2ms is in addition to the render time for the rest of the frame?
 

The article states the following:

contact and sun shadows – calculated with two rays per pixel and including noise rejection – require 2.3ms per frame, and the reflections require 4.4ms per frame. As for the global illumination denoising, it requires an additional 2.5ms.

So yes, that's in addition to handling the geometry, post processing, and everything else, each frame. But it also means that a great number of traditional lighting techniques that currently take a significant amount of time will no longer need to take place, because RT completely replaces them.

So like I said, without really understanding what that translates to in terms of overall time to render per frame, the numbers aren't particularly useful.
 

Thanks for the info.

Yup, I understand that their 9.2ms figure is just a rough estimate, will vary a lot, etc.

I had no idea that a dev could choose to run the RT effects at a different res than everything else. How would they mesh the two together, and would there be a delay for that as well?

The article says an additional 9.2ms, so surely that means they tested with RT off and on, and with it on it takes 9.2ms longer on average to render each frame. With RT on, I'm assuming the traditional methods for GI, shadows, etc. aren't being used, so as we expected it's a more expensive way to produce these effects. Someone else mentioned earlier that traditional methods for lighting, shadows and GI take ~10ms and that RT replaces them; if that's true and RT only takes 9.2ms, then with RT on you'd get better frame rates, which obviously is not the case.
 
The article says an additional 9.2ms, so surely that means they tested with RT off and on...


No, they don't say it takes an additional 9.2ms, only that the RTX pipeline takes 9.2ms. Removing the shadowing and GI of the traditional pipeline might mean the RTX path is actually faster.

Rendering the RTX lighting at a lower resolution and blending it into the higher-resolution traditional rasterisation is trivial and essentially free at a primitive level, with a few artifacts, and even a clean solution is really cheap. This is done all the time: traditional global illumination and lighting is usually computed at a much lower resolution, and you simply interpolate when baking the lighting and shadow maps to the textures.
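
For illustration, a minimal sketch of that kind of upsampling (my own example using NumPy bilinear interpolation; nothing Remedy- or RTX-specific):

import numpy as np

def upsample_bilinear(buf, factor):
    """Upsample a low-res lighting buffer to native res by bilinear interpolation."""
    h, w = buf.shape
    ys = np.linspace(0, h - 1, h * factor)
    xs = np.linspace(0, w - 1, w * factor)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    fy = (ys - y0)[:, None]; fx = (xs - x0)[None, :]
    top = buf[np.ix_(y0, x0)] * (1 - fx) + buf[np.ix_(y0, x1)] * fx
    bot = buf[np.ix_(y1, x0)] * (1 - fx) + buf[np.ix_(y1, x1)] * fx
    return top * (1 - fy) + bot * fy

# Half-res ray traced lighting, upsampled to native res before blending with the raster pass
rt_half = np.random.rand(540, 960)          # pretend 960x540 RT lighting buffer
rt_native = upsample_bilinear(rt_half, 2)   # -> 1920x1080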
 
I had no idea that a dev could choose to run the RT effects at a different res than everything else. How would they mesh the two together, and would there be a delay for that as well?

It's a common technique currently; game engines often vary shadow resolution, etc. All it would really mean is less accurate raytracing; there would be no specific delay added to deal with the differing resolutions.

The DICE devs have already said that this is something they are intending to do in their Battlefield V RTX implementation, so perhaps we'll see a raytracing quality slider (low/medium/high) with the RT resolution being the differentiating factor.
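
If that happens, the slider might boil down to something as simple as this (an entirely hypothetical mapping, just to illustrate the idea):

# Hypothetical quality presets: RT resolution as a fraction of native, per axis
RT_QUALITY = {"low": 0.5, "medium": 0.75, "high": 1.0}

def rt_resolution(native_w, native_h, quality):
    s = RT_QUALITY[quality]
    return int(native_w * s), int(native_h * s)

print(rt_resolution(3840, 2160, "medium"))  # (2880, 1620) rays traced for a 4K frame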

The article says an additional 9.2ms, so surely that means they tested with RT off and on

That's not what I took from the article. It seemed to me that they were describing the time it takes to complete each individual aspect of the process as applied in their engine, and then summing them to a total of 9.2ms required to perform all raytracing for each frame.

This says nothing about what they are now able to leave out of the rendering pipeline, which is no longer required when using raytracing for lighting etc.

so as we expected it's a more expensive way to produce these effects

For all intents and purposes, yes.

Someone else mentioned earlier that traditional methods for lighting, shadows and GI take ~10ms and that RT replaces them; if that's true and RT only takes 9.2ms, then with RT on you'd get better frame rates, which obviously is not the case.

That's going to vary widely from game to game and implementation to implementation. Some games certainly have historically taken a huge amount of time per frame to bake in lighting and shadow maps etc. But the variation is way too wide to put any kind of meaningful figure on it.

Personally, given that developers have gotten better and better at optimising these 'fake' techniques, I would expect replacing them with raytracing to still carry a reasonably significant overhead. But that 9.2ms increase may actually result in something closer to a 4-5ms increase in total render time per frame once the 'fake' techniques become defunct. In fact, there's nothing to say that we couldn't see a slight performance increase in some very specific instances, especially if we are talking about an extremely poorly optimised engine.
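
Putting rough numbers on that guess (everything beyond the 9.2ms is my own assumption):

rt_added = 9.2      # ms, from the article
fake_removed = 4.5  # ms, a pure guess at what the defunct 'fake' techniques cost
net = rt_added - fake_removed
base = 1000 / 60
print(f"Net cost ~{net:.1f} ms: 60 FPS -> {1000 / (base + net):.0f} FPS")
# ~4.7 ms net -> ~47 FPS, vs ~39 FPS if the full 9.2 ms were purely additive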

But until we have games to play with, and until the developers really get to grips with it, all we can do is speculate.
 