AMD Navi 23 ‘NVIDIA Killer’ GPU Rumored to Support Hardware Ray Tracing, Coming Next Year

A voxel is a cell of the rasterized grid. Nvidia's GI works in the same way: rays are traced from the light source to calculate which voxels receive what light.
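To make that concrete, here is a minimal Python sketch of the voxel GI idea: rays are marched outward from a point light through a coarse voxel grid, and every voxel a ray passes through accumulates light. All names and numbers are illustrative, not any vendor's actual implementation.

```python
# Hypothetical sketch of the idea described above: march rays outward
# from a light source through a coarse voxel grid and accumulate light
# in every voxel a ray passes through. Illustrative only.
import numpy as np

GRID = 32                                  # 32^3 voxel grid over a unit cube
voxels = np.zeros((GRID, GRID, GRID))      # accumulated light per voxel

def march_ray(origin, direction, intensity, step=0.5 / GRID):
    """Step along a ray, depositing attenuated light into each voxel hit."""
    pos = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    for _ in range(int(2.0 / step)):       # enough steps to cross the cube
        pos = pos + d * step
        idx = (pos * GRID).astype(int)
        if np.any(idx < 0) or np.any(idx >= GRID):
            break                          # ray left the grid
        voxels[tuple(idx)] += intensity
        intensity *= 0.98                  # crude distance falloff

# Trace a few hundred random rays from a point light at the cube centre.
rng = np.random.default_rng(0)
for _ in range(500):
    march_ray((0.5, 0.5, 0.5), rng.normal(size=3), intensity=1.0)

print("lit voxels:", np.count_nonzero(voxels))
```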

No it doesn't. When the 5700s were released they were the same price as the 2070s, so where was the extra cost on the 2070s for the extra RT hardware? When a card without any extra RT hardware was the same price, I'd say the 2070s gave you it for free, based on that.

Granted, if AMD release competitive GPUs then Nvidia will have to absorb that cost, which they did in this case by moving the larger 545mm² TU104 (RTX 2080) down to what was the 445mm² TU106 (RTX 2070) price bracket in response to Navi.
 
No it doesn't. When the 5700s were released they were the same price as the 2070s, so based on that, I'd say you were getting the extra RT hardware for free. :p

What games are there with ray tracing that run on a 5700 in real time at the same, or any, fidelity? Nvidia never said you "NEED" dedicated hardware to do ray tracing, but if you want to do it in real time at anything like offline RT quality then you need it.

Like how you can do most 3D rendering on a CPU if you really want to, but for anywhere near decent performance with the same feature set you need a GPU to help.
 
What games are there with ray tracing that run on a 5700 in real time at the same, or any, fidelity? Nvidia never said you "NEED" dedicated hardware to do ray tracing, but if you want to do it in real time at anything like offline RT quality then you need it.

Like how you can do most 3D rendering on a CPU if you really want to, but for anywhere near decent performance with the same feature set you need a GPU to help.

I can make one, right now....
 
If you had two identical GPUs both with 100 cores:

GPU 1 is configured with 80 cores for rasterisation and 20 cores for raytracing. They cannot cross over.

GPU2 is configured so that when RT is on, 20 of those cores are used, leaving 80 for rasterisation. But when RT is off, you have the full 100 available for rasterisation.

For otherwise identical GPUs, performance when RT is on should be the same. But when RT is off, GPU2 has more cores available and should be faster.

Hypothetically, this is correct. But we're comparing different hardware so it won't be. All that matters is what the performance is like with RT on and off, and the price. How they do it doesn't really matter.
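As a toy illustration of the fixed-versus-flexible partitioning being discussed, here is a short Python model using the same made-up 100-core numbers from the example above:

```python
# Toy model of the two hypothetical GPUs above: same 100 cores in total,
# one with a fixed 80/20 split, one that only borrows cores while RT is on.
# The numbers come straight from the example and are illustrative only.
TOTAL_CORES = 100
RT_CORES_NEEDED = 20

def raster_cores(rt_on, fixed_split):
    """Cores left for rasterisation under each partitioning scheme."""
    if fixed_split:
        return TOTAL_CORES - RT_CORES_NEEDED  # GPU 1: 20 cores never rasterise
    return TOTAL_CORES - RT_CORES_NEEDED if rt_on else TOTAL_CORES  # GPU 2

for rt_on in (True, False):
    print(f"RT {'on ' if rt_on else 'off'}: "
          f"GPU1 = {raster_cores(rt_on, True):3d} raster cores, "
          f"GPU2 = {raster_cores(rt_on, False):3d} raster cores")
# RT on : both have 80 raster cores -> same performance
# RT off: GPU2 keeps all 100        -> GPU2 should be faster
```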

The problem is that dedicated hardware will be, or at least should be, faster than "generic" hardware.

So if AMD does 100fps with RT off and 30-40fps with it on, then Nvidia will do 90fps with RT off and 50-60fps with it on, plus Nvidia has DLSS, which can increase the performance of BOTH RT and non-RT games if used!
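Treating those made-up figures as inputs, the back-of-the-envelope comparison looks like this; the DLSS-style gain factor is a pure assumption, not a measured number:

```python
# Back-of-the-envelope version of the FPS claim above. Every figure,
# including the DLSS-style gain factor, is a made-up placeholder.
amd    = {"rt_off": 100, "rt_on": 35}   # midpoint of the 30-40fps guess
nvidia = {"rt_off": 90,  "rt_on": 55}   # midpoint of the 50-60fps guess
DLSS_GAIN = 1.4                         # assumed upscaling multiplier

for mode in ("rt_off", "rt_on"):
    print(f"{mode}: AMD {amd[mode]}fps, Nvidia {nvidia[mode]}fps "
          f"(~{nvidia[mode] * DLSS_GAIN:.0f}fps with upscaling)")
```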

Of course, in real life it will also depend on how busy you can keep all those cores even in non-RT workloads. As we all know, AMD had some problems keeping the cards well fed (shader occupancy), which means that although you have shaders dedicated to general compute, you still can't use them properly for rasterisation even in the absence of RT. Yes, that gets fixed with RDNA1, and with RDNA2 there is no more GCN (or so they say), but it remains to be seen how well this actually gets implemented. For instance, in the above example you can do screen space reflections very well, as everything is in camera view... Plus, just like in Control, some reflections are overdone just so it's shiny.
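A rough sketch of that occupancy point: shaders you can't keep fed contribute nothing, so effective throughput is closer to cores times occupancy. Both occupancy figures below are invented for illustration.

```python
# Rough sketch of the occupancy point: shaders you cannot keep fed do not
# contribute, so effective throughput is roughly cores x occupancy.
# Both occupancy figures are invented for illustration.
def effective_throughput(cores, occupancy):
    return cores * occupancy

print(effective_throughput(100, 0.75))  # 75.0: more cores, poorly fed
print(effective_throughput(90, 0.95))   # 85.5: fewer cores, well fed, wins
```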
 
Yes I do.

You obviously don't understand, or else you wouldn't have posted this.

But the Nvidia GPU will only have 80% of its cores for normal rendering full stop, because 20% are dedicated to RT?

So isn't it the same?

Nvidia's solution has dedicated RT cores. You don't seem to get this. They aren't the same as the normal cores, they are specialised for Ray Tracing. They handle the type of calculations needed for Ray Tracing much faster than normal cores.

To put it another way. *the numbers used in the examples below aren't real, they are only highlighting the difference*

Nvidia's solution for ordinary games - 100 normal cores.
Nvidia's solution for Ray Traced games - 100 normal cores + 20 specialised RT cores.

AMD's solution for normal games - 100 normal cores.
AMD's solution for Ray Traced games - 80 normal cores + 20 normal cores that are used for Ray Tracing acceleration.

Which is why I am having concerns over the Ray Tracing performance of the cards from AMD.
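A hypothetical way to model the asymmetry described above: assume each dedicated RT core handles RT work some multiple faster than a general core. The 4x factor and the core counts are invented purely to highlight the difference.

```python
# Toy model of the asymmetry above. Assume a dedicated RT core handles RT
# work several times faster than a general core; the 4x factor and all
# core counts are invented purely to highlight the difference.
RT_SPEEDUP = 4.0

# Dedicated approach: 100 normal cores + 20 specialised RT cores.
dedicated_raster = 100
dedicated_rt = 20 * RT_SPEEDUP      # 80 "units" of RT throughput

# Shared approach: 80 normal cores raster, 20 normal cores do the RT.
shared_raster = 80
shared_rt = 20 * 1.0                # 20 "units" of RT throughput

print(f"raster: {dedicated_raster} vs {shared_raster}")
print(f"RT:     {dedicated_rt:.0f} vs {shared_rt:.0f}")
```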
 
You obviously don't understand, or else you wouldn't have posted this.



Nvidia's solution has dedicated RT cores. You don't seem to get this. They aren't the same as the normal cores, they are specialised for Ray Tracing. They handle the type of calculations needed for Ray Tracing much faster than normal cores.

To put it another way. *the numbers used in the examples below aren't real, they are only highlighting the difference*

Nvidia's solution for ordinary games - 100 normal cores.
Nvidia's solution for Ray Traced games - 100 normal cores + 20 specialised RT cores.

AMD's solution for normal games - 100 normal cores.
AMD's solution for Ray Traced games - 80 normal cores + 20 normal cores that are used for Ray Tracing acceleration.

Which is why I am having concerns over the Ray Tracing performance of the cards from AMD.

Dan does understand, mate. In your example you're giving Nvidia 20 more cores!! Also, with the fixed number of RT cores, how often do the rest of the cores work at 60% whilst the RT cores are at 100% and can't keep up?

A dynamic solution where normal cores could switch to RT when needed could create a GPU where 100% could be utilised the whole time, RT on or off.

Who knows what changes AMD have made to their cores to make them more efficient at ray tracing? We'll just have to wait and see.
 
Make one that runs on the 5700XT that supports DX12 Ultimate and DXR 1.1

DX12 Ultimate isn't out yet, so why does that even matter? DX12 Ultimate / DXR will be GPU-agnostic ray tracing at the API level: instead of each engine or game developer having a proprietary version, it's a standard DX API feature. It's still all the same technology.

Something I knocked up in a couple of hours a few weeks ago. This is Global Illumination; I over-saturated it to make it more obvious....

Cryengine 5.6. This is the pre-Neon Noir demo, using the same technology that has existed since Cryengine 3.7. Neon Noir is dedicated work to refine it to compete with the now-marketed RTX; that engine version is 5.7, already overdue for release. With DX12 Ultimate it seems pointless now, TBH.

Should be 1440p once YouTube is done encoding it.


 
Dan does understand, mate. In your example you're giving Nvidia 20 more cores!! Also, with the fixed number of RT cores, how often do the rest of the cores work at 60% whilst the RT cores are at 100% and can't keep up?

A dynamic solution where normal cores could switch to RT when needed could create a GPU where 100% could be utilised the whole time, RT on or off.

I am giving them 20 more cores in my example, because that's what they have in reality. The 2070 Super has 2560 shader cores, 320 Tensor cores and 40 RT cores.

The bottleneck is the other way around at the moment. Because Ray Tracing is at a Hybrid stage, it's heavily reliant on Raster performance.

So, if AMD's solution is using normal cores to do Ray Tracing, that's going to further reduce the Raster performance which is needed to improve RT performance.

Do you see the issue yet with using the same cores for both?

But maybe AMD's solution for their desktop cards is different from what is being used on the consoles.
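One way to picture that hybrid bottleneck is a toy frame-time model where the raster pass and the RT pass share one pool of cores and run back to back, so borrowing cores for RT also lengthens the raster portion. Every constant below is invented.

```python
# Toy frame-time model of the hybrid bottleneck: raster and RT passes
# share one pool of cores and run back to back, so borrowing cores for
# RT also lengthens the raster portion. All constants are invented.
RASTER_WORK = 100.0   # arbitrary units of raster work per frame
RT_WORK = 30.0        # arbitrary units of RT work per frame

def frame_time(total_cores, cores_on_rt):
    raster_time = RASTER_WORK / (total_cores - cores_on_rt)
    rt_time = RT_WORK / max(cores_on_rt, 1)
    return raster_time + rt_time

for borrowed in (10, 20, 40):
    print(f"{borrowed} cores on RT -> frame time {frame_time(100, borrowed):.2f}")
```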
 
I am giving them 20 more cores in my example, because that's what they have in reality. The 2070 Super has 2560 shader cores, 320 Tensor cores and 40 RT cores.

The bottleneck is the other way around at the moment. Because Ray Tracing is at a Hybrid stage, it's heavily reliant on Raster performance.

So, if AMD's solution is using normal cores to do Ray Tracing, that's going to further reduce the Raster performance which is needed to improve RT performance.

Do you see the issue yet with using the same cores for both?

But maybe AMD's solution for their desktop cards is different from what is being used on the consoles.

So RT tanks FPS in, say, BFV because raster performance is bad, not because there aren't enough RT cores. Right?
 
@melmac a 5700 XT has 2560 cores as well, and no dedicated RT cores (according to a Google search). And it's slower than the 2070 Super in RT, as would be expected, because it has fewer cores in total. It's also slower in raster, as Nvidia's solution is just better overall (and more expensive).

But what we are saying here is: what if AMD's future GPU has the same number of cores IN TOTAL as Nvidia's equivalent GPU? Let's say they BOTH had a total of 3000 cores, but Nvidia's solution dedicates 80 of those to RT and AMD's doesn't. In that scenario, AMD might be faster in raster because they can dedicate all cores to raster, whereas Nvidia cannot.

If you don't normalise for core count, you are comparing apples to oranges. If all of Nvidia's future GPUs have more cores than AMD's, then yes, they would be expected to be faster.


But none of this really matters because what we care about is:
* performance RT on
* performance RT off
* power use
* price.

It doesn't matter how they achieve it.
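A tiny sketch of that "normalise for core count" point: compare per-core performance rather than raw figures. All numbers are placeholders.

```python
# Sketch of the "normalise for core count" point: compare per-core
# performance rather than raw figures. All numbers are placeholders.
def per_core(fps, cores):
    return fps / cores

gpu_a = {"fps": 120, "cores": 3000}   # hypothetical part with more cores
gpu_b = {"fps": 110, "cores": 2560}   # hypothetical part with fewer cores

print(f"A: {per_core(**gpu_a):.4f} fps/core")   # 0.0400
print(f"B: {per_core(**gpu_b):.4f} fps/core")   # 0.0430: better per core, slower overall
```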
 
Unless I'm wrong, at present we have no idea how AMD are going to do the "hardware accelerated ray tracing". Or have they told us and I've just missed it?
 
DX12 Ultimate isn't out yet, so why does that even matter? DX12 Ultimate / DXR will be GPU-agnostic ray tracing at the API level: instead of each engine or game developer having a proprietary version, it's a standard DX API feature. It's still all the same technology.

The DXR API out at the moment is GPU-agnostic. DXR 1.1 adds support for the software fallback layer, which DXR 1.0 doesn't support.

Just like with DXR 1.0, it will be up to Nvidia/AMD to provide the APIs/hardware to do hardware-accelerated ray tracing on their GPUs.
 
@melmac a 5700 XT has 2560 cores as well, and no dedicated RT cores (according to a Google search). And it's slower than the 2070 Super in RT, as would be expected, because it has fewer cores in total. It's also slower in raster, as Nvidia's solution is just better overall (and more expensive).

But what we are saying here is: what if AMD's future GPU has the same number of cores IN TOTAL as Nvidia's equivalent GPU? Let's say they BOTH had a total of 3000 cores, but Nvidia's solution dedicates 80 of those to RT and AMD's doesn't. In that scenario, AMD might be faster in raster because they can dedicate all cores to raster, whereas Nvidia cannot.

If you don't normalise for core count, you are comparing apples to oranges. If all of Nvidia's future GPUs have more cores than AMD's, then yes, they would be expected to be faster.

The RT hardware is hanging off the regular SMs. Sure, in theory, without it you could maybe add on a few more cores, but it isn't like they've replaced regular cores with RT cores, and removing the RT hardware wouldn't free up enough space for many extra normal cores (the actual physical space it takes up is more like a third of what it looks like from the diagrams).
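Some rough arithmetic behind that die-area point; the area figures below are loose assumptions, not measured values.

```python
# Rough arithmetic behind the die-area point: if RT units occupy only a
# small slice of the die, removing them buys few extra shader cores.
# The fractions and per-core area are loose assumptions, not measurements.
die_area_mm2 = 445.0        # a TU106-class die, from the figures upthread
rt_area_fraction = 0.03     # assume RT units are ~3% of total die area
area_per_core_mm2 = 0.15    # assumed area cost of one shader core

freed = die_area_mm2 * rt_area_fraction
print(f"freed {freed:.1f} mm^2 -> roughly {int(freed / area_per_core_mm2)} extra cores")
```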
 
Which is why I am having concerns over the Ray Tracing performance of the cards from AMD.

What we'll likely see in practice is developers picking and choosing scenes more selectively for RT effects, i.e. where there is spare capacity. The twitch gamers seeking refresh rates faster than an MP's false expenses denial will just leave RT off altogether, so it's a fair trade for image quality if done right. What's interesting in this scenario is trying to calculate any framerate advantage Nvidia might gain, and whether their hardware will even be allowed by the developers to display RT effects in certain scenes, or whether it will be a driver thing. I don't know is the honest answer, but it's sure going to make comparisons between the two tougher, i.e. the true loss or gain in performance where it matters, instead of some average figures.
 
Unless I'm wrong, at present we have no idea how AMD are going to do the "hardware accelerated ray tracing". Or have they told us and I've just missed it?

From a post further up it seems like the console RT solution uses the normal cores for both Ray Tracing and Normal rendering. When you turn off Ray Tracing it uses all the cores for normal gaming.

There doesn't seem to be any dedicated RT hardware in use.

So, can we assume that AMD's desktop solution will be the same? The discussion stemmed from me wondering about the Ray Tracing performance if that is the case.
 
Unless I'm wrong, at present we have no idea how AMD are going to do the "hardware accelerated ray tracing". Or have they told us and I've just missed it?


Frankly, I don't think that's what they are doing. There is no "dedicated Ray Tracing hardware"; they are simply using Microsoft's RT API, which doesn't require dedicated Ray Tracing hardware.

The DXR API out at the moment is GPU-agnostic. DXR 1.1 adds support for the software fallback layer, which DXR 1.0 doesn't support.

Just like with DXR 1.0, it will be up to Nvidia/AMD to provide the APIs/hardware to do hardware-accelerated ray tracing on their GPUs.

Right...
 
From a post further up it seems like the console RT solution uses the normal cores for both Ray Tracing and Normal rendering. When you turn off Ray Tracing it uses all the cores for normal gaming.

There doesn't seem to be any dedicated RT hardware in use.

So, can we assume that AMD's desktop solution will be the same? The discussion stemmed from me wondering about the Ray Tracing performance if that is the case.

Feels to me like AMD is trying to make the story about a subset of features in a game using RT, rather than what Nvidia is trying to do in moving the industry towards a wholesale RT implementation.
 
Feels to me like AMD is trying to make the story about a subset of features in a game using RT, rather than what Nvidia is trying to do in moving the industry towards a wholesale RT implementation.

AMD are going with Microsoft's API, like the rest of the industry will, including Nvidia.
 
What we'll likely see in practice is developers picking and choosing scenes more selectively for RT effects, i.e. where there is spare capacity. The twitch gamers seeking refresh rates faster than an MP's false expenses denial will just leave RT off altogether, so it's a fair trade for image quality if done right. What's interesting in this scenario is trying to calculate any framerate advantage Nvidia might gain, and whether their hardware will even be allowed by the developers to display RT effects in certain scenes, or whether it will be a driver thing. I don't know is the honest answer, but it's sure going to make comparisons between the two tougher, i.e. the true loss or gain in performance where it matters, instead of some average figures.

If games are using DXR or Vulkan then both will be using the same effects. It will be just like normal games at the moment: for example, if you have a card that can't run high levels of AA then you have to turn it down. So if AMD's or Nvidia's card is slower at ray tracing they will have to turn it down or off. There isn't one ray tracing for AMD and one for Nvidia; it's DXR or Vulkan.
 