
AMD Navi 23 ‘NVIDIA Killer’ GPU Rumored to Support Hardware Ray Tracing, Coming Next Year

Status
Not open for further replies.
Please explain what you mean by this.

There is some kind of fixed function unit to handle BVH calculations. I believe @Calin Banc posted a video earlier confirming that there was hardware involved.

The hybrid approach (fixed-function acceleration for a single node of the BVH tree) and the use of a shader unit to schedule processing solve the problems that [note: current raytracing solutions] have, whether hardware-based and/or software-only, while maintaining flexibility, because the shader unit can continue to control the overall computation, bypassing the fixed-function hardware as needed while still taking advantage of its performance. Using the texture processor infrastructure also eliminates the large buffers for ray storage and BVH caching that are typical in a hardware raytracing solution, because the existing VGPRs and the texture cache can be used in their place, which saves a lot of area and complexity in the hardware solution.

That combined with the video strongly suggests a hardware approach.
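The scheme in that patent quote can be sketched roughly as follows, purely as an illustration. This is a toy 1-D "BVH" in Python; every name here is mine, not the patent's. The point is the division of labour: the shader-side loop owns the traversal stack and scheduling, while only the per-node test is delegated to a stand-in for the fixed-function unit, which the shader is free to bypass.

```python
from dataclasses import dataclass, field

# Toy sketch of the hybrid traversal described in the patent quote.
# The "shader" loop controls scheduling and keeps the stack in plain
# variables (standing in for VGPRs); the fixed-function unit is just a
# function that tests the children of one BVH node. All names are
# illustrative, not taken from the patent.

@dataclass
class Node:
    lo: float = 0.0            # 1-D interval stands in for an AABB
    hi: float = 0.0
    children: list = field(default_factory=list)
    leaf_value: object = None  # non-None marks a leaf "triangle"

def fixed_function_intersect(ray_t, node):
    # Stand-in for the texture-processor intersection unit:
    # tests one node and returns the children the "ray" hits.
    return [c for c in node.children if c.lo <= ray_t <= c.hi]

def shader_traverse(ray_t, root):
    stack, hits = [root], []   # no dedicated ray buffers needed
    while stack:
        node = stack.pop()
        if node.leaf_value is not None:
            hits.append(node.leaf_value)
        else:
            # The shader decides per node: call the fixed-function
            # unit, or bypass it and intersect in software instead.
            stack.extend(fixed_function_intersect(ray_t, node))
    return hits
```

Because the loop itself is ordinary shader code, nothing stops it from skipping `fixed_function_intersect` for some nodes, which is the flexibility the patent text emphasises.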
 
The performance of this solution is purely speculation on your part, presented as fact. Nvidia's solution runs at 1:2 if not 1:1 resolution, but the performance hit is 40%. We can't know yet, but there are apparent slides showing that ray tracing on RDNA 2 GPUs has a performance hit of ~10%.

There is no reason why the resolution must be 1:1 all the time, or at all, unless you want perfectly mirrored reflections, which in my view have looked completely ridiculous in these RTX games.
Exactly.
Can you imagine the debate of someone blowing up pictures of a game trying to prove that this ray tracing is better than on console? :p
 
The performance of this solution is purely speculation on your part, presented as fact. Nvidia's solution runs at 1:2 if not 1:1 resolution, but the performance hit is 40%. We can't know yet, but there are apparent slides showing that ray tracing on RDNA 2 GPUs has a performance hit of ~10%.

There is no reason why the resolution must be 1:1 all the time, or at all, unless you want perfectly mirrored reflections, which in my view have looked completely ridiculous in these RTX games.

I agree with you that we don't know the performance of AMD's solution yet, but it just seems like it won't have the same performance as Nvidia's. We shall wait in hope!!

They aren't RTX games though, they are DXR games. And have you seen AMD's Ray Tracing demos lately?? Talk about reflections!!
 
That combined with the video strongly suggests a hardware approach.

It is a neat solution for repurposing the existing design trajectory, but performance-wise it will ultimately fall far short of dedicated hardware, making the space and complexity savings somewhat moot IMO. Potentially you are looking at a 2.5x performance advantage at the top end for a more dedicated hardware solution over the approach AMD is going for, versus a fairly small amount of extra die space, relatively speaking.
 
There is some kind of fixed function unit to handle BVH calculations. I believe @Calin Banc posted a video earlier confirming that there was hardware involved.
That combined with the video strongly suggests a hardware approach.
You're backing up your baseless statement by repeating the same baseless statement in a different way.

I have explained how I think it works; that explanation comes from how the PS5 guy explained it. It's an instruction set added to the existing shaders, and that is not 'extra hardware'.

Exactly.
Can you imagine the debate of someone blowing up pictures of a game trying to prove that this ray tracing is better than on console? :p

Oh, that's coming. The internet is going to be flooded with people holding magnifying glasses to their screens screaming "see... Nvidia better", though I suspect they will be doing that with old RTX games at 42 FPS, because IMO Nvidia will be using the same reduced resolution in DXR that AMD will.
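For what it's worth, "reduced resolution" ray tracing in this sense just means tracing a fraction of the rays and reconstructing the rest. A toy NumPy sketch, with a dummy trace function standing in for the expensive per-pixel work (all names are mine, and real engines use far smarter reconstruction than nearest-neighbour):

```python
import numpy as np

def trace_reflections(height, width):
    # Dummy stand-in for an expensive per-pixel reflection trace;
    # returns a simple gradient so the result is easy to inspect.
    y, x = np.mgrid[0:height, 0:width]
    return (x + y).astype(float)

def half_res_reflections(height, width):
    # Trace at half resolution in each axis (a quarter of the rays),
    # then upscale with nearest-neighbour to full resolution.
    low = trace_reflections(height // 2, width // 2)
    return np.repeat(np.repeat(low, 2, axis=0), 2, axis=1)
```

Tracing a quarter of the rays is roughly where much of the performance headroom comes from, at the cost of blockier reflections unless the upscale is cleverer than this.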
 
AMD's solution allows for a hybrid approach for more optimal results but much of the work they are doing hardware wise is opening up the ability to repurpose existing hardware functionality for ray tracing purposes without the bottlenecks you have in Pascal, etc. but ultimately it falls short of the efficiency of dedicated RT hardware by some margin.

Well, if Nvidia move the denoising to the tensor cores like it was supposed to be on Turing, then that's a lot of performance freed up.

I don't expect AMD's solution to be on par with Nvidia's right out of the gate. Nvidia have that head start, but I am hoping that it has decent performance.
 
Well, if Nvidia move the denoising to the tensor cores like it was supposed to be on Turing, then that's a lot of performance freed up.

I don't expect AMD's solution to be on par with Nvidia's right out of the gate. Nvidia have that head start, but I am hoping that it has decent performance.

Tensor cores are a bit of a complicated story from what I can see - it seems difficult to utilise them with a gaming workload unless you hit a certain threshold otherwise it is faster to do things via general compute.

EDIT: There was originally talk of using them in a clever way as part of the BVH process, but that seems to have been abandoned. I don't really understand it, so I'm probably completely wrong, but I'm assuming it was because they could only be utilised where the output was valid for the next frame, making for very inconsistent performance.
 
Last edited:
I didn't think you could, as newer cards have new features which wouldn't appear on the older panel, would they?

You are right, you don't get control over new features from the older control panel, but you can control the features that are exposed. The features that aren't exposed still work, you just can't change them from the CP. I've done this myself back when I wanted to run without the CP: installed the driver through Device Manager, then manually installed the CP through its own installer in the driver package, set whatever settings I wanted, and then removed the CP.

I don't mind the new Radeon Settings though, I just disable the tray icon, disable the overlay and other small things from the General tab and everything is fine. It just takes a little time to figure out the silly layout.
 
You're backing up your baseless statement by repeating the same baseless statement in a different way.

I have explained how I think it works; that explanation comes from how the PS5 guy explained it. It's an instruction set added to the existing shaders, and that is not 'extra hardware'.

It's written there in plain, simple English.

Here I will quote the relevant bit.

bypassing the fixed functional hardware as needed

You explained how you think it works, yet you say my assumptions are baseless, even though I am basing mine on both the video explanation and the patent? Surely yours are just as baseless? When Calin Banc posted the video and said it had dedicated hardware, you agreed and said you were probably wrong. But now that I say it has dedicated hardware, you say I am wrong?

Because going off the patent, it's pretty clear there is hardware involved. And that's how I think it works. I guess we shall find out.
 
Tensor cores are a bit of a complicated story from what I can see - it seems difficult to utilise them with a gaming workload unless you hit a certain threshold otherwise it is faster to do things via general compute.

EDIT: Was originally talk of using them in a clever way as part of the BVH process but that seems to have been abandoned - I don't really understand it so probably completely wrong but I'm assuming because it meant they could only be utilised where the output was valid for the next frame making for very inconsistent performance.

I read somewhere that temporal denoising didn't give very good results and that's why developers didn't use it.

Maybe it's like DLSS, where the training has taken a lot longer than expected, and it will be released with better image quality as tensor core denoising 2.
 
Because going off the patent, it's pretty clear there is hardware involved. And that's how I think it works. I guess we shall find out.

From my understanding of it, they are basically adding hardware to remove the limitations (i.e. communication with other sub-systems) and throughput restrictions that prevent the general shader architecture from being good at ray tracing. But ultimately there are limits to how good the shader architecture is at ray tracing, even when "unleashed", and even in situations where you can achieve low- or zero-penalty concurrency with the rest of the game's rendering work.

I read somewhere that temporal denoising didn't give very good results and that's why developers didn't use it.

Maybe it's like DLSS, where the training has taken a lot longer than expected, and it will be released with better image quality as tensor core denoising 2.

The current approaches are spatiotemporal. I think the limitations with the tensor cores went beyond that; possibly, utilising them effectively meant too high a temporal component for good results.
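As a rough illustration of why a heavy temporal component is a problem, a spatiotemporal denoiser of this kind boils down to blending each noisy frame with a history buffer (temporal) and then applying a small blur (spatial). This is a toy 1-D sketch, not anyone's actual denoiser, and all names are mine:

```python
import numpy as np

def spatiotemporal_denoise(noisy_frame, history, alpha=0.2, radius=1):
    # Temporal part: exponential blend with the previous output.
    # Small alpha = heavy temporal reuse, which lags and ghosts
    # whenever the scene changes quickly.
    temporal = alpha * noisy_frame + (1.0 - alpha) * history
    # Spatial part: a simple box blur over neighbouring samples.
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    spatial = np.convolve(temporal, kernel, mode="same")
    return spatial, temporal  # filtered output, new history buffer
```

The smaller `alpha` is, the more frames of history dominate the output, which is exactly the "only valid if the output carries over to the next frame" tension mentioned above.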
 
It's written there in plain, simple English.

Here I will quote the relevant bit.

bypassing the fixed functional hardware as needed

You explained how you think it works, yet you say my assumptions are baseless, even though I am basing mine on both the video explanation and the patent? Surely yours are just as baseless? When Calin Banc posted the video and said it had dedicated hardware, you agreed and said you were probably wrong. But now that I say it has dedicated hardware, you say I am wrong?

Because going off the patent, it's pretty clear there is hardware involved. And that's how I think it works. I guess we shall find out.

You're right and I was wrong. What it isn't is dedicated RT cores like Turing's. The RT functions are part of the existing shaders; they just had an instruction added to make that happen, and there is probably some sort of controller for this.
 
You're right and I was wrong. What it isn't is dedicated RT cores like Turing's. The RT functions are part of the existing shaders; they just had an instruction added to make that happen, and there is probably some sort of controller for this.

Sounds like a much cheaper and more efficient method to be honest.
 
Annoys me a bit, as I've been playing around with Quake 2 RTX a lot and seeing the potential, but it seems AMD are just making some optimisations to existing hardware to be able to run it a bit, rather than putting the effort into proper RT acceleration this round.

It probably would have required 2 GPUs and a lower resolution (1080p?) to have it working in a somewhat decent way. Too expensive for the console market, but it means you can scale it up easily on the PC. :)

There is no reason why the resolution must be 1:1 all the time, or at all, unless you want perfectly mirrored reflections, which in my view have looked completely ridiculous in these RTX games.

Depending on the game, it could be quite obvious that the resolution is lower. The problem is that AMD doesn't have a DLSS solution, so nVIDIA could do 1:1 at a lower res and scale it up while still looking better. :)

PS: "Perfect reflections" apparently are cheaper (as per Digital Foundry), so that could be one reason why some games use them.

PS 2: I miss the AMD drivers a lot. So much better and more user friendly (at least for my use case)! :p
 
Oh, that's coming. The internet is going to be flooded with people holding magnifying glasses to their screens screaming "see... Nvidia better", though I suspect they will be doing that with old RTX games at 42 FPS, because IMO Nvidia will be using the same reduced resolution in DXR that AMD will.
I don't understand it, really. It won't encourage anyone to buy $1000 GPUs.
The way I'm seeing it, on console alone it is simply fantastic. Heck, it's in Spiderman Miles Morales. That will be the first open-world game, at the time of this post, to offer it in a sandbox, open-world environment like that. That's pretty spectacular if you ask me, at console pricing.

It's a no-sale to get someone to ditch consoles for a GPU at twice the price due to magnified images showing slight differences in RT/IQ. Let's hope that next-gen GPUs are forced to compete in that space, and thus a lower price structure.
 
With DLSS 1.0 you couldn't see the difference while actually playing; it was only when standing still and looking, and in all those DLSS articles that had screenshots. Nvidia were slated to absolute buggery for it, and everyone was lolling over it, and still is.

There's even a big 'sick of the slating it's getting' thread on here, so I'm expecting exactly the same for AMD, if it turns out that RT on their cards does look worse than on Nvidia's when you're just standing about looking (and not actually playing), in the inevitable comparison articles with all the side-by-side screenshots. :p
 
With DLSS 1.0 you couldn't see the difference while actually playing; it was only when standing still and looking, and in all those DLSS articles that had screenshots. Nvidia were slated to absolute buggery for it, and everyone was lolling over it, and still is.

There's even a big 'sick of the slating it's getting' thread on here, so I'm expecting exactly the same for AMD, if it turns out that RT on their cards does look worse than on Nvidia's when you're just standing about looking (and not actually playing), in the inevitable comparison articles with all the side-by-side screenshots. :p

Moving the goalposts here.

There was a clear-as-day difference in motion; DLSS was a complete blur fest on release. Nvidia have definitely improved the technology, but it's still not perfect like some on here would have you believe.

AMD will be using DirectML; I'm sure Microsoft will push this for the next-gen Xbox to squeeze every bit of performance out of the console.
Sony will most likely stick with checkerboard rendering unless DirectML or something similar works on the PS5.
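Checkerboard rendering in that sense can be sketched very simply: shade half the pixels each frame in a checkerboard pattern and pull the other half from the previous output. A toy NumPy illustration with made-up names (real implementations reconstruct the missing pixels much more carefully, using motion vectors):

```python
import numpy as np

def checkerboard_frame(full_frame, prev_output, frame_index):
    # Shade only the checkerboard half that belongs to this frame;
    # fill the other half from the previous frame's output.
    h, w = full_frame.shape
    y, x = np.mgrid[0:h, 0:w]
    mask = (x + y) % 2 == frame_index % 2   # which half we shade now
    return np.where(mask, full_frame, prev_output)
```

Alternating `frame_index` means every pixel is freshly shaded every other frame, halving the shading cost per frame at the price of reconstruction artefacts in motion.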
 