
AMD Navi 23 ‘NVIDIA Killer’ GPU Rumored to Support Hardware Ray Tracing, Coming Next Year

Status
Not open for further replies.
@Grim5 if you look at his history of posts on this thread you'll see he's been harping on about Nvidia's supreme performance with DLSS and RTX for ages, even bringing up some random bar chart that was leaked on the net a few days ago and is more than likely fake.

If you read my post you would see I was referring to that exact bar graph. If he is going to use that as a stick to beat AMD with yet again, he is again making himself look foolish, as according to that graph, AMD's ray tracing performance on their first attempt is superior to Nvidia's lol...

I read your post. You mentioned 'Nvidia owners talking about dedicated hardware'. Grim didn't, that was me. And I'm a butthurt owner, that's for sure :)
 
I'm not sure why you keep mentioning the 3070? I was talking about the 3080/90, cards that have launched versus rumoured launch dates of ANY AMD cards.

I'm not interested in an argument so I'll just leave it there.

You need to read your own posts buddy. Case closed.
 
I read your post. You mentioned 'Nvidia owners talking about dedicated hardware'. Grim didn't, that was me. And I'm a butthurt owner, that's for sure.

Like I said, read back through the history of this part of the forum; Grim constantly harps on about superior Nvidia with their RTX and DLSS etc... my point is, if you're going to produce a random bar graph to laugh at someone, at least make sure the graph doesn't actually make you look like a fool in the process haha.

Why would you be butthurt owning Nvidia unless you just splashed out on the 3080, or even worse the 3090? When Turing launched it was the first card to utilise RTX; it broke new ground, and there's no shame in buying into it, even if it was never really quite up to the job.

I do applaud Nvidia for bringing forward new tech, it's just a shame they like to black-box it and not play well with others... if Nvidia took AMD's open source approach, a lot more people would have a lot more respect for them. This goes back years; G-Sync is another example, but that's another story.

I had a 1070 and it was a really decent card, but the 2xxx series personally offered me nothing I was interested in. I didn't feel like paying for features I felt were not really ready (RTX), and I still think ray tracing is too early in its infancy to really be used as a selling point for anything, including consoles. Maybe in 5 years' time we may be at a point where it's actually worth including; many won't agree with this, but it's my personal opinion.

I definitely would never go out of my way right now to buy games specifically for Ray Tracing, and definitely would not even consider hardware for it.
 
Currently Tensor cores are not used as part of the denoising process - while they could provide a significant speed boost, there are complications to implementing it. The most common implementation is Spatiotemporal Variance-Guided Filtering using compute shaders.
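For intuition only: real SVGF (Schied et al.) is an edge-aware à-trous wavelet filter with temporal accumulation, but the core "variance-guided" idea can be sketched as a toy single-pass filter where noisier regions get a wider blur. All names and thresholds below are mine, and the 1D signal stands in for an image:

```python
# Toy, single-pass "variance-guided" blur on a 1D signal.
# Real SVGF is an edge-aware a-trous wavelet filter with temporal
# accumulation; this sketch only captures the guiding idea:
# high-variance (noisy) regions are blurred wider, stable regions less.

def local_variance(signal, i, radius):
    lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
    window = signal[lo:hi]
    mean = sum(window) / len(window)
    return sum((x - mean) ** 2 for x in window) / len(window)

def variance_guided_blur(signal, base_radius=1, var_threshold=0.01):
    out = []
    for i in range(len(signal)):
        # widen the filter footprint where the local estimate is noisy
        var = local_variance(signal, i, base_radius)
        radius = base_radius * (3 if var > var_threshold else 1)
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        window = signal[lo:hi]
        out.append(sum(window) / len(window))
    return out

# Alternating 0/1 samples (noisy) followed by a clean 0.5 region.
noisy = [0.0, 1.0, 0.0, 1.0, 0.5, 0.5, 0.5, 0.5]
print(variance_guided_blur(noisy))
```

The noisy half collapses toward its mean while the already-converged half is left almost untouched, which is the behaviour a ray tracing denoiser wants.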
 
You will need to elaborate on that until I have the time to peacefully peruse the white papers... my guess is it's a local filter and doesn't involve any complex matrix maths.

There's some detail about it here: https://www.pcgamesn.com/microsoft/xbox-series-x-gpu-cpu-specs

Of course, that’s largely the inclusion of ray tracing acceleration hardware. RDNA 2 is built to support the latest DXR 1.1 standard, and will utilise onboard silicon to accelerate certain compute-heavy tasks in the ray tracing pipeline, much like Nvidia Turing. The RT Cores in Turing accelerate Bounding Volume Hierarchy (BVH) construction and ray/triangle intersection to offload from traditional GPU shader cores. AMD’s approach is expected to operate in much the same way.

But unlike Turing, RDNA 2 will make use of a novel inference technique in lieu of dedicated AI cores like Nvidia’s Tensor Cores – which are currently used for denoising tasks. As Digital Foundry notes, “rapid-packed math is back”, and RDNA 2 is built to focus on even lower precision integer operations in order to accelerate inference.

In keeping with Turing, however, mesh shaders are being opted for with RDNA 2. Nvidia implemented these to simplify geometry pipelines, turning them towards the compute programming model, and delivering more bang for your buck (in computational terms).

I am assuming there won't be any profound changes in RT between the console and PC RDNA 2 variants, but you never know.
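The ray/triangle intersection test the quoted article mentions is, in fixed-function hardware, the same test usually written in software as the Möller–Trumbore algorithm; a plain-Python reference sketch (the function name and example geometry are mine):

```python
# Moller-Trumbore ray/triangle intersection: the per-ray test that
# Turing's RT cores (and, reportedly, RDNA 2's intersection hardware)
# accelerate in fixed function. Pure-Python reference version.

def sub(a, b):   return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def cross(a, b): return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
def dot(a, b):   return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Return distance t along the ray to the hit, or None on a miss."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < eps:            # ray parallel to the triangle plane
        return None
    inv = 1.0 / det
    s = sub(origin, v0)
    u = dot(s, p) * inv           # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = dot(direction, q) * inv   # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv
    return t if t > eps else None

# Ray from the origin along +z hits a triangle in the plane z = 5.
hit = ray_triangle((0, 0, 0), (0, 0, 1), (-1, -1, 5), (1, -1, 5), (0, 1, 5))
print(hit)  # 5.0
```

An RT core runs this test (plus the BVH traversal that decides which triangles to test) millions of times per frame, which is why offloading it from the shader cores matters.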
 
But unlike Turing, RDNA 2 will make use of a novel inference technique in lieu of dedicated AI cores like Nvidia’s Tensor Cores – which are currently used for denoising tasks. As Digital Foundry notes, “rapid-packed math is back”, and RDNA 2 is built to focus on even lower precision integer operations in order to accelerate inference.

I will need a fundamental explanation...
BTW, there's no way I see filtering done at anything below FP32 precision... that article reads like a school teacher explaining stuff he doesn't understand.

Edit: Seems Rroff has better clarity... we will just have to go through the Spatiotemporal Variance-Guided Filtering white papers and see if it's worth the debate.
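For what "lower precision integer operations to accelerate inference" means concretely: inference workloads quantise float weights/activations to int8, multiply-accumulate in integer (the pattern rapid-packed math and DP4A-style instructions speed up), then rescale. A toy sketch, with made-up values and scale factors:

```python
# Toy int8 quantisation of a dot product - the arithmetic pattern that
# packed low-precision integer instructions accelerate for inference.
# Floats are scaled to int8, multiplied and accumulated in integer,
# then rescaled back to float. All values here are illustrative.

def quantize(xs, scale):
    """Map floats to clamped int8 values: round(x / scale)."""
    return [max(-128, min(127, round(x / scale))) for x in xs]

def int8_dot(a, b, scale_a, scale_b):
    qa, qb = quantize(a, scale_a), quantize(b, scale_b)
    acc = sum(x * y for x, y in zip(qa, qb))  # integer multiply-accumulate
    return acc * scale_a * scale_b            # rescale result to float

a = [0.12, -0.5, 0.33, 0.9]
b = [1.0, 0.25, -0.7, 0.4]
exact = sum(x * y for x, y in zip(a, b))
approx = int8_dot(a, b, scale_a=1/127, scale_b=1/127)
print(exact, approx)  # close, but not bit-identical
```

This is also why the FP32 point above is fair: int8 is a fit for neural-network inference, while a filter chain like SVGF typically keeps its accumulation at higher precision.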
 
I will need a fundamental explanation...
BTW, there's no way I see filtering done at anything below FP32 precision... that article reads like a school teacher explaining stuff he doesn't understand.

I don't believe any game implementation of ray tracing currently uses the Tensor cores for denoising. There was talk early on of additionally using them to accelerate some part of the BVH process, but I'm not sure of the details and it doesn't seem to have happened. I believe the problem is that it's difficult to load them up efficiently without impacting overall rendering performance if you don't balance things right, due to the way they are implemented in Turing, though it should be less of an issue on Ampere.
 
There is an ocean between using MCM or chiplets for CPUs and GPUs - very little actually transfers over beyond a very general sense.
True, but the concept of strapping smaller bits into a greater package is what AMD have done for 2 CPU generations now. Clearly at some point they decided the future of monolithic dies was quite short, so they went a different route. Just look at the monstrosities Nvidia have been banging out for years. Intel have apparently done the same, because they went MCM from the outset with Xe.

I never said AMD could make a chiplet GPU just because they make chiplet CPUs; I said why would AMD conceptually stay with monolithic GPUs when they're already thinking MCM for CPUs? Especially when their competitors are also conceptualising chiplet GPUs.
 
To be perfectly honest, if AMD got between 3070 and 3080 ray tracing performance on their first attempt I'd call that a win, and I'd be exceedingly worried if I was an Nvidia owner with all their "dedicated ray tracing hardware", especially given the same hardware AMD will be using will be in the PS5 and Xbox Series X, meaning game devs will be more inclined to offer RT that works across all hardware rather than, once again, an Nvidia black-box implementation of it...



If you want to drive something you have to look at adoption rate. If I'm a game dev making a game for Xbox Series X, PS5 and PC, and I want to use ray tracing, I'm not even going to consider RTX at first as it simply will not work on Xbox Series X and PS5; I'm going to go with their implementation, and potentially bolt on RTX features for the PC port, all the while those without RTX hardware are still going to get a decent implementation if they have RDNA 2 hardware on hand.

You are a little mixed up. Games aren't designed using RTX; games will either have ray tracing or they won't. If they have ray tracing and you have an Nvidia card, they will use Nvidia's RT solution. If the game has ray tracing and you have an AMD card, they will use AMD's solution. And if you have a GPU without any hardware RT solution, it will use the software fallback layer (at least in DirectX and Vulkan games), but this last one will be much slower.

Both AMD's and Nvidia's implementations will be "black box" in the sense that neither one will work on the other's hardware, just like Nvidia's drivers won't work on AMD GPUs and vice versa.
 
If you want to drive something you have to look at adoption rate...
All thanks to Nvidia for starting the real-time ray tracing revolution, because if they hadn't made that first step nobody would have.
All thanks to AMD for actually popularising it, because if they hadn't put it in consoles, nobody would give 2 ***** about "RTX".
 