AMD Navi 23 ‘NVIDIA Killer’ GPU Rumored to Support Hardware Ray Tracing, Coming Next Year

Haven’t Sony said their RT is hardware dedicated, or something along those lines? If they are in fact using ‘normal’ cores, that’s a bit of a porky, isn’t it? If the cores have been tweaked to make them significantly better at RT, but still with no dedicated RT cores, that would surely be, dependent on performance, the best of both worlds, right?

Edit: also, let’s not forget the Tensor cores that are needed to make those RT cores look good, and their die space.
 
Haven’t Sony said their RT is hardware dedicated, or something along those lines? If they are in fact using ‘normal’ cores, that’s a bit of a porky, isn’t it? If the cores have been tweaked to make them significantly better at RT, but still with no dedicated RT cores, that would surely be, dependent on performance, the best of both worlds, right?

Yeah, obviously I don't know, but my guess is "porkies". I just don't think those RDNA 2 GPUs have dedicated RT acceleration; they just have a lot of rasterization muscle...
 
What I said earlier...

For as long as Nvidia have dedicated ray tracing hardware in their GPUs they will be better at it, no getting away from that... however, that comes at a die space cost, a cost that is passed on to you.

I've said it before and I'll say it again: Nvidia's RTX is just ray tracing. There is nothing special about it and it's been done long before; it is overcooked to justify the dedicated hardware. However, when properly optimized, that dedicated hardware will always do it to a higher degree of performance than not having it.
 
@melmac a 5700 XT has 2560 cores as well, and no dedicated RT cores (according to a Google search). And it's slower than the 2070 Super in RT, as would be expected, because it has fewer cores. It's also slower in raster, as Nvidia's solution is just better overall (and more expensive).

But what we are saying here is: what if AMD's future GPU has the same number of cores IN TOTAL as Nvidia's equivalent GPU? Let's say they BOTH had a total of 3000 cores, but Nvidia's solution dedicates 80 of those to RT and AMD's doesn't. In that scenario, AMD might be faster in raster because they can dedicate all cores to raster, whereas Nvidia cannot.

If you don't normalise for core count, you are comparing apples to oranges. If all of Nvidia's future GPUs have more cores than AMD's, then yes, they would be expected to be faster.


But none of this really matters because what we care about is:
* performance RT on
* performance RT off
* power use
* price.

It doesn't matter how they achieve it.
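To put rough numbers on that thought experiment, here is a minimal back-of-the-envelope sketch. The 3000/80 split is just the hypothetical from the post above, and it assumes raster throughput scales linearly with the shader cores left over for raster work, which is a big simplification:

```cpp
#include <cstdio>

int main() {
    // Hypothetical budget from the post above: both designs spend 3000 "cores",
    // but one reserves 80 of them as dedicated RT units.
    const double total_cores    = 3000.0;
    const double rt_dedicated   = 80.0;
    const double raster_only    = total_cores;                // every core does raster
    const double raster_with_rt = total_cores - rt_dedicated; // the rest do raster

    // Crude assumption: raster throughput scales linearly with shader cores.
    const double advantage = raster_only / raster_with_rt;
    std::printf("Raster-only design:  %.0f shader cores\n", raster_only);
    std::printf("RT-dedicated design: %.0f shader cores + %.0f RT units\n",
                raster_with_rt, rt_dedicated);
    std::printf("Raster advantage of the raster-only design: ~%.1f%%\n",
                (advantage - 1.0) * 100.0);
    return 0;
}
```

On these made-up numbers the all-raster design only gains a few percent of raster throughput while the other keeps dedicated RT units; the real trade-off depends entirely on how much die area an RT unit actually costs.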

Well, for a start, the core count between AMD and Nvidia can be the same but that doesn't mean the performance will be the same. Vega 64 had 4096 cores but the 1080 Ti had only 3584.

Having more cores does not mean it will be faster in raster performance. There are other factors involved.
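To put rough numbers on the Vega 64 vs 1080 Ti example: peak FP32 throughput is 2 FLOPs per core per clock times the clock speed, and by that naive measure Vega 64 is actually ahead, even though the 1080 Ti was generally the faster gaming card. A minimal sketch (the boost clocks are approximate reference-card figures):

```cpp
#include <cstdio>

// Peak FP32 in TFLOPS = cores * 2 FLOPs per clock (one FMA) * clock rate in GHz / 1000.
// Boost clocks below are approximate reference-card figures.
static double peak_tflops(int cores, double boost_ghz) {
    return cores * 2.0 * boost_ghz / 1000.0;
}

int main() {
    std::printf("Vega 64     (4096 cores @ ~1.55 GHz): %.1f TFLOPS FP32\n",
                peak_tflops(4096, 1.55));
    std::printf("GTX 1080 Ti (3584 cores @ ~1.58 GHz): %.1f TFLOPS FP32\n",
                peak_tflops(3584, 1.58));
    // Vega 64 wins on paper yet generally lost in games: architecture,
    // scheduling, memory bandwidth and drivers all matter, not just core count.
    return 0;
}
```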

I was only using the example of 100 cores to illustrate a point.

As for your edit: this is an enthusiast forum, people like to debate the pros and cons of any GPU hardware and software, and try to reason why one solution might be better than another, etc.
 
Yeah, obviously I don't know, but my guess is "porkies". I just don't think those RDNA 2 GPUs have dedicated RT acceleration; they just have a lot of rasterization muscle...
Which is what most people seem to want anyway, so if they can keep up with or beat the 3080 Ti/3090 in rasterizing using this "muscle" then they will have done well, assuming they price it competitively, that is.
 
Well, for a start, the core count between AMD and Nvidia can be the same but that doesn't mean the performance will be the same. Vega 64 had 4096 cores but the 1080 Ti had only 3584.

Having more cores does not mean it will be faster in raster performance. There are other factors involved.

I was only using the example of 100 cores to illustrate a point.

As for your edit: this is an enthusiast forum, people like to debate the pros and cons of any GPU hardware and software, and try to reason why one solution might be better than another, etc.

Vega is RTG's "Bulldozer" moment.

2560 cores vs 4096.

By the same token.... 2560 cores vs 3584 ;)

 
Which is what most people seem to want anyway, so if they can keep up with or beat the 3080 Ti/3090 in rasterizing using this "muscle" then they will have done well, assuming they price it competitively, that is.

Yeah...

Nvidia is the first company to support that API, it's the rest of the industry that is following them.

What API? DX12 Ultimate? Not even out yet.
 
Looking forward to benchies comparing RDNA2 and Ampere using say Cyberpunk 2077 :D

I am going Nvidia this time, I get the feeling they will have the better RT performance. Just hope they do not go silly with the pricing.
 
Looking forward to benchies comparing RDNA2 and Ampere using say Cyberpunk 2077 :D

I am going Nvidia this time, I get the feeling they will have the better RT performance. Just hope they do not go silly with the pricing.

IMO they will have the better RT performance... and they probably will go silly with the pricing: "we have a bigger number on the slide!!! So ££££££££££££"
 
Haven’t Sony said their RT is hardware dedicated, or something along those lines? If they are in fact using ‘normal’ cores, that’s a bit of a porky, isn’t it? If the cores have been tweaked to make them significantly better at RT, but still with no dedicated RT cores, that would surely be, dependent on performance, the best of both worlds, right?

Yeah, obviously I don't know, but my guess is "porkies". I just don't think those RDNA 2 GPUs have dedicated RT acceleration; they just have a lot of rasterization muscle...

Sony have developed some kind of hardware power monitor for the PS5 to maintain the high clock speeds, so they could have added something on the ray tracing side, but I am not sure what it could be.
 
Sony have developed some kind of hardware power monitor for the PS5 to maintain the high clock speeds, so they could have added something on the ray tracing side, but I am not sure what it could be.

I suppose they could have a separate chip for it, I just doubt the RDNA 2 GPU has anything like that on die; it's not what AMD do...
 
DXR. DXR 1.1 is just an update to that, it's not a whole new API.

Ray tracing is not new, it's decades old. DX12 Ultimate simply adds to a unifying API. DirectX is a feature set; it's a layer that connects features to a GPU in an agnostic way.

I was using it long before RTX was a thing.
 
What Nvidia did was bring it to the fore. Great, good job, take nothing away from them, but it's not something they invented.
 
The best result is that both vendors have a powerful efficient solution and it gets used in bunches of games. The quicker we get to that point the better IMO.
 
Sony have developed some kind of hardware power monitor for the PS5 to maintain the high clock speeds, so they could have added something on the ray tracing side, but I am not sure what it could be.

This is what most people are forgetting. What have AMD learned from working on the consoles that they can now bring to PC? I will be very surprised if the answer is nothing.
 
...what if AMD's future GPU has the same number of cores IN TOTAL as Nvidia's equivalent GPU? Let's say they BOTH had a total of 3000 cores, but Nvidia's solution dedicates 80 of those to RT and AMD's doesn't. In that scenario, AMD might be faster in raster because they can dedicate all cores to raster, whereas Nvidia cannot...


Radeon 5700, Navi 10: 251 mm²
RTX 2070: 445 mm²
Well, according to this, size does matter, as both have the same ALU count: 2304.

I'm looking at scaling/performance forecasting though.
Gist: Nvidia started big while Navi is smaller. As AMD increase the die size of Navi, the potential is there (based on uarch advances) to eclipse Nvidia at a smaller/similar amount of real estate (i.e. how many GPUs per wafer, performance, etc.).

I'm interested in seeing where it stands against whatever Ampere is (rumored: 700 mm²). Will it win if that's true? Who knows... But if it comes close or ties, it will do so 195 mm² smaller, and price/cost would be the advantage of having a smaller die in that regard.
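To make the "GPUs per wafer" angle concrete, here's a rough sketch using the classic gross-die-per-wafer approximation on a standard 300 mm wafer. The 251 mm² and 445 mm² figures are the Navi 10 and RTX 2070 sizes quoted above, the 700 mm² one is just the Ampere rumor, and the formula ignores yield, scribe lines and edge exclusion:

```cpp
#include <cmath>
#include <cstdio>

// Classic gross-dies-per-wafer approximation (ignores yield, scribe lines
// and edge exclusion): pi*(d/2)^2 / A  -  pi*d / sqrt(2*A)
static double dies_per_wafer(double wafer_diameter_mm, double die_area_mm2) {
    const double pi = 3.14159265358979;
    return pi * (wafer_diameter_mm / 2) * (wafer_diameter_mm / 2) / die_area_mm2
         - pi * wafer_diameter_mm / std::sqrt(2.0 * die_area_mm2);
}

int main() {
    const double wafer = 300.0; // standard 300 mm wafer
    std::printf("Navi 10 (251 mm^2):          ~%.0f dies per wafer\n", dies_per_wafer(wafer, 251.0));
    std::printf("TU106 / RTX 2070 (445 mm^2): ~%.0f dies per wafer\n", dies_per_wafer(wafer, 445.0));
    std::printf("Rumored Ampere (700 mm^2):   ~%.0f dies per wafer\n", dies_per_wafer(wafer, 700.0));
    return 0;
}
```

Roughly twice as many Navi 10 dies fit on a wafer as TU106 dies, which is where the cost flexibility in the pricing argument below comes from.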

AMD can simply afford to charge a lower price and still profit, while Nvidia would throw in another higher-tiered SKU as a replacement for whatever AMD beat, in order to compete on price, which is only killing their own strategy and profit. E.g. they could have sold both a 2070 and a 2070S at a much higher price; however, due to the 5700 series the 2070S is now at the same price as the 2070 (just going on Nvidia's pricing, not AIBs). Ouch!!!!

However, again, manufacturing would be AMD's advantage, something I mentioned earlier. Mindshare is no longer an issue with game developers or gamers, due to consoles and Ryzen.

Some see it that AMD would need a die just as big (with a better uarch) in order to compete. However, based on current trends and uarch advancements, if that were the case it's likely AMD's performance would far exceed them. Be that as it may, things are about to get very interesting.

The PS5 gaming reveal showcased many games using "ray tracing", none of which required the same brute-force method Nvidia wants you to believe is "how it's done". It can be done in different ways, because a game will ALWAYS be rasterized with some elements of ray tracing in it, which is why it's no surprise that you see it on consoles. But remember... consoles are using lower-end Navi GPUs compared to what AMD will release by the fourth quarter.

You don't need dedicated hardware to do RT, just a smart way of implementing RT in the game.
 
Ray tracing is not new, it's decades old. DX12 Ultimate simply adds to a unifying API. DirectX is a feature set; it's a layer that connects features to a GPU in an agnostic way.

You seem to have a problem admitting that Nvidia are the first to support that. Turing supports DX12 Ultimate.

You said this:

AMD are going with Microsoft's API, like the rest of the industry will, including Nvidia.

Nvidia aren't following in this, everyone else is following them.

Frankly, I don't think that's what they are doing. There is no "Dedicated Ray Tracing Hardware"; they are simply using Microsoft's RT API, which doesn't require dedicated ray tracing hardware.


The Microsoft API which doesn't require any dedicated hardware, apart from having support for DX12, is the software fallback layer. It wasn't in DXR 1.0; that's why Nvidia had to enable ray tracing on its Pascal cards. With DX12 Ultimate it's a pure software solution though, with no hardware acceleration.

AMD and Nvidia will still have to write software/drivers if they want to enable hardware-accelerated ray tracing on their GPUs.
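To ground the API side of this: which path a card gets is something an application can actually query from D3D12, since the driver reports the DXR tier it exposes. A minimal sketch using the standard D3D12 headers (Windows only, error handling mostly omitted):

```cpp
#include <cstdio>
#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    // Create a device on the default adapter.
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0,
                                 IID_PPV_ARGS(&device)))) {
        std::printf("No D3D12 device available.\n");
        return 1;
    }

    // OPTIONS5 carries the raytracing tier the driver exposes for this GPU.
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                &opts5, sizeof(opts5));

    switch (opts5.RaytracingTier) {
    case D3D12_RAYTRACING_TIER_NOT_SUPPORTED:
        std::printf("No driver-exposed DXR support on this GPU.\n");
        break;
    case D3D12_RAYTRACING_TIER_1_0:
        std::printf("DXR 1.0 supported.\n");
        break;
    default:
        std::printf("DXR 1.1 (or newer) supported.\n");
        break;
    }
    return 0;
}
```

If the tier comes back as not supported, a title can still fall back to compute-based or screen-space techniques, which is the "no dedicated hardware required" route being discussed above.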
 