
AMD Navi 23 ‘NVIDIA Killer’ GPU Rumored to Support Hardware Ray Tracing, Coming Next Year

As for the ray tracing stuff... I didn't see the value last gen at all, same with DLSS. It just isn't in enough games yet. I'm not convinced by the incoming generation either, as I just don't feel the fidelity on offer is there yet... it's not a must-have for me, especially considering the hit to frame rate. The new consoles will force it into more games, but as for how good it will look? I need more convincing.

Perfect mature rationale, from this guy.

If you want to join the rush to adopt it, fine; you don't have to preach to everyone that they need it - that's right out of Jensen's book, playing into his hands.
 
Always moving goalposts.

So multiple people proving what you're saying is hogwash is now "moving goalposts"? Jog on, pal. Post again when you have something constructive to say.

:rolleyes: Especially the content of the posts; it's basically trolling or trying to stir up ****. Yet if TheFury or I post like this, the mods parp over like the flying doctors lol.

Typical childish response when faced with cold hard facts.

You're a troll, and not a very good one at that.

Quit trying to stir up controversy in this AMD thread; it's more than obvious that's what you're doing, and equally transparent that you're a blatant Nvidia fanboy.

What's that @chris85oc - the 2060? OK, check this guy's post with a 2080 Ti:
I am using a 2080 Ti SLI setup and I don't like the performance drop with RTX on... I won't be enabling RTX on Ampere or RDNA2 because it's going to be the same story... I won't be using RTX until the performance drop is less than 2%... which might take 3 or more generations of upgrades (I doubt this, the problem seems intractable)... RTX is a waste of computational power which could have been used elsewhere. I am upgrading my knowledge about RTX in the meanwhile... but it's looking like nothing more than a grunt approach to IQ.
 

Isn't it usually the 3rd iteration of a new technique where it starts working flawlessly, or at least close to it? RTX 2000 introduces ray tracing and gets games to start using it; RTX 3000 improves performance a bit but is still waiting on games to incorporate it efficiently; then RTX 4000 is where games use it properly and the hardware runs it a lot better. So it's next year/2022 when we'll get ray tracing with little performance loss. It'll be interesting to see what AMD do: RDNA3 is meant to be MCM, splitting parts like Ryzen, so we'll probably have a die for the shader units, a die for the I/O, and possibly a die for ray tracing if they want to push it hard.
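For what it's worth, the usual argument for a Ryzen-style split is yield: several small dies survive defects far better than one big one, and the I/O die can sit on a cheaper, more mature node. Here's a toy sketch using the standard Poisson yield model; the defect density and die sizes are numbers I made up for illustration, not anything from a leak:

[CODE]
# Toy yield comparison: one big die vs a hypothetical RDNA3-style split.
# yield = exp(-defect_density * area) is the simple Poisson yield model.
from math import exp

D = 0.001                  # defects per mm^2 (assumed)
MONOLITHIC = 500           # one big die, mm^2 (assumed)
CHIPLETS = {"shader die": 250, "I/O die": 200, "RT die?": 50}  # hypothetical

def die_yield(area_mm2):
    return exp(-D * area_mm2)   # chance a die of this size has zero defects

print(f"monolithic {MONOLITHIC} mm2: {die_yield(MONOLITHIC):.0%} good")
for name, area in CHIPLETS.items():
    print(f"{name:10} {area:3} mm2: {die_yield(area):.0%} good")
[/CODE]

Each small die is binned on its own, so one defect scraps at most a 250 mm² chiplet instead of the whole 500 mm² part.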
 

Yeah, so guys that attack people for referring to current-gen cards like the 5700 XT seem to be missing the mark. Even the 30 series is not quite there, and it's only just out. You can't bash people for not committing to it; it's not ready, and nowhere near enough games have it for it to be a thing. I won't be upgrading to this gen for ray tracing at all. It will be so I can play at 4K, or drop to 1440p high smooth settings, until cards can actually utilise the technology properly.
 
There actually were a few rumours from people that some of the driver issues were caused by defects in some of the hardware, which is why it has taken so long to fix some of the problems.
Unfounded rumors. Which is why I asked for the source, since he alleged AMD themselves said that was the issue.

As of now there isn't any.
 
I'm reluctant to make a thing of this, because people in here have a habit of holding it against you when you get your speculations wrong, but count the CUs in this slide. Ignore the blue tab on the side and actually count the CUs on the diagram: 20 in each array, 8 arrays. That's 160.

[image: the slide, showing the CU arrays]

:D

I think I have ignored the CPU instruction dispatch pipeline... couldn't see anything of that sort delineated in the Xbox die shot... probably that needs to be sized as well. What would be a good approximation?

But there is also the possibility that 536 mm² is pure BS... or the RGT dude is on the money with his Infinity Cache leaks... just too many factors. I should listen to more peaceful music.
 
I think it's safe to say the only thing we know is that we don't know anything haha

Until I see official specs and benchmarks I'm erring on the side of caution... While I think the idea of being able to use some CUs for ray tracing is possible, I'm not going to believe it unless I see more leaks reporting it :)
 

No idea how much space the CPU instruction dispatch pipeline would take up.

Looks like the GPU portion with the 384-bit IMC takes up about 70% of the Xbox Series X die, which as a rough approximation gives 252 mm² for the GPU including the IMC. It has 28 dual CUs on die, so 56 CUs. If we double that, it's 504 mm² with 112 CUs, but it wouldn't need a 768-bit IMC. How much space does the IMC take up? About a quarter of the shaders? That's 14 CUs' worth; add those to the 112 and you get 126 CUs. If Sienna Cichlid is 536 mm², we have 32 mm² of spare die space. Is that 34 CUs' worth? Probably not.

Sienna Cichlid could be on a denser node. There is certainly a denser TSMC 7nm node than the one the Xbox Series X is on, which has the same transistor density as Navi 10; that is an "enhanced 7nm" node compared with the original 7nm node Zen 2 is on, but it's not the densest 7nm node.

Something is up with the 80 CU rumour, because we now know it, like the Xbox Series X, has dual-CU workgroups, and assuming it's on the same older 7nm node as the Xbox Series X and Navi 10, an 80 CU Sienna Cichlid would only be around <350 mm² in size.

It could be smaller; it could be <350 mm² with only 80 CUs, but if so, that's still a small GPU on an older node, and AMD aren't trying.
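To make that back-of-envelope arithmetic easier to follow, here it is as a quick Python sketch. The 70% GPU share, the quarter-of-the-shaders IMC guess and the 536 mm² figure are all the post's own guesses, not confirmed specs:

[CODE]
# All inputs are guesses/rumours from the post above, not confirmed specs.
xsx_die = 360.0        # Xbox Series X die, ~360 mm^2
gpu_share = 0.70       # guess: GPU + 384-bit IMC share of the die
xsx_cus = 56           # 28 dual CUs physically on the XSX die

gpu_area = xsx_die * gpu_share        # ~252 mm^2 for 56 CUs + IMC
doubled = 2 * gpu_area                # ~504 mm^2: 112 CUs + a redundant "768-bit" IMC
possible_cus = 112 + 14               # reclaim the IMC area (~14 CUs' worth) -> ~126 CUs

rumoured_die = 536.0                  # rumoured Sienna Cichlid size
spare = rumoured_die - doubled        # ~32 mm^2 left over
per_cu = (gpu_area / 1.25) / xsx_cus  # ~3.6 mm^2 per CU once the IMC quarter is removed
print(f"~{possible_cus} CUs, {spare:.0f} mm2 spare buys ~{spare / per_cu:.0f} more")
[/CODE]

That lands at roughly 126 CUs plus about 9 more from the spare area, nowhere near the extra 34 a 160-CU part would need, which is the "probably not".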
 
While I think the idea of being able to use some CUs for ray tracing is possible

I have read the hybrid ray tracing patent... and am somewhat confident that the leak isn't making sense. AMD will rely on fixed-function ray-triangle intersection hardware, and the shaders are just supposed to control the traversal. Looking at the schematics that have been leaking, it seems the dedicated hardware will be either inside the CU or the cluster; it's a bit nebulous.
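To picture that split, here's a toy sketch of "shaders drive the traversal, fixed-function hardware does the intersection tests". The names, data layout and everything else are mine, purely to illustrate the idea, not anything taken from the patent:

[CODE]
# The "shader" is the loop below: it owns the traversal stack and decides
# what to visit next. ff_ray_box() stands in for the fixed-function tester.

def ff_ray_box(origin, inv_dir, lo, hi):
    """Stand-in for the fixed-function ray/box unit (slab test)."""
    tmin, tmax = 0.0, float("inf")
    for o, d, l, h in zip(origin, inv_dir, lo, hi):
        t0, t1 = (l - o) * d, (h - o) * d
        tmin, tmax = max(tmin, min(t0, t1)), min(tmax, max(t0, t1))
    return tmin <= tmax

def shader_traverse(origin, direction, root):
    """The part left to shader code: a plain stack walk that calls the
    fixed-function tester for every node it touches."""
    inv = tuple(1.0 / d if d else float("inf") for d in direction)
    stack, hits = [root], []
    while stack:
        node = stack.pop()
        if not ff_ray_box(origin, inv, node["lo"], node["hi"]):
            continue
        if "children" in node:
            stack.extend(node["children"])  # shader controls traversal order
        else:
            hits.append(node["leaf"])       # leaf: would go to the ray/tri tester
    return hits

# Tiny two-leaf BVH, just to show it running:
bvh = {"lo": (0, 0, 0), "hi": (2, 2, 2), "children": [
    {"lo": (0, 0, 0), "hi": (1, 1, 1), "leaf": "tri_a"},
    {"lo": (1, 1, 1), "hi": (2, 2, 2), "leaf": "tri_b"},
]}
print(shader_traverse((-1, 0.5, 0.5), (1, 0, 0), bvh))  # -> ['tri_a']
[/CODE]

Whether the tester sits inside the CU or at cluster level doesn't change this picture, just the latency of the round trip.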
 

Yeah, plus why would they leak hardware info a month before the announcement? That tweet is just wrong.
 

It's obvious if you look at how the Xbox Series X or PS5 does it. Take the Series X as an example: it has dual-CU workgroups, dual purpose, where each individual CU in the pair can do shading or ray tracing. In a game without ray tracing, all of its 52 active CUs would be given over to shading; in a game with ray tracing, a portion of those 52 would be given over to ray tracing.

With that you can have up to 52 ray tracing cores. When Sony quoted a ray tracing throughput number it was MUCH higher (multiples higher) than the 2080 Ti's, but this was rightfully dismissed as bluster, because you would obviously never use your entire shader array for ray tracing.

But what it means is that you can use as many or as few shaders for ray tracing or shading as you like, with the potential for huge throughput either way.
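As a toy model of that flexibility; the example splits below are arbitrary, not real figures:

[CODE]
# Each of the Series X's 52 active CUs can be handed to shading or ray
# tracing per workload. Toy model; the example splits are made up.
TOTAL_CUS = 52

def split(rt_cus):
    assert 0 <= rt_cus <= TOTAL_CUS
    return {"rt_cus": rt_cus, "shading_cus": TOTAL_CUS - rt_cus}

print(split(0))   # pure raster game: every CU shades
print(split(12))  # hybrid: a slice traces, the rest keeps shading
print(split(52))  # the headline "peak RT" case: nothing left to shade
[/CODE]

The quoted throughput number is effectively split(52): technically possible, but a GPU that traces everything and shades nothing, hence the bluster.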
 
Unfounded rumors. Which is why I asked for the source, since he alleged AMD themselves said that was the issue.

As of now there isn't any.
AMD will never admit to there being hardware bugs in Navi 1 that contributed to the driver problems. I've tried to find a source, but no, it won't happen. It's going to remain a rumour, or maybe even escalate to an open secret, but it will never be confirmed.
 
LLLLLadies and gentlemen, my name's Paul, and in this redgamingtech.com video......


:D
Is it me, or has he slightly changed the way he speaks? Slightly higher pitch and clearer? Could be the effect of not seeing his face on screen though :p
 