*** The AMD RDNA 4 Rumour Mill ***

Why do you say Navi 48 would top out at 7800 XT levels?

Apparently AMD are only focusing on the low and mid tiers for the 8000 series, and supposedly performance will be around 7800 XT levels at the top end of the stack, but a fair bit cheaper, with lower power consumption etc... which isn't a bad thing IMO.

It'll basically be the 5700 XT all over again in terms of performance, with the proper big-boy big Navi only coming the generation after.
 
Mid tier for the next generation, the thinking is:

7900 XT > 8800 XT
7800 XT > 8700 XT
Pretty underwhelming really! Wasn't the 6700 XT 25% faster than the 5700 XT? By that logic the 8800 XT should match the 7900 XTX and the 8700 XT should match the 7900 GRE, BUT these companies are loving it right now and will do as they please for profit.
 

Depends on price.

At 1440p the 7900 XT is 28% faster than the 7800 XT and 17% faster than the 7900 GRE.

If it lands at £500 with 2X the RT throughput I think it will be a hit.
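Purely as an illustration of how those two percentages chain together (my own quick sketch, treating the 7800 XT as the 100% baseline; the figures are the ones quoted above, not independent benchmarks):

```python
# How the quoted 1440p percentages relate, with the 7800 XT as baseline.
xt_7800 = 1.00
xt_7900 = xt_7800 * 1.28   # "7900 XT is 28% faster than the 7800 XT"
gre_7900 = xt_7900 / 1.17  # "...and 17% faster than the 7900 GRE"

print(f"7900 GRE vs 7800 XT: {gre_7900 / xt_7800:.2f}x")
# ~1.09x, i.e. on these numbers the GRE sits roughly 9% above the 7800 XT,
# with the 7900 XT about 28% above the 7800 XT.
```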

 
Just a theory, but I'm guessing AMD are using RDNA 4 as a testing bed for their new RT features/tech. If they prove successful then they will do a proper successor to the 7900 XTX in the form of a full-fat 9900 XTX monster with RDNA 5.
Aren't they supposed to be considering renaming their next architecture so that it isn't even called RDNA (because of how different it will be)?

If that's true, I'd be surprised if they used RDNA 4 for much testing, more likely just to buy time while they develop the replacement for RDNA.
 
Does anyone with a grasp of the tech have any idea how much these changes will help in bringing RT closer to Nvidia? I am glad I bought a 7900 GRE and will skip this release, but I will be purchasing a PS5 Pro, so it will be interesting to see this tech in action. I guess this bodes quite well for RDNA 5 as well?
 
AFAIK, the main problem with RDNA is that RT is still largely done on the main shader hardware, which is why performance tanks when RT is enabled, whereas Nvidia and Intel offload more of it to dedicated units.

If AMD can accelerate more of the ray tracing in hardware and/or improve the overall efficiency of doing it, then it will bring them closer to the competition without making major design changes.

That said, I have no idea what any of those features actually mean. I would be surprised if RT performance changed that much though, especially since we were told it would with RDNA 3.
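Purely as an illustration of that "offload" point, here is a deliberately crude toy cost model (all the numbers and the overlap assumption are made up; real GPUs schedule work in far more complicated ways):

```python
# Toy model: RT done on the shared shader cores adds to the frame time,
# while RT offloaded to dedicated units can largely overlap the shading work.

def frame_time(shader_ms, rt_ms, rt_on_dedicated_units):
    if rt_on_dedicated_units:
        # Assume near-perfect overlap: the frame is bound by the slower job.
        return max(shader_ms, rt_ms)
    # Shared cores: the RT cost is mostly additive.
    return shader_ms + rt_ms

shader_ms = 8.0
for rt_ms in (2.0, 4.0, 8.0):
    shared = frame_time(shader_ms, rt_ms, rt_on_dedicated_units=False)
    dedicated = frame_time(shader_ms, rt_ms, rt_on_dedicated_units=True)
    print(f"RT work {rt_ms:4.1f} ms -> shared: {shared:5.1f} ms, dedicated: {dedicated:5.1f} ms")

# The gap between the two columns widens as the RT workload grows, which is
# the "performance tanks when RT is enabled" effect described above.
```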
 
Tbf there's nothing 'wrong' with AMD's RT tech as long as it's programmed for agnostically. RTX is what kills it: purely optimised for Nvidia, with a lot of black-box **** going on that hinders AMD more than it helps (this is Nvidia we're talking about, remember; they've done some scummy tricks in the past and they haven't really changed!).
 

You mean gimped with low resolution, fewer bounces, less RT being cast, etc. :p

Nvidia RTX-sponsored games actually run pretty well on RDNA 3; they perform where they should, i.e. matching Ampere more or less. The only ones which run really badly are the RTX Remix stuff.

I suspect AMD will match Ada next time, but then Nvidia will be even further ahead; after all, they have got a head start of years on this and have done a lot of the heavy lifting.

Of course, when hardware RT isn't used and it's just software-based, for example Lumen in UE5, then things become more equal, but that's because Nvidia GPUs aren't getting fully utilised.
 
Agreed here, I like Lumen for this reason and am hoping AMD just make stronger use of it, rather than Nvidia taking their eye off the ball for all that lovely AI lolly $$$! ;)
 
If the rumours hold true it's going to be quite interesting.

Sony said "Ray Tracing Performance 2X to 4X faster"; that's where the 3X comes from, but some of that will also be down to the core count difference, so drop it down to 2X.

OK, so RT performance is like bandwidth: the more of it you have, the less FPS you lose by running RT.

Cyberpunk.
7900 XTX 125 FPS
4080 119 FPS

Cyberpunk with RT.
7900 XTX 40 FPS (-68%)
4080 59 FPS (-50%)

The 4080 has 36% more bandwidth.

If we take a doubling of bandwidth as read, then an RDNA 4 GPU would only lose 34%.

Cyberpunk with RT.
RDNA 4 82 FPS (-34%)
4080 59 FPS (-50%)

RDNA 4 has 47% more bandwidth.

I almost didn't want to write this as it seems too incredible. I have no idea how this would work; it just makes sense to me. It would be fun if this is how it works out, but, well, let's see...
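To make the arithmetic in that post explicit (a quick sketch; "bandwidth" here is just the post's stand-in for RT throughput, not memory bandwidth, and the FPS figures are the ones quoted above):

```python
# Cyberpunk figures from the post: raster FPS vs FPS with RT enabled.
xtx_raster, xtx_rt = 125, 40        # 7900 XTX
r4080_raster, r4080_rt = 119, 59    # RTX 4080

xtx_loss = 1 - xtx_rt / xtx_raster          # ~0.68 -> "-68%"
r4080_loss = 1 - r4080_rt / r4080_raster    # ~0.50 -> "-50%"

# "The 4080 has 36% more bandwidth": the ratio of the two RT hits.
print(f"4080 vs XTX: {xtx_loss / r4080_loss:.2f}x")   # ~1.35; the post rounds 68/50 to 36%

# "A doubling of bandwidth" -> the XTX's 68% hit halves to ~34%.
rdna4_loss = xtx_loss / 2
rdna4_rt = xtx_raster * (1 - rdna4_loss)
print(f"Hypothetical RDNA 4: {rdna4_rt:.1f} FPS ({rdna4_loss:.0%} loss)")  # ~82 FPS, -34%, as in the post

# "RDNA 4 has 47% more bandwidth" (than the 4080, on the same metric).
print(f"RDNA 4 vs 4080: {r4080_loss / rdna4_loss:.2f}x")   # ~1.48; the post rounds 50/34 to 47%
```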

 
It will be interesting to see how things go on the RT front, more so than ever before; there are now so many factors at play to consider, and it's not just about performance but also the IQ now. Right now, not only do Nvidia have the lead for RT performance, they are also considerably ahead in image quality by providing things like ray reconstruction (which is also getting applied/used in UE5 Lumen now). Then we have things like what we have seen with Avatar, where both hardware and software RT methods are present, so if hardware RT isn't detected and/or when using lower settings, it falls back to software-based methods of RT.
 
Sorry, but you have been reading too many posts from the AMD defence force here. While there is nothing wrong as such with AMD's solution, and it's more elegant and makes better use of die space, it just doesn't have the performance. GPUs just aren't powerful enough yet for a hybrid solution. Nvidia, by contrast, compromises die space and some raster performance to have dedicated cores that do nothing but RT. And before someone comes in and shouts at me saying that AMD does have dedicated RT hardware: they do and they don't. The Ray Tracing Accelerators only handle some of the ray tracing calculations (see the rough sketch after this post). In the future, as GPUs get more powerful, I can see both AMD and Nvidia moving back to a hybrid design.

Which basically boils down to this: the more actual ray tracing/path tracing a frame uses, the worse AMD's performance will be compared to Nvidia's. It's as simple as that. Don't let all the fanboy noise here confuse you. The only way for AMD to close that gap with its current ray tracing method is to massively increase the number of CUs and cache sizes while significantly reducing both cache and memory latency.

Tbf there's nothing 'wrong' with AMD's RT tech as long as it's programmed for agnostically. RTX is what kills it,

To be honest, I am very surprised to see you make a statement like this. The games that AMD seem to do well in with ray tracing are games that have very limited ray tracing. And I don't understand your RTX comment. I thought you were more clued in than that.

Please tell me that you aren't really from that moronic school of thought that thinks ray tracing/path tracing has been overdone in some games?
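As a very rough, runnable sketch of the "they do and they don't" point: on RDNA 2/3 the Ray Accelerators handle the ray/box and ray/triangle intersection tests in fixed-function hardware, but the BVH traversal loop itself still runs as shader code on the CUs. The classes and functions below are toy stand-ins of my own, not a real driver or shader API:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    is_leaf: bool = False
    children: list = field(default_factory=list)   # inner node: child boxes
    triangles: list = field(default_factory=list)  # leaf node: triangle ids

def hw_ray_box_test(ray, children):
    # Stand-in for the fixed-function ray/box tests in the Ray Accelerator.
    return list(children)

def hw_ray_triangle_test(ray, triangles):
    # Stand-in for the fixed-function ray/triangle test.
    return triangles[0] if triangles else None

def trace_ray_rdna_style(ray, root):
    stack, hit = [root], None
    while stack:                      # the traversal loop: ordinary shader code
        node = stack.pop()
        if node.is_leaf:
            hit = hit or hw_ray_triangle_test(ray, node.triangles)
        else:
            stack.extend(hw_ray_box_test(ray, node.children))
    return hit

leaf = Node(is_leaf=True, triangles=["tri_42"])
print(trace_ray_rdna_style(ray=None, root=Node(children=[leaf])))  # -> tri_42

# On Nvidia (and Intel Arc) the whole traversal loop above is also handled by
# the dedicated RT unit, which is where the extra RT throughput comes from.
```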
 
I don't know where this reasoning that Nvidia has "dedicated RT cores" and AMD doesn't comes from; it is not correct, and it is not how this works. They both have dedicated RT hardware; there is nothing emulated about AMD's RT, it is physical.

The difference is in how Nvidia and AMD build the BVH. This is really over-simplified, because I'm not going to write a wall of text to explain it: AMD construct the BVH over many branches, Nvidia do it over a very wide tree. This is a bit like 8 slow cores vs 4 fast cores; both can be equally fast, but not by the same method.

The advantage of the wide approach is that it doesn't really matter which BVH construction you code for, you will always get the most out of being wide; the disadvantage is that wide requires more caching, which is why Ada has so much L2 cache. The advantage of the branch approach is that you don't need so much cache, but unless you specifically code for it, it's going to be slower (rough numbers sketched after this post).

Now I'm sure AMD's thinking was: keep the die size down, it doesn't matter because we own the consoles, developers are going to code for us. Hmm... well, they don't have to, and if the studio is packed with Nvidia cards they aren't going to.

Also, any game that AMD does RT well in must be fake RT? No, not necessarily.
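A back-of-the-envelope look at that wide-vs-branchy trade-off (the branching factors and node sizes here are illustrative, not the actual tree formats AMD or Nvidia use):

```python
import math

def bvh_stats(num_triangles, branching_factor, bytes_per_child=32):
    # Levels to walk from root to a leaf, and rough data touched per visited node.
    depth = math.ceil(math.log(num_triangles, branching_factor))
    node_size = branching_factor * bytes_per_child
    return depth, node_size

for branching in (2, 4, 8):
    depth, node_size = bvh_stats(num_triangles=1_000_000, branching_factor=branching)
    print(f"{branching}-wide BVH: ~{depth} levels deep, ~{node_size} B per visited node")

# Wider nodes mean fewer levels per ray (and less dependence on how the tree
# was built), but each step touches more data -- hence the appetite for big
# caches. Narrower, deeper trees are lighter on cache but take more steps and
# benefit more from code tuned specifically for them.
```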
 