
AMD Navi 23 ‘NVIDIA Killer’ GPU Rumored to Support Hardware Ray Tracing, Coming Next Year

Status
Not open for further replies.
Do you also forget that AMD is already on 7nm? There is no node shrink this time around.
So TSMC's 5nm isn't going into production next month then? I guess I missed that bombshell; Apple will be most unhappy. Zen 4 (possibly a Zen 3+) and RDNA 3 are on 5nm next year (ish). So yes, there is another die shrink available to AMD. You also seem to gloss over the fact that AMD are releasing RDNA 2, which, with all the improvements, is as much of a new architecture as Ampere is to Turing.

So both companies are dropping new arches on new nodes at the same time. Let the pure numbers talk, the playing field is level.

As for Nvidia, arguably they're the ones without a shrink available. They tried to play TSMC and failed, so most of Ampere is pegged for Samsung 8nm, which is a smidgen behind TSMC's standard 7nm, let alone the EUV version, and with AMD taking up most of TSMC's 7nm capacity, only Nvidia's top-end halo cards (3090/Titan) will be produced on 7nm. So they can't move to Samsung 7nm because it's woeful and delayed, TSMC's 7nm production in all forms is full, and Nvidia won't get a look-in on TSMC 5nm for years.

I also have to chuckle at your waxing lyrical about Nvidia having dedicated ray tracing hardware like it's somehow any good. It's not. We've had two years of seeing just how woeful Turing's beta test actually is. So I would fully expect the RDNA 2 consoles to outperform the 2080 Ti in ray tracing, because the 2080 Ti is just rubbish in the first place. Consoles will be nowhere near Ampere's ray tracing if the latest information holds true.
 

What are you talking about? This is another example of you jumping into a conversation to cause an argument with me, to try and put me down without actually understanding the conversation at all.

AMD are on 7nm now; Nvidia are on 12nm now. AMD's RDNA 2 will be on a slightly improved 7nm node. It's not a die shrink. Don't you get that? Most of the performance increase is going to come from the new architecture. Those are the facts.

Now Nvidia: their cards are on 12nm. We have no solid info yet about what manufacturing process they are going to be using. The only solid fact we have is that Nvidia used TSMC's 7nm for their GA100 card. They might be using Samsung 8nm or TSMC 7nm; it's all rumours so far. More than likely the GPU production will be split between the two, and that's the reason for the different rumours. In either case, Nvidia's Ampere cards will be on a die shrink.

Are you still following? Quick summary.

AMD - No die shrink but new Architecture.
Nvidia- Die Shrink + New Architecture.

A die shrink always brings better performance than a new architecture. Just look through your GPU history.

Your second paragraph, I shouldn't really comment on. It sounds like you're just making stuff up again, to be honest. Nvidia have pre-booked TSMC 5nm for Hopper; that was reported a couple of months ago.

Waxing lyrical about Nvidia's dedicated hardware solution for ray tracing? Can't say that I have. Just talking facts. Hardware dedicated to a certain function in most cases performs better than a hybrid solution. I don't think you understand how much processing power ray tracing needs, because you guys keep saying it's terrible. I agree the price was terrible, but how can you say their solution was woeful? Ray tracing is extremely demanding; it's only now that GPUs can even attempt it. AMD's solution looks interesting from the patents and will probably be the best solution down the line, when GPUs are powerful enough to handle ray tracing and rasterisation using the same resources.

How powerful do you think the new consoles are going to be? You think they are going to be more powerful than the 2080 Ti? And I don't mean in ray-traced performance, just ordinary raster performance.
 
Do you also forget that AMD is already on 7nm? There is no node shrink this time around.

So TSMC's 5nm isn't going into production next month then? I guess I missed that bombshell; Apple will be most unhappy. Zen 4 (possibly a Zen 3+) and RDNA 3 are on 5nm next year (ish). So yes, there is another die shrink available to AMD.

So both companies are dropping new arches on new nodes at the same time
As melmac is clearly referring to Big Navi, are you saying that Big Navi is on 5nm? If so, any proof, or is it speculation?
 
I also have to chuckle at your waxing lyrical about Nvidia having dedicated ray tracing hardware like it's somehow any good. It's not. We've had two years of seeing just how woeful Turing's beta test actually is. So I would fully expect the RDNA 2 consoles to outperform the 2080 Ti in ray tracing, because the 2080 Ti is just rubbish in the first place. Consoles will be nowhere near Ampere's ray tracing if the latest information holds true.

For all their faults, Turing's ray tracing capabilities are a generational leap; calling it "just rubbish" is itself rubbish, even if things are lacking gaming-wise at the moment. Doing it on the shaders is at least 6x slower like for like. I'll be surprised if the consoles outperform the 2080 Ti's RT performance unless AMD have some additional tricks up their sleeve; the approach is basically alleviating some of the reasons why shaders are so poor for it, rather than going for the best possible solution.

It is going to be funny how quickly some people change their tune once AMD has a decent RT solution and games start making proper use of such features rather than just token use for specific effects.
 
AMD - No die shrink but new Architecture.
Nvidia- Die Shrink + New Architecture.

A die shrink always brings better performance than a new architecture. Just look through your GPU history.

It is less about the shrink and more about the number of transistors. Usually you get 400+ mm^2 GPUs, and generally the only way to increase performance from there is a node shrink so you can add more transistors. This time around AMD has no 400+ mm^2 GPU, so all they really need to do to get a performance uplift similar to a node shrink is release a 400+ mm^2 GPU.

I think the main thing stopping AMD from releasing such a card with RDNA 1 is that they would need to reduce clock speeds to get it within a 300W power envelope. With RDNA 2, provided the +50% perf/watt claim is accurate, they can fit a doubling of Navi 10 into a 300W envelope at similar clock speeds, which should lead to a significant performance uplift, provided workloads can scale across that many CUs.
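The back-of-the-envelope arithmetic behind that claim can be sketched out. Note the ~225W Navi 10 board power and the flat +50% perf/watt figure are assumptions for illustration, not confirmed specs:

```python
# Rough scaling sketch: doubling Navi 10 under RDNA 2's claimed +50% perf/watt.
# All figures are illustrative assumptions, not confirmed specifications.

navi10_power_w = 225.0   # assumed RX 5700 XT (Navi 10) board power
navi10_perf = 1.0        # normalised performance baseline

# Double the CU count at similar clocks: roughly 2x performance and 2x power.
doubled_perf = 2.0 * navi10_perf
doubled_power_w = 2.0 * navi10_power_w   # ~450 W if built on RDNA 1

# Apply the claimed +50% perf/watt: same performance for 1/1.5 of the power.
rdna2_power_w = doubled_power_w / 1.5    # ~300 W

print(f"Doubled Navi 10 on RDNA 2: ~{rdna2_power_w:.0f} W "
      f"for {doubled_perf:.1f}x Navi 10 performance")
```

Under those assumptions the doubled part lands right on a 300W envelope, which is why the +50% perf/watt number matters so much to this argument.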
 
Isn't the sweet spot most are after:

1440p
144 FPS+
Circa £600
and in my case 'Quiet!'
4K60 is the sweet spot for me as a primarily RPG and sim gamer, but then I've never experienced high-refresh FPS gaming etc.

4K60, ultra details, AAA titles (MFS 2020), and I'm over the line.
 
I'll be shocked if consoles don't outperform 2080 Ti RT performance.

How and why? You do realise that you are comparing an APU to a full desktop GPU.

I just don't see it happening, and released trailers already have shown that devs need to use RT on console wisely.

My point of view on all this: yes, the next generation of consoles will be a nice upgrade over the current ones, but they will fall short of what PC GPUs already offer.

RDNA 2 on the consoles will not be the same as RDNA 2 on the desktop; the desktop parts will be much higher-clocked GPUs with far greater performance in both RT and normal gaming.
 
Um, no, unless you want to bring up that old chestnut of comparing the top-end, over-engineered AIB models to the bargain-basement, bottom-tier cards with barely adequate coolers. Yes, the Sapphire Nitro costs more than the KFA2 basic, but the AMD reference costs less than both.

Still inhaling?

You can't compare a crappy, stock-clocked reference blower card to a non-reference, custom OC'd card... unless we're doing that now?

If so, I apologise :p


The PS5 should beat the 2080 Ti in RT? From what we've seen, its RT is a blurry mess, as it's only doing it at 1080p. :p

And there's everyone slamming DLSS for its pi$$-poor, detail-losing blurry quality :D
 
The Xbox Series X and PS5 RT won't even get close to a 2080 Ti. We already have a comparison of RT performance, in case you guys forgot: the Series X ran Minecraft RT at only 1080p 30fps (and the XSX GPU is stronger than the PS5's), which is way worse than a 2080 Ti. Even assuming non-final code, optimizations won't make it magically gain that much more. It's most likely around 2070 performance, tops.

Also :
GT 7 - RT used only for reflections; it's not even clear if it's in gameplay too or just the garage/replays, and RT was running at 1080p while the game ran at 4K/60.
Ratchet and Clank - only reflections, and only on static objects; dynamic ones weren't being reflected.
Also no reflection roughness variation (the performance-intensive kind of reflection), only mirror reflections, which are the cheapest to render.

No RT GI or AO, the more expensive effects (except in Pragmata, which was 1080p only and noisy as hell).

Clearly weak RT capabilities in the next-gen consoles; 2070 might even be a stretch.
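To put that resolution/framerate gap in rough numbers, here's a hypothetical pixel-throughput comparison. The Series X figure is the reported Minecraft RT demo; the 2080 Ti rate is an assumed figure for illustration only, not a benchmark result:

```python
# Hypothetical ray-traced pixel throughput comparison.
# The 2080 Ti rate below is an assumption for illustration, not a measured result.

def pixels_per_second(width: int, height: int, fps: int) -> int:
    """Raw pixel throughput at a given resolution and framerate."""
    return width * height * fps

xsx_demo = pixels_per_second(1920, 1080, 30)            # Series X Minecraft RT demo, as reported
rtx_2080ti_assumed = pixels_per_second(1920, 1080, 60)  # assumed 2080 Ti rate, same scene

print(f"Assumed 2080 Ti advantage: {rtx_2080ti_assumed / xsx_demo:.1f}x")  # 2.0x under these assumptions
```

Of course raw pixel counts ignore denoising quality, scene complexity, and non-final console code, so this is only a rough way to frame the gap being argued about.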
 

And this is where Nvidia will focus their marketing to justify their high pricing.
 
I think Nvidia's GPUs will have better RT performance; I don't think they will be better in rasterized performance.

If this is the case, it will be interesting to see how much RT is actually worth to the masses. The 2080 Ti's performance should have come in at the 1080 Ti's price point, but Nvidia jacked up the price "because ray tracing."
If AMD can match a given Nvidia SKU on rasterized performance and cut them off at the knees on price, I will be happy to buy the AMD card (for a decent upgrade over my 1080 Ti).

In fact, if AMD had offered 2080 Ti performance at $650 *but no ray tracing* two years ago, Nvidia could have bragged about ray tracing all they wanted, but they would not have sold many $1200 2080 Tis.
 