
AMD Navi 23 ‘NVIDIA Killer’ GPU Rumored to Support Hardware Ray Tracing, Coming Next Year

Status
Not open for further replies.
The problem with waiting for AMD is that they're going for the value play again. That means definitely lower ray tracing performance, DLSS or its equivalent missing or strictly inferior (let's not forget DLSS 1.0 itself was a disaster) - no tensor cores - and the rest of the software stack is also going to be worse or missing (RTX Voice, CUDA, etc.). Plus some less-covered but usual omissions, like how poor their video decoding is (important for AVR + PC, or HTPC users).

I just don't see them simply being cheaper and maybe having more vram as rivalling all those advantages. Hope they prove me wrong, they have 2 weeks to start leaking stuff.
They're in the consoles. Both of the main ones. That could be fairly important/crucial.

How many devs are going to spend extra effort pushing more RT on the PC platform? If AMD-powered consoles don't have the RT grunt, how many devs are going to work extra hard to add more just for the PC version?
 
Ah come on man. We are talking about a much better GPU than the current best for around half the price. Your post makes you seem like a bitter AMD fanboy ;)
And I never said anything to the contrary, but so far Ampere doesn't look like the stratospheric leap we suspected, and actually leaves itself (bar 3090) to be a very attainable, nay beatable, target for AMD.
 
And I never said anything to the contrary, but so far Ampere doesn't look like the stratospheric leap we suspected, and actually leaves itself (bar 3090) to be a very attainable, nay beatable, target for AMD.
Agreed. nV might have had a clean sweep were it not for the very stingy 6/8/10 GB VRAM :p

Matching 3080 is all they really have to do, and add some more VRAM and we're good.

If the consoles don't have very much RT power then perhaps we'll see another cycle on PC where only a handful of games actually use RTX, still.
 
It's about efficiency and throughput. People need to appreciate why SMT is a thing - it exists to maximise utilisation of the CPU core by replicating some of the front end. In RDNA2, texturing and RT operations happen on the same pipeline, but at different stages of it. So what AMD is doing is making sure you have as much pipeline utilisation as possible. This fits in with the DXR 1.1 specification which MS recently released (inline RT), which makes sense as that will be running on the Xbox Series X.
I'm not disputing any of that, but it will certainly be worse in terms of RT performance as a result. Specialised hardware is simply better, but it has the down-side of being more expensive and limited in application. No different here.

Also, looking at die area is not really useful this generation. Samsung 8nm is a derivative of their 10nm process node; TSMC 7nm is superior in density and power characteristics. AMD might not be as far behind in transistor count as we think. Remember, the Radeon VII and RX 5700 XT were early-generation 7nm products, so it wouldn't surprise me if RDNA2 ups density even more. It's quite clear the Xbox Series X 56 CU GPU is relatively densely packed (those 56 CUs take up under 200 mm² of the SoC, IIRC) and it still runs at a constant 1.8 GHz or thereabouts!
The reason I mentioned the die size was to back up the judgement that they're making a value play. Smaller dies allow for that sort of thing, and in this case they wouldn't have a choice, unlike with the 5700 XT. I don't care about TSMC vs Samsung node battles.

They're in the consoles. Both of the main ones. That could be fairly important/crucial.

How many devs are going to spend extra effort pushing more RT on the PC platform? If AMD-powered consoles don't have the RT grunt, how many devs are going to work extra hard to add more just for the PC version?
Them being in the consoles will certainly help, but only up to a point. In terms of RT, everything has been happening with Nvidia's help for the past 2 years, including all the RT engine developments. Devs don't really have to do much, just expose settings sliders for RT (ray counts, RT resolution, etc.) - all of which is trivial to implement - and then we can put our extra hardware power to use. Consoles will still target 4K 30 fps at best (with ray tracing), so it's not like they're going to go overboard with it.
 
Them being in the consoles will certainly help, but only up to a point. In terms of RT, everything has been happening with Nvidia's help for the past 2 years, including all the RT engine developments. Devs don't really have to do much, just expose settings sliders for RT (ray counts, RT resolution, etc.) - all of which is trivial to implement - and then we can put our extra hardware power to use. Consoles will still target 4K 30 fps at best (with ray tracing), so it's not like they're going to go overboard with it.
If it's so trivial and slider-based, then really you'll get a few extra rays being cast, a bit higher IQ and... that's about it :p

They probably won't rework scenes to add extra dynamic lighting that the console version doesn't have, for instance. Unless nV pays them to.

So your extra nVidia RT hardware is going to give you - not much? - in most console ports. A bit higher IQ.
 
I'm not disputing any of that, but it will certainly be worse in terms of RT performance as a result. Specialised hardware is simply better, but it has the down-side of being more expensive and limited in application. No different here.

MS literally wrote the DXR 1.1 specification around their consoles. Read about inline RT. Literally every multiplatform game will be coded around that form of RT.

You do also appreciate BOTH AMD and Nvidia use specialised hardware for parts of the RT calculations too; it's only part of the graphics pipeline which is reused with AMD.


The reason I mentioned the die size was to back up the judgement that they're making a value play. Smaller dies allow for that sort of thing, and in this case they wouldn't have a choice, unlike with the 5700 XT. I don't care about TSMC vs Samsung node battles.



Samsung 8nm is a 10nm derivative:
https://fuse.wikichip.org/news/1443/vlsi-2018-samsungs-8nm-8lpp-a-10nm-extension/

Hence the transistors are physically larger than what is possible on TSMC 7nm, which is probably at least a half node ahead. Samsung 7nm is close, but has horrible yields. They have been buying EUV machines left, right and centre to try and fix it.

Even if Nvidia has the bigger die, it's on a less dense process node, so AMD can actually target a similar number of transistors using a smaller die, if they decide to use higher-density libraries.

We will need to see the actual transistor counts first, to tell if Nvidia has substantially more than AMD. That is what we need to measure. This is what larger dies and node shrinks do - give you access to a larger transistor budget.

Even then, we need to see whether the quoted transistor counts are total or only active transistors. They are very different metrics, as not all transistors are used.
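To put the die-size vs density argument in concrete terms, here's a rough back-of-the-envelope in Python. The density and die-size figures below are illustrative assumptions, not confirmed specs for either GPU:

```python
# Back-of-the-envelope transistor budgets from die size x logic density.
# Density figures are ASSUMPTIONS for illustration: Samsung 8nm (8LPP) is a
# 10nm derivative, so we assume it is less dense than TSMC 7nm.
SAMSUNG_8NM_MTR_PER_MM2 = 45.0   # assumed million transistors per mm^2
TSMC_7NM_MTR_PER_MM2 = 65.0      # assumed, with high-density libraries

def transistor_budget(die_mm2: float, density_mtr_mm2: float) -> float:
    """Total transistor budget in billions for a given die size and density."""
    return die_mm2 * density_mtr_mm2 / 1000.0

# A hypothetical ~630 mm^2 die on Samsung 8nm vs ~500 mm^2 on TSMC 7nm:
big_8nm = transistor_budget(630, SAMSUNG_8NM_MTR_PER_MM2)   # = 28.35 billion
small_7nm = transistor_budget(500, TSMC_7NM_MTR_PER_MM2)    # = 32.5 billion
print(f"630 mm^2 @ 8nm: {big_8nm:.2f}B transistors")
print(f"500 mm^2 @ 7nm: {small_7nm:.2f}B transistors")
```

Under these assumed densities the smaller 7nm die actually ends up with the larger transistor budget, which is the point being made above.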
 
If it's so trivial and slider-based, then really you'll get a few extra rays being cast, a bit higher IQ and... that's about it :p

They probably won't rework scenes to add extra dynamic lighting that the console version doesn't have, for instance. Unless nV pays them to.

So your extra nVidia RT hardware is going to give you - not much? - in most console ports. A bit higher IQ.

They don't need to add anything extra or rework scenes so long as they have RT in the first place - that's the beauty of dynamic lighting. As for the results, sure, to a large degree it's diminishing returns, no different than 4K vs 1800p, or Ultra vs High settings in general, etc. It's up to the player to decide if they care or not.

How do you know that though? MS literally wrote the DXR 1.1 specification around their consoles. Literally every multiplatform game will be coded around that form of RT. You do also appreciate BOTH AMD and Nvidia use specialised hardware for parts of the RT calculations too; it's only part of the graphics pipeline which is reused with AMD.

I know that because we know what the RT cores do, and what the TMUs do for RDNA 2. In particular the TMUs either do texture work or BVH checking and ray intersections - the RT cores just do RT but better. This is an advantage in consoles because you as a developer have flexibility to fully utilise the hardware, while on the PC side you have to account for more scenarios which may or may not materialise. There's also a bunch of things Nvidia does with caching which is crucial, but I don't want to get bogged down in the details too much. In general it's fairly well accepted that Nvidia's RT cores will be faster for RT than AMD's TMUs.
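As a toy illustration of why a shared texture/RT unit can trail dedicated RT hardware, here's a simple Python sketch. All throughput numbers are made up for illustration and don't represent either architecture's real figures:

```python
# Toy model: a shared unit that must split its cycles between texture work
# and BVH/intersection work, vs a dedicated RT unit running in parallel
# with the texture units. Numbers are invented for illustration only.

def shared_unit_rt_throughput(total_ops_per_cycle: float, rt_fraction: float) -> float:
    """RT throughput when one unit time-slices between texturing and RT."""
    # Cycles spent on ray intersections are cycles not spent texturing.
    return total_ops_per_cycle * rt_fraction

def dedicated_rt_throughput(rt_ops_per_cycle: float) -> float:
    """RT throughput of a separate unit; texturing proceeds alongside it."""
    return rt_ops_per_cycle

# Shared unit: 4 ops/cycle, but only half its cycles go to RT in an RT-heavy frame.
shared = shared_unit_rt_throughput(4.0, 0.5)   # 2.0 RT ops/cycle
# Dedicated unit: 3 ops/cycle, all RT, while the TMUs keep texturing.
dedicated = dedicated_rt_throughput(3.0)       # 3.0 RT ops/cycle
print(shared, dedicated)
```

The trade-off cuts both ways: the shared design spends less die area, but in RT-heavy workloads every intersection test steals a cycle from texturing, which is the contention being described above.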

Have you also not considered that Nvidia's method will use more die area, whereas AMD can increase RT power just by scaling up normal shaders? The problem with having parallel hardware is syncing it all. Turing could be bottlenecked by its RT pipelines holding up its rasterisation pipelines.
Well, we've already put Turing through the test and it seems to come out looking good. Can't really say that about RDNA 2 yet. The scaling issue is tricky, but right now RDNA 2's best looks to be 80 CUs limited to GDDR6 and a 384-bit bus, which puts it somewhere around 1.7x a 5700 XT, or about 20% faster than a 2080 Ti. And that's just a best case on paper. I don't see how it grows bigger than 80 CUs, at least until the next merry-go-round. So either way it's going to be hard for it to tackle the 3080, let alone get past it.
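For what it's worth, the arithmetic behind that estimate can be written out explicitly. The scaling efficiency and the 2080 Ti vs 5700 XT ratio below are ballpark assumptions, not measured figures:

```python
# Rough scaling estimate behind the "~1.7x a 5700 XT" figure.
# All inputs are assumptions for illustration, not measured numbers.
CU_5700XT = 40
CU_BIG_NAVI = 80           # rumoured top configuration
SCALING_EFFICIENCY = 0.85  # assumed: doubling CUs rarely doubles performance

big_navi_vs_5700xt = (CU_BIG_NAVI / CU_5700XT) * SCALING_EFFICIENCY  # = 1.7x

# Assume a 2080 Ti is roughly 1.42x a 5700 XT at 4K (ballpark from reviews).
RTX2080TI_VS_5700XT = 1.42
big_navi_vs_2080ti = big_navi_vs_5700xt / RTX2080TI_VS_5700XT

print(f"Big Navi vs 5700 XT: {big_navi_vs_5700xt:.2f}x")
print(f"Big Navi vs 2080 Ti: +{(big_navi_vs_2080ti - 1) * 100:.0f}%")
```

Tweak the scaling efficiency and the 2080 Ti ratio to taste; the headline "~20% over a 2080 Ti" only holds if those two assumptions do.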
 
Yes and reading the forums...

And how many people (like me) don't post about AMD drivers because they're too busy playing games, since the drivers just work :o

For example: MS Flight Simulator 2020 - I just downloaded and played it on an AMD driver from May 2020. Zero issues with the crashes to desktop etc. that many people are experiencing.
 
People always say this about AMD but what do you actually mean? They work fine. Better than fine actually.
Agreed. It's turned into something of a 'fake news' story for me, as I've never had any problems personally and people never seem to qualify the comment with details.

Unless by drivers they're referring to the entire software stack and saying that they have weaker offerings than Nvidia in this area, but I don't think that's what they mean.
 
People always say this about AMD but what do you actually mean? They work fine. Better than fine actually.

A good 5 years ago AMD had a rep for very bad drivers, and it's something that has stuck around; even after winning driver of the year awards for a good few years, they still get a bad rep.
Sure, Navi had issues on release that are now fixed, but it wasn't every user having them either - it was a select few with certain configurations.

AMD's drivers are way above what Nvidia has in every department, but people will still disagree.
 
A good 5 years ago AMD had a rep for very bad drivers, and it's something that has stuck around; even after winning driver of the year awards for a good few years, they still get a bad rep.
Sure, Navi had issues on release that are now fixed, but it wasn't every user having them either - it was a select few with certain configurations.

AMD's drivers are way above what Nvidia has in every department, but people will still disagree.

I’ve only had AMD cards for the last 7 years. I don’t even think about drivers. I update when it says so, set my overclocks, and just game on it!
 
Agreed. nV might have had a clean sweep were it not for the very stingy 6/8/10 GB VRAM :p

Matching 3080 is all they really have to do, and add some more VRAM and we're good.

If the consoles don't have very much RT power then perhaps we'll see another cycle on PC where only a handful of games actually use RTX, still.

No game I play uses RTX or DLSS, so all Nvidia's Jensen did was address someone else.
RTX is still a gimmick.
This kind of change with RT takes a lot of time, because the big market is the sub-$300 segment, not $500.
It's why consoles can change this down the line, thanks to AMD bringing Big Navi there.

Look at that Scott Herkelman tweet. That's not an accident.

Yeah, people bought the RT hype from Jensen - it was one of the hardest-sell presentations I've seen.

Big Dog Navi coming soon
 
No game I play uses RTX or DLSS, so all Nvidia's Jensen did was address someone else.
RTX is still a gimmick.
This kind of change with RT takes a lot of time, because the big market is the sub-$300 segment, not $500.
It's why consoles can change this down the line, thanks to AMD bringing Big Navi there.



Yeah, people bought the RT hype from Jensen - it was one of the hardest-sell presentations I've seen.

Big Dog Navi coming soon


I didn't really think much of the Nvidia event. There were some interesting small tidbits and the performance of the 3080 looks good. However, so many things made me cringe: the silly "streamers" playing at 8K where all you see is a fake or ignorant reaction, the 3090 coming out of the oven, the futuristic talk. So many cringe moments, and then all the overselling just made me want to turn it off. I've been a salesperson myself, and there is good marketing and there are the tryhards. Jensen managed to squeeze into the latter category with me.
 
And I never said anything to the contrary, but so far Ampere doesn't look like the stratospheric leap we suspected, and actually leaves itself (bar 3090) to be a very attainable, nay beatable, target for AMD.

Who was expecting a stratospheric leap? I think, considering they are stuck on an inferior 8nm fab process, they have done well.

I mean, let's see if AMD can take advantage of being on the better manufacturing process and actually release an Nvidia killer, taking out the 3090 by a decent margin so that even a Titan card won't be able to beat it.


Look at that Scott Herkelman tweet. That's not an accident.

Yeah, I noticed that too. My expectation is they will easily be able to bring something better than a 3080, but may fall short of beating the 3090. I hope I am wrong and they take the performance crown however.

What makes me laugh is that, just like every time a new GPU is released, people cannot see past the current top card. People kept making it seem like 2080 Ti performance was the holy grail and AMD would not be able to beat it, which, as I pointed out many times, makes no sense.


A good 5 years ago amd had a rep that the drivers was very bad and its something that has stuck around even after winning a good few years of the driver of the year award they still get bad rep.
Sure navi had issues on release that is now fixed but it wasn't every user having them either it was a select few with certain configuration.

Amd drivers are way above what Nvidia has in every department but people will still disagree.

5 years? Try 15! This has been going on since the ATI days, mate. Lol

Until they dropped the ball last year, I was out there laughing at and arguing with people who would say AMD's drivers are rubbish, as not only was that not true, they were actually better than Nvidia's. They went and dropped the ball though, and now it is fresh in people's memories again. Shame.


Can't wait for this AMD Navi 23 'NVIDIA Killer' card.
Why do you hate AMD so much? :p

I didn't really think much of the Nvidia event. There were some interesting small tidbits and the performance of the 3080 looks good. However, so many things made me cringe: the silly "streamers" playing at 8K where all you see is a fake or ignorant reaction, the 3090 coming out of the oven, the futuristic talk. So many cringe moments, and then all the overselling just made me want to turn it off. I've been a salesperson myself, and there is good marketing and there are the tryhards. Jensen managed to squeeze into the latter category with me.
I thought their marketing overall was very good, considering it was from his kitchen. Lol.

Let’s see if AMD’s new marketing team do as well.
 