AMD Navi 23 ‘NVIDIA Killer’ GPU Rumored to Support Hardware Ray Tracing, Coming Next Year

Status
Not open for further replies.
Associate
Joined
25 May 2012
Posts
161
Big Navi will be fast, faster than a 2080TI, about 40% faster.

But Ampere will be even faster. Nvidia know RDNA2 is good, very good. But Nvidia never lose... they will pull out all the stops and just go mahoosive to beat AMD.


I think he's right. When you look at the performance of the new Xbox at 1.8GHz with 52 CUs, it's between a 2080 Super and a 2080 Ti, so we know the performance is there even on a relatively small, cut-down GPU. And when you look at the PS5 clocking its 36 CU GPU to over 2.2GHz, we know the thing clocks... there is a benchmark in the wild where an unknown AMD engineering sample crushed a very highly clocked 2080 Ti by 30%.

But Nvidia will stop at nothing to beat AMD, and with Ampere they will.

It doesn't matter; the important thing is that AMD will bring truly great GPUs back to the fight.
NVIDIA have been milking us for the past decade or so. I am sure they have enough cash in reserve to pull out a beast that AMD cannot beat. However, if AMD brings affordable, reliable, high-performance cards, they may win the majority of customers, rather than just the consumers who always chase the latest and greatest no matter the cost. I have faith in AMD, but I am lacking the patience, since my GPU upgrade is due next gen. Hopefully they are on to another Ryzen, but in the GPU market.
 
Soldato
Joined
8 Jun 2018
Posts
2,827
All good, just one thing: 5700, not 5700 XT. The comparison they used was 5700 vs RTX 2070; they both have 2304 shaders, while the 5700 XT has 2560 shaders. :)
Ah yes, thanks. Corrected. It just doesn't look right unless you add the XT sometimes. ;)

So I make a point, you counter the point, I counter your counter, and now suddenly I don't make sense? One last time: RTX is Turing's enterprise and research technology forced onto the gaming sector, used as nothing more than an excuse to raise prices astronomically and to nickel-and-dime the user with lacklustre specifications. Ampere is doubling down on the fallacy because Nvidia have painted themselves into a corner.

If you disagree then fine, but I fail to see how you do not understand.
You are correct. I think he's debating semantics regarding use-case scenarios. DLSS is not limited to just RTX. However, that is neither here nor there.

Ultimately Nvidia messed up. Those tensor cores etc. should have been on a separate daughter card using HBM3 for the bandwidth, in my opinion, and then allowed compatibility with AMD cards. This would have solidified Nvidia's IP for their version of ray tracing. But we know they will never do that. Let us not forget Ageia, which is where all this started, with their PPU cards abruptly becoming "unsupported". The PPU was faster than the cards they moved the IP onto. I bet those PPU cards would still run circles around today's PhysX effects.
 
Last edited:
Man of Honour
Joined
13 Oct 2006
Posts
91,051
The PPU was faster than the cards they moved the IP onto. I bet those PPU cards would still run circles around today's PhysX effects.

Physics in games has barely moved on in ages :( but anything GPU-wise beyond roughly a GTX 260 is faster for game rendering + physics than using a GPU + the original PPU.
 
Soldato
Joined
26 Sep 2010
Posts
7,154
Location
Stoke-on-Trent
There's a fine line between pushing technology for what can be done in the future and foisting unnecessary silicon onto the market for exorbitant money. I know you can't just keep smashing CUDA cores onto a die forever; sooner or later that will reach its limit. But right here and right now, for what Turing and Ampere's kit is actually used for in gaming, it is just a waste of die space, and we're expected to pay a premium for it. For the limited use case (and limited performance to boot) that Turing's repurposed research tech brings, I'd rather just pay for more CUDA cores; maybe then I could get the mythical 4K 120Hz that's many years overdue without ******* about upscaling potato resolutions using AI cores.

But then that's Nvidia's business model: make huge dies on cheap processes to maximise profits. That's only going to get you so far.
 
Soldato
Joined
25 Nov 2011
Posts
20,639
Location
The KOP
I can see the arguments already... "But but DLSS, you should be running it at 1080p and upscaling to 1440p, because bar charts!!!!!"

DLSS requires the game developer to put it in the game, and it's not as if AMD don't have their own upscaling technologies to boost performance. They do, and no one is going to complain the bar charts are fake because X or Y reviewer didn't use them.

Just as DLSS requires developers to make use of it, so will AMD's approach when using DirectML from Microsoft.

The question a developer will ask themselves is: do I use DLSS for one vendor, or do I use DirectML and cover everything? :)
 

TNA

Caporegime
Joined
13 Mar 2008
Posts
27,508
Location
Greater London
Didn’t Nvidia say DLSS 2.0 does not need any work from the developer? Could be wrong.

We will see how it does, but I get the feeling that unless they are talking rubbish like they were about the original DLSS, DLSS 2.0 will do well. I am sure they will sponsor big games and have it in them if worst comes to worst.

Will be interesting to see how it goes :)
 

TNA

Caporegime
Joined
13 Mar 2008
Posts
27,508
Location
Greater London
Apparently DLSS 3.0 with Ampere works with anything that's running TAA, but for best results there's still a game driver required.
Yeah. Let’s see how that turns out. It all sounds promising, but as usual I will believe it when I see it.

I remember being very impressed and hyped after seeing the Turing reveal, but then slowly it became laughable. I do think they will do a lot better this time around though. These will be the true first-gen RTX cards as far as I am concerned; Turing was paid beta testing :p
 
Associate
Joined
25 May 2012
Posts
161
Apparently DLSS 3.0 with Ampere works with anything that's running TAA, but for best results there's still a game driver required.
If they make game drivers for modern demanding titles, Nvidia could be onto a winner. I doubt they will be able to throw out game drivers for every single title.
 
Soldato
Joined
14 Aug 2009
Posts
2,755
None of the AI in games is sophisticated enough that it can't be done on the CPU.

AMD's Froblins demo, back in the HD 4xxx era, had thousands of AI agents accelerated on a card (about 1 TFLOP back in the day) that was also doing the 3D graphics. Those NPCs were given a task and were able to perform it while avoiding dynamically generated obstacles and dangers on the way. In today's terms it could probably be done with tens of thousands of NPCs. Just imagine what GTA, Witcher, Cyberpunk, Watch Dogs or Skyrim could have been... :)
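
(For anyone curious what that kind of data-parallel crowd AI looks like, here's a toy sketch. It is not the actual Froblins algorithm, just the same flavour: every agent is steered toward a goal and pushed away from a hazard in bulk array operations, which is exactly the sort of workload that maps well onto a GPU.)

```python
import numpy as np

# Toy data-parallel crowd steering: thousands of agents updated at once.
rng = np.random.default_rng(0)
n_agents = 10_000
pos    = rng.uniform(0, 100, size=(n_agents, 2))   # agent positions
goal   = np.array([100.0, 100.0])                   # shared destination
hazard = np.array([50.0, 50.0])                     # something to avoid

for _ in range(100):                                 # 100 simulation ticks
    to_goal = goal - pos
    to_goal /= np.linalg.norm(to_goal, axis=1, keepdims=True) + 1e-9
    from_hazard = pos - hazard
    dist = np.linalg.norm(from_hazard, axis=1, keepdims=True) + 1e-9
    # Repulsion only kicks in within 10 units of the hazard.
    repulse = from_hazard / dist * np.clip(10.0 - dist, 0, None)
    pos += 0.5 * to_goal + 0.1 * repulse             # blend seeking and avoidance
```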
 
Soldato
Joined
8 Jun 2018
Posts
2,827
I think some are forgetting a very serious aspect of diminishing returns. Nvidia lost their efficiency per mm² a long time ago. At that point they had already reached a peak in how large a die can get before the cost and price become prohibitive for consumers. They want to offset that by marketing RTX. Based on my observation, the smaller Nvidia's GPUs are, the less efficient they become, thus needing more transistors to maintain a competitive edge. No one really questions this method (in reviews) because "faster is faster".

We are now looking at well over 250 watt GPUs, yet no one blinks an eye, unless it's AMD of course :p. No one takes note of the actual cost to manufacture. In a nutshell, more working chips per wafer = lower cost, and lower cost trickles down to consumer pricing. Nvidia can't do that, because their die sizes and stock-price expectations mandate a higher price point.

All AMD needs to do is get close in performance while using a smaller die, which = more dies per wafer, which = lower cost. This is how AMD is beating Nvidia: not on performance but in manufacturing. They are R&D'ing GPU dies that get close to what Nvidia offers in a much larger die, yet save more per wafer doing so. They could therefore pay more per wafer and have it offset by the higher yields. Pretty clever if you ask me.
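
To put rough numbers on the dies-per-wafer point, here's a quick back-of-the-envelope sketch. The wafer prices and defect density are made-up placeholders (real foundry pricing isn't public); it's only there to show the direction of the effect.

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Rough dies-per-wafer estimate (ignores scribe lines and edge exclusion)."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(die_area_mm2, wafer_cost, defect_density=0.1):
    """Poisson yield model: smaller dies mean more candidates AND better yield."""
    candidates = dies_per_wafer(die_area_mm2)
    yield_rate = math.exp(-defect_density * die_area_mm2 / 100)  # defects per cm^2
    return wafer_cost / (candidates * yield_rate)

# Placeholder wafer prices -- the real figures are not public.
print(cost_per_good_die(251, wafer_cost=9000))   # small 7nm die (5700 XT-sized)
print(cost_per_good_die(754, wafer_cost=4000))   # big 12nm die (2080 Ti-sized)
```

Even with a much more expensive 7nm wafer, the small die comes out far cheaper per good chip, which is the whole point being made above.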

So will Big Navi be the chip to beat the 3080 Ti? I believe it will be competitive, but at a much lower production cost and asking price. Remember, this is Big Navi; we still have the 'Nvidia Killer' yet to be announced and released, as well as seeing how AMD plays their cards with those console games.
 
Associate
Joined
30 Jan 2016
Posts
75
I think some are forgetting a very serious aspect of diminishing returns. Nvidia lost their efficiency per mm² a long time ago. At that point they had already reached a peak in how large a die can get before the cost and price become prohibitive for consumers. They want to offset that by marketing RTX. Based on my observation, the smaller Nvidia's GPUs are, the less efficient they become, thus needing more transistors to maintain a competitive edge. No one really questions this method (in reviews) because "faster is faster".

You what? A 2080 Ti uses 250 watts on a 754mm² 12nm chip, compared to the 5700 XT which uses 225 watts on a 251mm² 7nm chip. How did you come to this conclusion about efficiency per mm²?
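
Just to make the per-mm² comparison explicit, using the figures above (board power divided by die area):

```python
# Board power divided by die area, using the figures quoted above.
cards = {
    "RTX 2080 Ti (TU102, 12nm)": (250, 754),   # watts, mm^2
    "RX 5700 XT (Navi 10, 7nm)": (225, 251),
}
for name, (watts, area) in cards.items():
    print(f"{name}: {watts / area:.2f} W per mm^2")
# -> roughly 0.33 W/mm^2 for the 2080 Ti vs 0.90 W/mm^2 for the 5700 XT
```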
 
Soldato
Joined
19 Dec 2010
Posts
12,026
I think some are forgetting a very serious aspect of diminishing returns. Nvidia lost their efficiency per mm² a long time ago. At that point they had already reached a peak in how large a die can get before the cost and price become prohibitive for consumers. They want to offset that by marketing RTX. Based on my observation, the smaller Nvidia's GPUs are, the less efficient they become, thus needing more transistors to maintain a competitive edge. No one really questions this method (in reviews) because "faster is faster".

What are you talking about?

Nvidia lost their efficiency per mm² a long time ago? They need to pack on more transistors to maintain a competitive edge? Let's look at those claims by comparing some like-for-like GPUs.

28nm: Fiji and 980 Ti.

Fiji: 596mm² and 8.9 billion transistors.
980 Ti: 601mm² and 8 billion transistors.

So Nvidia has the edge here, and not only the edge in efficiency per mm², but also better performance and much lower power consumption.

14/16nm: Pascal and Vega.

Vega 64: 486mm² and 12.5 billion transistors.
GTX 1080: 314mm² and 7 billion transistors.

So Nvidia is way more efficient per mm². Performance is about the same, with Vega getting a little better towards the end. I am also going to say power consumption is roughly the same too, as you can undervolt the Vega cards.
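
Putting those same numbers side by side as ratios (roughly-equal-performance pairs, using only the figures listed above):

```python
# Ratios for the roughly-equal-performance pairs listed above.
pairs = [
    ("Fiji (Fury X)", 596, 8.9,  "GM200 (980 Ti)",   601, 8.0),
    ("Vega 64",       486, 12.5, "GP104 (GTX 1080)", 314, 7.0),
]
for amd, amd_mm2, amd_btr, nv, nv_mm2, nv_btr in pairs:
    print(f"{amd} vs {nv}: {amd_mm2 / nv_mm2:.2f}x the area, "
          f"{amd_btr / nv_btr:.2f}x the transistors")
# -> Fiji:    0.99x the area, 1.11x the transistors of the 980 Ti
# -> Vega 64: 1.55x the area, 1.79x the transistors of the GTX 1080
```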

The current situation is that Nvidia are still on a 12nm process and AMD are on a 7nm process. Also, AMD's current GPUs lack support for any of the DirectX 12 Ultimate features. So you won't be able to make any meaningful comparisons about efficiency until both companies are on the same process node and pushing the same features (DX12 Ultimate).

We are now looking at well over 250 watt GPUs, yet no one blinks an eye, unless it's AMD of course :p.

I think most people on this forum don't mind the power consumption as long as the performance is there, and that's where AMD have been failing: their GPUs use a lot more power for less performance than their Nvidia counterparts. Or to put it another way, if AMD came out with a GPU that smashed the 3080 Ti, nobody here would care that it used 300 watts.

In a nutshell, more working chips per wafer = lower cost, and lower cost trickles down to consumer pricing. Nvidia can't do that, because their die sizes and stock-price expectations mandate a higher price point.

All AMD needs to do is get close in performance while using a smaller die, which = more dies per wafer, which = lower cost. This is how AMD is beating Nvidia: not on performance but in manufacturing. They are R&D'ing GPU dies that get close to what Nvidia offers in a much larger die, yet save more per wafer doing so. They could therefore pay more per wafer and have it offset by the higher yields. Pretty clever if you ask me.

So will Big Navi be the chip to beat the 3080 Ti? I believe it will be competitive, but at a much lower production cost and asking price. Remember, this is Big Navi; we still have the 'Nvidia Killer' yet to be announced and released, as well as seeing how AMD plays their cards with those console games.

This whole section of your post is inaccurate. What you are saying about more dies per wafer being cheaper etc. is correct, but the basis for it is all wrong. You are looking at Navi cards on a 7nm process and comparing them to Turing on a 12nm process. Do you really think that AMD's Big Navi will be competitive with the 3080 Ti and still be the same small die size as the 5700 XT? Not a hope.

We can even work it out. The Xbox Series X has an RDNA 2 GPU that's supposedly about the same performance as the 2080 Super (545mm²). The GPU in the Xbox is 360mm². But again, that's 12nm vs 7nm. So if the 3080 Ti is 30% faster than the 2080 Ti, then Big Navi would need to be over 500mm² to compete, and that's just in rasterised performance. They will also need to be competitive in ray-traced performance.
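
A rough back-of-the-envelope version of that estimate, with my own assumptions spelled out (performance scaling roughly linearly with die area at similar clocks, and a 2080 Ti being about 12% faster than a 2080 Super; neither figure comes from the post above):

```python
# Back-of-the-envelope version of the estimate above. Assumptions, not facts:
# performance scales roughly linearly with die area at similar clocks, and a
# 2080 Ti is ~12% faster than a 2080 Super.
xbox_mm2       = 360    # Series X figure quoted above, treated as the GPU area
ti_over_super  = 1.12   # assumed 2080 Ti uplift over a 2080 Super
ampere_over_ti = 1.30   # the rumoured 3080 Ti uplift over the 2080 Ti

target = ti_over_super * ampere_over_ti            # ~1.46x a 2080 Super
print(f"Estimated Big Navi die: ~{xbox_mm2 * target:.0f} mm^2")  # a little over 500 mm^2
```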

And with all this extra size and these extra features will come extra expense. Consider that the 5700 XT had a small die and no support for DX12 Ultimate, yet cost $399 on release. And, as you say, large dies cost more, so it could easily have an $800-plus price tag.
 