AMD Navi 23 ‘NVIDIA Killer’ GPU Rumored to Support Hardware Ray Tracing, Coming Next Year

And even if you go conservative with what the actual performance increase would be, a 40 CU card at 225W would edge out a 2080 Ti, and therefore would be right there with the 3070. So yes, it is entirely possible AMD can take on the 3070 with only 40 CUs.

And with the 3070 likely to be at 2080 Ti performance, I cannot wait to see a price war! Although it's more likely to be an availability war. :P
 
Going by shader counts, Big Navi may end up competing with the 3070. AMD cards aren't exactly known for matching Nvidia's shader for shader.
Hopefully AMD have something that beats the 3080 in everything. That means price wars!!!

RDNA1 does compete with Nvidia shader for shader: the 5700 XT with 2560 shaders at 1.9 GHz is about equal to a 2070 Super with 2560 shaders at 1.9 GHz.

RDNA2 is literally twice that with 12% higher clocks: +124% raw GPU throughput without any IPC gain. And if the 3070 is ~2080 Ti, that target is only ~35% faster than a 2070 Super / 5700 XT.
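
As a quick sanity check on that arithmetic, a back-of-envelope sketch assuming performance scales linearly with shaders × clock (real games won't follow this cleanly, and the doubled shader count is just the rumour):

```python
# Back-of-envelope: perf ~ shaders * clock (rumoured figures; games won't scale this cleanly)
rdna1_shaders, rdna1_clock = 2560, 1.9    # 5700 XT
rdna2_shaders = 2 * rdna1_shaders         # "literally twice that" (rumour)
rdna2_clock = rdna1_clock * 1.12          # +12% clocks (rumour)

uplift = (rdna2_shaders * rdna2_clock) / (rdna1_shaders * rdna1_clock)
print(f"Raw throughput uplift: {uplift:.2f}x (+{(uplift - 1) * 100:.0f}%)")  # 2.24x (+124%)
```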
 
Which makes it even more conceivable that 40 RDNA 2 CUs can take on the 3070.

You need a ~35% increase over the 5700 XT to get there (2080 Ti numbers), and even 7nm with EUV is a 10-15% frequency uplift at best. Certainly not possible at 225W with conservative numbers; with max numbers, maybe. A frequency boost alone is unlikely to bring a 10-15% in-game performance increase either.
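
To put numbers on that objection, a sketch assuming a 40 CU part (the same CU count as the 5700 XT), so the entire gain has to come from clocks and IPC:

```python
# If the part stays at 40 CUs (same as the 5700 XT), clocks and IPC must cover the whole gap
needed = 1.35        # ~35% over the 5700 XT to reach 2080 Ti territory
clock_uplift = 1.15  # best case 10-15% from 7nm with EUV
ipc_needed = needed / clock_uplift
print(f"IPC gain still required: +{(ipc_needed - 1) * 100:.0f}%")  # ~ +17%
```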
 
That's even more mysterious than the performance.

It's being "announced" on Oct 28.
Release is probably around mid-Nov.
When will you be able to buy and own it?

Your guess is as good as mine.
Could be a similar timeline to the CPU launch, which is four weeks after the reveal?
Which actually ties in with an earlier rumour of "late November".
 
Yes, it could be; that was my first thought.

Nov 28 for purchase date.

We could be well into December before we even get a sniff at the new card or anything else for that matter.

Nov 5: Ryzen
Nov 10: Xbox
Nov 12: PlayStation
Nov 28 (?): Big Navi
 
You need a ~35% increase over the 5700 XT to get there, and even 7nm with EUV is a 10-15% frequency uplift at best. Certainly not possible at 225W with conservative numbers; with max numbers, maybe. A frequency boost alone is unlikely to bring a 10-15% in-game performance increase either.

The Xbox Series X is roughly equal to a 2080 Super: 3328 shaders at 1.825 GHz and about 150 W.

My guess is a 40 CU RDNA2 part at 2.1 GHz is around the same performance and around the same power consumption.

I doubt the 3070 is any faster than that. Nvidia said the 3080 would be 2x as fast as the 2080, and it isn't, not even close.
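
For what it's worth, the same naive shaders × clock scaling roughly supports that guess, ignoring memory bandwidth and platform differences entirely (the 2.1 GHz clock is the rumoured figure):

```python
# Naive shaders * clock comparison (ignores bandwidth and platform differences)
series_x = 3328 * 1.825    # Series X: 52 CUs at 1.825 GHz
navi_40cu = 40 * 64 * 2.1  # 40 CUs * 64 shaders/CU at a rumoured 2.1 GHz
print(f"40 CU part vs Series X: {navi_40cu / series_x:.2f}x")  # ~0.89x
```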
 
The Xbox Series X is roughly equal to a 2080 Super: 3328 shaders at 1.825 GHz and about 150 W.

Absolute drivel.

Last time I checked, the Xbox Series X can't run anything on Windows, so it can't be compared to a 2080 Super.

Show me Port Royal run on an Xbox Series X. Yeah, end of chat.

These are just blind estimates of performance across different platforms and differently coded games with different levels of optimisation; they don't mean jack.
 
None of the leaks make sense.

A 56 CU @ 2 GHz part with 64 ROPs and a 256-bit bus would be at most 300mm² and would beat a 2080 Ti quite comfortably. If they built it at Renoir density it would be ~205mm², which would explain a 192-bit bus for 56 CUs, and would explain Infinity Cache as a solution to the bandwidth problem of small dies below N21.

So a 230mm² die with 56 CUs, 64 ROPs, a 192-bit bus, some amount of cache (64MB at approx 1mm²/MB) and 12GB of RAM would compete very well with the 2080 Ti / 3070, and we know it would be sub-200W, because the 52 CU, 1.825 GHz Series X GPU with 16GB of RAM and a 320-bit bus uses at most 150W. This would be a perfect N22 die.
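
A rough check on the sub-200W claim, assuming power scales linearly with CU count and somewhere between quadratically and cubically with clock (dynamic power rises superlinearly once voltage has to climb with frequency); the 150W baseline is the Series X estimate above:

```python
# Scale the ~150 W Series X GPU (52 CUs @ 1.825 GHz) to 56 CUs @ 2.0 GHz
base_cus, base_clock, base_watts = 52, 1.825, 150
cus, clock = 56, 2.0
for k in (2, 3):  # exponent on clock: 2 is optimistic, 3 is pessimistic
    watts = base_watts * (cus / base_cus) * (clock / base_clock) ** k
    print(f"f^{k} scaling: ~{watts:.0f} W")  # ~194 W and ~213 W: sub-200W is plausible but tight
```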

Small and fast, so it can be sold at a margin closer to that of Zen 3. It would still be a $500-550 part at most, and it probably costs more to make than the 5950X or the 5900X, so it is still less profitable than the CPUs. Going any bigger for this kind of performance would mean less profit per unit and fewer wafers for Zen 3, so less margin overall.
 
I went off the boil totally after the crafty Nvidia launch, but I'm genuinely intrigued and a little excited to see what AMD present.
The recent Steam surveys made it obvious that Nvidia have the market pretty much locked up from mid-tier to bleeding-edge GPUs and, in a mindset that benefits me as a consumer, it would be straightforward for AMD to make a dent in that monopoly:

  1. Provide equal (or possibly even marginally lower) performance, tier for tier.
  2. Undercut by a margin that makes the AMD equivalent GPU attractive to buy.
  3. Provide reasonable stock levels at launch, and don't artificially drive prices up or alienate a portion of your target market by allowing the bots and scalpers to fill their boots.
 
So a 230mm² die with 56 CUs, 64 ROPs, a 192-bit bus, some amount of cache (64MB at approx 1mm²/MB) and 12GB of RAM would compete very well with the 2080 Ti / 3070, and we know it would be sub-200W, because the 52 CU, 1.825 GHz Series X GPU with 16GB of RAM and a 320-bit bus uses at most 150W. This would be a perfect N22 die.

There are limits to how much you can compare to a console: some I/O and/or hardware features that aren't needed on a console will have been removed but will be present on PC silicon, some features are done in software on the console which will be done in hardware on the PC, etc.

EDIT: Also, the console architecture is a mix and match of technologies focussed on its task, and not the same as RDNA 2 on desktop.
 
They've said they were releasing before the next-gen consoles. With Nvidia's shortages and Cyberpunk coming out at basically the same time, I think they'll release it earlier than that.
 
I can see this launch going exactly like the 3000 series launch:

- 6900 XT available for purchase.
- 5 minutes later it's sold out due to an army of bots and scalpers.
- Retailers get more stock but start charging £100 extra because "reasons".

This year is terrible for GPUs. Great for the greedy, though.
 
They are "releasing" before the consoles because the "reveal" is on October 28th. When they are available for purchase is up for debate.
 
The Xbox Series X is roughly equal to a 2080 Super: 3328 shaders at 1.825 GHz and about 150 W.

My guess is a 40 CU RDNA2 part at 2.1 GHz is around the same performance and around the same power consumption.

I doubt the 3070 is any faster than that. Nvidia said the 3080 would be 2x as fast as the 2080, and it isn't, not even close.

Making stuff up. I suggest you go back to your source. I've seen this lie inflated to a 2080 Ti.

A 5700 XT is 2560 shaders with a default real-world clock in the 1800 to 1900 MHz range.
