AMD Navi 23 ‘NVIDIA Killer’ GPU Rumored to Support Hardware Ray Tracing, Coming Next Year

I think this guy is looking at AIB partner cards only and assuming those will be the best full-fat 80 CU SKUs; this is what Igor did. Turns out AMD are keeping the full-fat 80 CU SKU for themselves.

I can imagine a cut-down 72 CU 6900XT at 2.1 GHz AIB card being a match for the 3080, with AMD's exclusive 80 CU at 2.2 GHz (6900XTX?) pushing it a little past the 3080.

10% more CUs than a 3080 equivalent is probably very close to a 3090...
 
I think this guy is looking at AIB partner cards only and assuming those will be the best full-fat 80 CU SKUs; this is what Igor did. Turns out AMD are keeping the full-fat 80 CU SKU for themselves.

I can imagine a cut-down 72 CU 6900XT at 2.1 GHz AIB card being a match for the 3080, with AMD's exclusive 80 CU at 2.2 GHz pushing it a little past the 3080.

Going by Navi 10: the 5700XT is ~20% faster in games than the 5700, and the 5700 has 10% fewer CUs and 200-300 MHz lower clocks. 72 CUs is 10% fewer than 80 CUs, and the full AMD 80 CU part will have equal or higher clocks than the 72 CU version. So I'd put the 6900XT maybe 15% faster than the 6800XT.
As for your clocks, there are AIB 6800XTs boosting to 2.55 GHz already on early drivers; after refinement they'll be hitting 2.6 GHz. The driver Igor got hold of showed 2.8 GHz was the driver max, so there's plenty of headroom still for higher clocks.
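As a rough sanity check on that estimate, here's a minimal back-of-envelope sketch. The CU counts, clocks, and the ~70% CU-scaling factor are all assumptions pulled from the speculation above, not confirmed specs:

```python
# Back-of-envelope uplift estimate for the rumoured 80 CU part over the
# rumoured 72 CU part. Assumes performance scales linearly with clock but
# only partially with CU count, as the 5700 vs 5700XT comparison suggests.

cu_small, cu_big = 72, 80        # rumoured CU counts (speculation)
clk_small, clk_big = 2.1, 2.2    # rumoured game clocks in GHz (speculation)
cu_scaling = 0.7                 # assumed: ~70% of extra CUs turn into real FPS

cu_gain = (cu_big / cu_small - 1) * cu_scaling
clk_gain = clk_big / clk_small - 1
total = (1 + cu_gain) * (1 + clk_gain) - 1
print(f"Estimated uplift: {total:.1%}")  # ~12.9% with these assumptions
```

That lands in the same ballpark as the 15% guess above; nudge the clock gap or the CU scaling a little and you get there.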
 
I'd like to know how anyone knows the performance of the 6900 when only AMD have this card and no AIBs? lol... A lot of leaks are just speculation and rubbish. As much as I detest his voice, Paul from RGT seems to have a better hit-than-miss ratio on info.
Quite correct, and only AMD know what is what at present. I do love a 'leak' though; it gives good talking points, and our own speculation is always good reading regardless of how close or far off it lands.

Hopefully AMD have decent drivers and the card is on par with the 3080 at least. Price is important of course, but I feel the days of a £400 GPU that is the fastest out are long gone.
 
With a 256-bit bus? Probably not.

 
I think this guy is looking at AIB partner cards only and assuming those will be the best full-fat 80 CU SKUs; this is what Igor did. Turns out AMD are keeping the full-fat 80 CU SKU for themselves.

I can imagine a cut-down 72 CU 6900XT at 2.1 GHz AIB card being a match for the 3080, with AMD's exclusive 80 CU at 2.2 GHz pushing it a little past the 3080.

I feel there's no 72 CU part (pure speculation). I looked at TSMC defect stats, and a chip that size should be yielding ~70% error-free after factoring in edge defects. AMD purposely cutting dies like that doesn't make sense IMO; maybe the 6900 is just a golden sample.
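For anyone curious where a number like that comes from, here's a minimal Poisson yield sketch. The die area and defect density are illustrative assumptions only; TSMC's and AMD's real figures aren't public:

```python
import math

# Minimal Poisson yield model: the fraction of defect-free dies is
# exp(-defect_density * die_area). Both inputs below are assumptions
# for illustration, not published figures.

die_area_cm2 = 5.0    # assumed ~500 mm^2 die
d0_per_cm2 = 0.07     # assumed N7-class defect density, defects per cm^2

yield_fraction = math.exp(-d0_per_cm2 * die_area_cm2)
print(f"Estimated fully working dies: {yield_fraction:.0%}")  # ~70%
```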

Just a theory; we will know soon.

Also, the Chinese opinion is at odds with the clock speed leaks and power targets we have been hearing (I have personally discounted it :))
 
Going by Navi 10: the 5700XT is ~20% faster in games than the 5700, and the 5700 has 10% fewer CUs and 200-300 MHz lower clocks. 72 CUs is 10% fewer than 80 CUs, and the full AMD 80 CU part will have equal or higher clocks than the 72 CU version. So I'd put the 6900XT maybe 15% faster than the 6800XT.
As for your clocks, there are AIB 6800XTs boosting to 2.55 GHz already on early drivers; after refinement they'll be hitting 2.6 GHz. The driver Igor got hold of showed 2.8 GHz was the driver max, so there's plenty of headroom still for higher clocks.

I doubt they will hold those high clocks. Navi 10 varies as much as 200 MHz from game to game, and even as much as 100 MHz within the same game depending on what it's doing. I'm going from averages because I know how RDNA's boost algorithm works: "2.4 GHz" is anything from 2.1 GHz to 2.4 GHz, and on average closer to 2.1 GHz than 2.4 GHz. :)
 
I doubt they will hold those high clocks. Navi 10 varies as much as 200 MHz from game to game, and even as much as 100 MHz within the same game depending on what it's doing. I'm going from averages because I know how RDNA's boost algorithm works: "2.4 GHz" is anything from 2.1 GHz to 2.4 GHz, and on average closer to 2.1 GHz than 2.4 GHz. :)

From Patrick Schur on Twitter:
Navi 21 XT (ASUS Strix) - Engineering Board [3DMark11] System 1:
  • Avg: 2291 MHz
  • Median: 2373 MHz
  • Max: 2556 MHz
Time spent at each clock:
  • ≥ 2500 MHz (10.28 %)
  • 2400 ≤ x < 2500 MHz (24.46 %)
  • 2300 ≤ x < 2400 MHz (50.49 %)
  • 2200 ≤ x < 2300 MHz (3.64 %)
  • 2100 ≤ x < 2200 MHz (1.38 %)
  • < 2100 MHz (9.75 %)
So it spent ~85% of its time above 2300 MHz.
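Quick arithmetic check on those buckets (the midpoints I assign to the two open-ended bins are my own assumptions; everything else is straight from the leaked numbers):

```python
# Sanity-check Patrick Schur's clock-residency buckets: share of time at
# >= 2300 MHz, plus a bucket-weighted average clock. Midpoints for the
# open-ended bins are assumptions; the sub-2100 one is set to 1400 MHz
# because that roughly reproduces the reported 2291 MHz average,
# suggesting that time is mostly low transition/idle clocks.

buckets = [  # (assumed midpoint in MHz, share of time)
    (2528, 0.1028),  # >= 2500; midpoint between 2500 and the 2556 max
    (2450, 0.2446),
    (2350, 0.5049),
    (2250, 0.0364),
    (2150, 0.0138),
    (1400, 0.0975),  # < 2100; assumed low transition/idle clocks
]

above_2300 = sum(share for mid, share in buckets if mid >= 2300)
weighted_avg = sum(mid * share for mid, share in buckets)
print(f"Time at >= 2300 MHz: {above_2300:.1%}")            # 85.2%
print(f"Bucket-weighted average: {weighted_avg:.0f} MHz")  # ~2294 MHz
```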
 
From Patrick Schur on Twitter:
Navi 21 XT (ASUS Strix) - Engineering Board [3DMark11] System 1:
  • Avg: 2291 MHz
  • Median: 2373 MHz
  • Max: 2556 MHz
Time spent at each clock:
  • ≥ 2500 MHz (10.28 %)
  • 2400 ≤ x < 2500 MHz (24.46 %)
  • 2300 ≤ x < 2400 MHz (50.49 %)
  • 2200 ≤ x < 2300 MHz (3.64 %)
  • 2100 ≤ x < 2200 MHz (1.38 %)
  • < 2100 MHz (9.75 %)
So it spent ~85% of its time above 2300 MHz.

3DMark11 :eek::rolleyes::D

Run that puppy on a benchmark designed for at least this decade and the results will be very different.

In other news: Navi boosts up to 2600 MHz running 3DMark99.
 
I feel there's no 72 CU part (pure speculation). I looked at TSMC defect stats, and a chip that size should be yielding ~70% error-free after factoring in edge defects. AMD purposely cutting dies like that doesn't make sense IMO; maybe the 6900 is just a golden sample.

Just a theory; we will know soon.

Also, the Chinese opinion is at odds with the clock speed leaks and power targets we have been hearing (I have personally discounted it :))

The rumour was that the 72 CU parts would be the best AIBs would get, while AMD keep the 80 CU chips for themselves, BUT those would be sold in low quantities due to yields. You maximise yields by binning like this: both the Series X and PS5 have 4 CUs disabled to maximise yields. But AMD will still be getting some good 80 CU dies out of their yields, and it doesn't make sense cutting those down if you can sell them in a higher SKU, even if it's in low quantities.

I don't get that. If the majority of dies you're producing are full fat, why would you skim them? You would only do that if you have a fixed hardware spec, like the consoles: the Series X, for example, is a 56 CU die set to 52 CUs, because to get the maximum number of dies out of your yields you need the ones that didn't make it to a full 56 CUs.

If what you're selling is GPUs with different CU counts across the range, you would just use them all anyway.
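To put rough numbers on that binning argument, here's a minimal sketch. The per-CU survival probability is an illustrative assumption, not a real yield figure:

```python
from math import comb

# Illustrative binning model: if each of the 80 CUs independently survives
# with probability p, what fraction of dies can ship as a full 80 CU part,
# and what fraction needs salvaging as a 72 CU (or lower) SKU?

p = 0.995   # assumed per-CU survival probability (illustration only)
n = 80      # CUs on the full die

def at_least(k: int) -> float:
    """Probability that at least k of the n CUs are defect-free."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

full = at_least(80)            # perfect dies -> 80 CU flagship
salvage = at_least(72) - full  # 72-79 good CUs -> cut to a 72 CU SKU
print(f"Full 80 CU dies: {full:.0%}, salvage-only dies: {salvage:.0%}")
# ~67% full vs ~33% salvage with these assumptions, which is the point:
# plenty of full dies exist, so a 72 CU bin alone cannot absorb them all.
```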
 
From Patrick Schur on Twitter:
Navi 21 XT (ASUS Strix) - Engineering Board [3DMark11] System 1:
  • Avg: 2291 MHz
  • Median: 2373 MHz
  • Max: 2556 MHz
Time spent at each clock:
  • ≥ 2500 MHz (10.28 %)
  • 2400 ≤ x < 2500 MHz (24.46 %)
  • 2300 ≤ x < 2400 MHz (50.49 %)
  • 2200 ≤ x < 2300 MHz (3.64 %)
  • 2100 ≤ x < 2200 MHz (1.38 %)
  • < 2100 MHz (9.75 %)
So it spent ~85% of its time above 2300 MHz.

The question is: what is the power consumption at an average clock of 2.3 GHz vs 2.1 GHz? If one is 250 watts while the other is 290 watts, does that make the latter too high?
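One crude way to frame it, assuming dynamic power scales roughly with f x V^2 and that the last few hundred MHz need extra voltage (both assumptions; real V/f curves aren't public):

```python
# Crude dynamic-power scaling: P ~ f * V^2. Voltage figures below are
# assumptions for illustration, not measured values.

f1, v1 = 2.1, 1.00   # GHz, volts (assumed baseline operating point)
f2, v2 = 2.3, 1.08   # GHz, volts (assumed: ~8% more voltage for the clock bump)

p1 = 250.0           # watts at the baseline, per the post above
p2 = p1 * (f2 / f1) * (v2 / v1) ** 2
print(f"Estimated power at {f2} GHz: {p2:.0f} W")  # ~319 W
```

On that model, ~10% more clock costs closer to 28% more power once the voltage rises too, so a 250 W vs 290 W gap between those operating points looks entirely plausible.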
 
I had all versions of the PS4 from launch and both versions of the Pro.
I'm not talking about the 4-5 heavily polished exclusives they had over 8 years.

Go play AAA third-party multiplatform games like Witcher, Red Dead, Far Cry, Assassin's, Battlefield, etc. on the PS4 Pro and compare them to a decent PC experience.

The PS4 Pro at 30 fps and below feels so sluggish and slow, with detail missing, that it's unenjoyable.

You simply can't go back from high frame rate, high detail to even 60 fps, let alone 25-30 fps on the PS4 Pro or Xbox.

Dude, I don't disagree that a PC offers a smoother and higher-fidelity experience. My entire point, which people seem to overlook, is that a PC costs considerably more. People will probably chip in and say "oh, I can get a GTX 1080 for £180 and a Ryzen 3600 for blah blah blah" to imply PC gaming is cheap, and to detract from the simple fact that a console isn't trying to offer the same experience as £1,500 worth of hardware. It isn't.

Does a PC beat a console? Yes.

Is the hardware cost significantly different? Yes.

Is a McLaren 570S faster than my Ford Mondeo? Yes.

It's like I just want to say "well, derrrr" every time some twonk tries to tell us how much better a PC runs a game.
 
I'd like to know how anyone knows the performance of the 6900 when only AMD have this card and no AIBs? lol... A lot of leaks are just speculation and rubbish. As much as I detest his voice, Paul from RGT seems to have a better hit-than-miss ratio on info.

They're usually found when they upload benchmark scores; people tend to notice the code names of newly entered bench results and build speculation on that. Information from the inside probably fuels it too.
 