AMD Navi 23 ‘NVIDIA Killer’ GPU Rumored to Support Hardware Ray Tracing, Coming Next Year

Status
Not open for further replies.
Big Navi is likely to be 60 CU, and RDNA 2, for whatever that is worth...

The 5700 XT has 40 CUs (2560 shaders), so 60 CUs would be 3840, i.e. 5700 XT + 50%. That could fit with <20% greater performance than a 2080 Ti.

The die size would be <400mm^2, still not big. About half the size of a 2080 Ti's.
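The CU and shader arithmetic above can be sanity-checked in a couple of lines (RDNA's 64 shaders per CU is the only hard number here; the 60 CU figure and the 50% scaling are the rumour, not measurements):

```python
SHADERS_PER_CU = 64   # RDNA: 64 stream processors per compute unit

cu_5700xt = 40
cu_rumoured = 60      # the rumoured "Big Navi" CU count

print(cu_5700xt * SHADERS_PER_CU)    # 2560 shaders (5700 XT)
print(cu_rumoured * SHADERS_PER_CU)  # 3840 shaders
print(f"+{cu_rumoured / cu_5700xt - 1:.0%} CUs over the 5700 XT")  # +50%
```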


No way Xbox is getting that level of GPU.
People hyping up consoles for sake of it.

It'll be lucky to have a 5700 class GPU
 
Think of it this way...

In your mind, draw a wave graph; that's what that line in the red box is.

(not exacting numbers, just an example)

Over the course of 1000 ms (one second) the frame times vary from 2.5 ms to 20 ms; that's an average of around 12 ms, or roughly 83 FPS. But that is quite a large difference from frame to frame: one frame is on screen for 2.5 ms and the next for 20 ms. You see and feel that difference as micro stutter.

So this is where frame pacing comes in. You can't bring the 20 ms down, because the GPU can only render the frame as fast as it can, but you can slow the 2.5 ms frames down to 10 ms, reducing the difference in frame times so there's less extreme variation between frames (squash that wave graph). The result is more fluid rendering. On a graph this looks worse, because now you're averaging 15 ms over one second, which is about 67 FPS.
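A rough sketch of the pacing idea described above, using the example figures from the post (the 10 ms floor is just the illustrative target, not anything a real driver uses):

```python
from statistics import mean

# Frame pacing sketch: delay frames that finished too quickly, so the
# frame-to-frame swing shrinks at the cost of a worse-looking average.
def pace(frame_times_ms, floor_ms):
    """Hold any frame that rendered faster than floor_ms until floor_ms."""
    return [max(t, floor_ms) for t in frame_times_ms]

raw = [2.5, 20.0, 2.5, 20.0]       # wildly swinging frame times (ms)
paced = pace(raw, floor_ms=10.0)   # slow the 2.5 ms frames down to 10 ms

print(mean(raw), 1000 / mean(raw))      # 11.25 ms -> ~89 FPS on paper
print(mean(paced), 1000 / mean(paced))  # 15.0 ms -> ~67 FPS, but smoother
```

The paced sequence averages fewer FPS but has a far smaller gap between the fastest and slowest frame, which is exactly the trade described above.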

Years ago, AMD's early GCN GPUs had higher averages than Nvidia's GPUs, yet some reviewers picked up on the fact that Nvidia's GPUs were smoother despite this. That's because AMD didn't do any effective frame pacing: while AMD had high frame rates on paper, their frame times fluctuated wildly out of control, resulting in stutter.

Although the variance in frame time is bigger, you can always lock it to 60 fps on the stronger card, and from my experience that will always be better than an unlocked 40-50 fps. If money were not an issue, then the stronger card, to me, would always seem like the better choice. Of course, the 5700 XT is not bad at all considering the price difference, and should look even better if you compare it to the 2080(S), which is also way more expensive.

And there's the RT argument, which is kinda subjective as it depends on what each person values or finds meaningful in their experience. For instance, speaking of Control, I would rather Remedy had made a game that in 2019 (and 2020, after patches) supported almost every aspect ratio you throw at it, i.e. properly supporting 5760x1080, than RT. To me, playing with a bigger, wider field of view (plus the ability to switch the third-person camera from one shoulder to the other) is a much more immersive experience than the washed-out 1080p DLSS 60 fps, or native sub-60 fps, that I'm currently getting with the 2080. Same for Metro, though at least there the aspect ratio is not an issue. So in Metro, although I first played with RT on to see how it looks, I find it more fun with RT off and a proper 60 fps...
 
No way Xbox is getting that level of GPU.
People hyping up consoles for sake of it.

It'll be lucky to have a 5700 class GPU

It won't be lucky but a given to have at least a 5700-class GPU, considering that the step down from the 5700 is basically the Polaris die already in the current-gen consoles. That said, I also don't believe the consoles will have a huge monster of a GPU in them, considering there's a power budget to adhere to and noise is surely a top priority. If by chance the consoles only got something like 5600-series performance, I'm pretty sure the lifespan of the next-gen consoles would be rather short, given the small difference between top Polaris and an RX 5600. Time will tell, I suppose.
 
No way Xbox is getting that level of GPU.
People hyping up consoles for sake of it.

It'll be lucky to have a 5700 class GPU

Don't know, but it's a big die. Nothing official; people measuring images of it have put it at 407mm^2.

A 5700 XT is 251mm^2; add 50% to that and you're at 376.5mm^2. A Zen 2 chiplet is 74mm^2, but that's with 32MB of L3; the new Ryzen APUs like the 4800U/H only have 8MB of L3, which cuts their size down significantly.

Whatever it is, it's way bigger than a 5700 XT, about 62% bigger. https://www.reddit.com/r/Amd/comments/elbqhg/xbox_series_x_die_size_math_analysis/
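Running the numbers from the post (the 407mm^2 figure comes from people measuring photos in the linked Reddit thread, so treat it as approximate):

```python
navi10_mm2 = 251.0   # 5700 XT die size
xsx_mm2 = 407.0      # Xbox Series X SoC, measured from images (approx.)

print(navi10_mm2 * 1.5)                   # 376.5 -> "5700 XT + 50%"
print(f"{xsx_mm2 / navi10_mm2 - 1:.0%}")  # ~62% bigger than Navi 10
```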
 
I don't think it will be 60 CU either, but the rumours are 12 TF, in which case that's easily doable with either 52-56 CUs, which is very doable within the die size shown & coolable in the giant design of the XSX. In fact I'd say 52 CUs is a very safe call, which at 1800 mhz would yield 12 TF. But even if it's 48 CU @ 1700 mhz, that's still 10.5 TF. Will anyone really cry about above-the-best-5700XT performance in a console? Just think of how insane that amount of graphical power is and how great games already look with a target of a 1.2 TF (which is also in context much lower than what a navi 1.2 TF card would be).
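The TF figures above follow from the standard peak-FP32 formula (shaders x 2 ops per clock x clock); a quick check, assuming RDNA's 64 shaders per CU:

```python
def tflops(cus, clock_mhz, shaders_per_cu=64):
    """Peak FP32 TFLOPS: shaders * 2 ops/clock (FMA) * clock."""
    return cus * shaders_per_cu * 2 * clock_mhz * 1e6 / 1e12

print(round(tflops(52, 1800), 2))  # ~11.98 -> the "12 TF" case
print(round(tflops(48, 1700), 2))  # ~10.44 -> the "10.5 TF" case
```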

I for one get goosebumps just thinking about the next-gen open world games after they've been so hamstrung this gen but still ended up quite great.
 
They are not clocking a console at 1.8 GHz; that's 5700 XT clock speed. It will be clocked lower than that, around 1650 MHz at most.

Why not? Look at the 5700 & 5700 XT models: 1800 MHz is not hard to do at all, and undoubtedly the console chips will have better perf/W & density than Navi 10 (so easier to cool). And keep in mind, unlike on desktop these can run much hotter, because the temp isn't going to be shown to the user, so there's more headroom as well (e.g. >80°C constantly).

Reference models currently do, at just 2000 RPM:
5700 XT @ 1800 MHz: 78°C
5700 @ 1630 MHz: 66°C

 
Why not? Look at the 5700 & 5700 XT models: 1800 MHz is not hard to do at all, and undoubtedly the console chips will have better perf/W & density than Navi 10 (so easier to cool). And keep in mind, unlike on desktop these can run much hotter, because the temp isn't going to be shown to the user, so there's more headroom as well (e.g. >80°C constantly).

Reference models currently do, at just 2000 RPM:
5700 XT @ 1800 MHz: 78°C
5700 @ 1630 MHz: 66°C


Small form factor, limited cooling, power consumption.
 
I don't think it will be 60 CU either, but the rumours are 12 TF, in which case that's easily doable with either 52-56 CUs, which is very doable within the die size shown & coolable in the giant design of the XSX.

12 TF? What are these rumours based on? If it's just Phil Spencer saying it's double the GPU performance of the X1, then those rumours hold zero truth, as that could mean absolutely anything when you consider all the variables.

Frankly, it's all just marketing talk/nonsense. Phil Spencer knew what he was doing here. It creates discussion of their product and even when it's released people will still believe it's the most powerful box, even if it isn't. It's marketing 101.

I wouldn't believe anything until the consoles are out in the market, tested, dissected and games are fully analysed.

The X1 was apparently the same GPU performance as a 1070, but when you contrasted the X1/PC versions of games it was a **** show for the console.

It's all hype and nothing more at this point.
 
Not sure I buy these test results as more than an anomaly/misreading but this is interesting:
Hi, I'm the developer of the benchmark (OpenVR Benchmark), and since I was already asked on my discord server to comment on the AMD benchmark result everyone seems to be talking about, I guess I will also comment here to let people know what I think.

To be clear, I do not have access to any more information than you, I also just see the result in the leaderboard, beating all the 2080 Ti.

The CPU does seem to be a Ryzen 7 4800H, and people generally seem to agree about that. Regarding the GPU, it is definitely some kind of unreleased high-end GPU, as being 17% above the best-performing 2080 Ti is quite a big step. It hasn't really been mentioned here, but that best-performing 2080 Ti in the leaderboard is already a really well-performing 2080 Ti. As usual, many GPUs perform somewhere in the middle, with a few being faster and a few being slower. The majority of 2080 Tis actually score around ~80 fps in this benchmark, so the unnamed GPU we're talking about here is around 29% faster than a regular 2080 Ti, while being 17% faster than the highest-overclocked 2080 Ti.
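The two percentages quoted above hang together arithmetically; a quick check, where only the ~80 fps typical score is taken from the leaderboard and the rest is derived:

```python
typical_2080ti = 80.0                 # what most 2080 Tis score here (fps)
unnamed_gpu = typical_2080ti * 1.29   # "~29% faster than a regular 2080 Ti"
best_2080ti = unnamed_gpu / 1.17      # it's "17% faster than the best 2080 Ti"

print(round(unnamed_gpu, 1))  # 103.2 fps for the mystery GPU
print(round(best_2080ti, 1))  # ~88.2 fps for the top overclocked 2080 Ti
```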

I guess it might be just about possible to get a "30% higher than 2080 Ti" result by having a 2080 Ti with a world-record liquid nitrogen OC, but I think most likely people using engineering sample CPUs, especially a Ryzen 7 4800H, are not running liquid nitrogen OCs.

So this benchmark result was very likely produced with some unannounced GPU. And since I would consider it very unlikely that someone has access to both an AMD engineering-sample CPU and an unannounced Nvidia GPU, and is allowed to use both in the same system, I think it is most likely an unannounced AMD GPU. So I do agree with most people here that this most likely seems to be a new AMD GPU that is quite a bit faster than a 2080 Ti, but to avoid disappointment if it ends up being something different, no one should feel 100% certain about this.
 
AMD have smashed Intel on the cpu side of things, I don't see any reason why they can't upset Nvidia on the gpu front. All I know is it would be nice to have some real competition at the high end and hopefully lower prices as a result. Particularly with Intel entering the gpu space.
 
So you're fine with a card that's on average just 35% faster being 300-350% more expensive?
That's what the whole argument is about.

In addition, AMD actually only has a small gap to close to beat the Nvidia flagship, not some double-the-performance deficit as some believe.


It's their money. They can spend it on what they want.
 
Seems you forgot your own posts mate.


https://forums.overclockers.co.uk/posts/33024660/


(from the same discussion)

Do you want me to continue?
I didn't forget anything, and I am right. People are free to spend what they like on their hobby. Calling them a fanboy/shill is the pathetic playground talk I have come to expect from you, and you used a quote of mine that backs up what I said. Become a man and stop with the childish words, bud.

Edit:

Loving this news, and hopefully it stands true. I would love to see an AMD card faster than the 2080 Ti, and ~30% is no small gain.
 
I didn't forget anything, and I am right. People are free to spend what they like on their hobby. Calling them a fanboy/shill is the pathetic playground talk I have come to expect from you, and you used a quote of mine that backs up what I said. Become a man and stop with the childish words, bud.

Edit:

Loving this news, and hopefully it stands true. I would love to see an AMD card faster than the 2080 Ti, and ~30% is no small gain.
Same, would be fantastic. Though I am not holding my breath :p
 
AMD have smashed Intel on the cpu side of things, I don't see any reason why they can't upset Nvidia on the gpu front. All I know is it would be nice to have some real competition at the high end and hopefully lower prices as a result. Particularly with Intel entering the gpu space.

In what way?
 
AMD have smashed Intel on the cpu side of things, I don't see any reason why they can't upset Nvidia on the gpu front. All I know is it would be nice to have some real competition at the high end and hopefully lower prices as a result. Particularly with Intel entering the gpu space.

They aren't even close to upsetting Nvidia and are only successful on the CPU side of things at the expense of their GPUs.
 