AMD Navi 23 ‘NVIDIA Killer’ GPU Rumored to Support Hardware Ray Tracing, Coming Next Year

They have maintained though that RDNA2 will be available on PC first, so we will see how they do it. I mean for all we know it's going to be launched the same day!
Is it semantics? Release, launch, announce, available to buy?

I think there would be some leaks showing stock in transit unless it's under 24/7 armed guard :D
 
Didn't RGT state it was also 60% after feedback? :)
I don't know, I can't bring myself to watch that pasty-white guy slur and drool his way through endless repetitions of the same information day after day. If AMD say they're targeting 50% ppw uplift, then I'll treat that as absolute best case, with expectations being lower. If AMD have pulled a Zen and achieved over their target then sweetness and rainbows.
 
I would too if I had those cards, tbf. I'm on a Vega 56 though, so this time I'm not going to wait and upgrade a year after the cards launch, as by then you're essentially in between gens or refreshes and seem to miss out on the performance in some way, shape or form.

Yeah, for you this purchase makes sense, but I have an inkling that by this time next year both AMD and Nvidia will have released refreshes of their Ampere and RDNA2 lineups.

I don't know, I can't bring myself to watch that pasty-white guy slur and drool his way through endless repetitions of the same information day after day. If AMD say they're targeting 50% ppw uplift, then I'll treat that as absolute best case, with expectations being lower. If AMD have pulled a Zen and achieved over their target then sweetness and rainbows.

You think RGT is bad now? I remember 2+ years ago having to concentrate to understand him as he spoke at superhuman speeds :p
 
Yeap, :p a new GPU before Thanksgiving is a possibility that would make sense.

AMD is focusing its priorities on the Zen3 CPUs, and it's right to do so; CPUs will be in great demand.
Its 16% share of the PC GPU market is less important to them, and I expect that to still be the case when cards do start rolling out.
Yes, I think you're right. You can fit a lot of retail CPU boxes in the space of one retail GPU box.
 
There are limits to how much you can compare to console - some IO and/or hardware features that aren't needed on console will have been removed but will be present on PC silicon, some features are done in software on the console which will be done in hardware on the PC, etc.

EDIT: Also the console architecture is a mix and match of technologies focussed on its task and not the same as rdna 2 on desktop.

That comes out in the wash when you are using transistor counts and density to make estimates.

My point is the estimated die sizes make zero sense to me for their alleged performance targets. N21 @ 500mm^2 makes no sense with 80CUs + 256 bit + cache, because what would be the point? If you need to make it 500mm^2 anyway you might as well give it a 512 bit bus rather than more cache. If your die size is closer to 330mm^2 without cache though, and you want 16GB on the card, then 512 bit and 384 bit are out, so your only options are HBM or 256 bit + bandwidth mitigation. If bandwidth mitigation in the form of Infinity Cache works well, does not balloon the die up above 400mm^2 (Hawaii was 438mm^2 and had a 512 bit bus) and gives you a foundation for MCM GPUs, then you take that route. But it means the leaked die sizes are totally wrong, which is probable because Series X was leaked to be 405mm^2 and it ended up being 360mm^2.
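
For anyone wondering why the capacity maths rules out 384 bit (and why 512 bit only makes sense if you're willing to eat the PHY area anyway): GDDR6 chips have a 32-bit interface and come in 1GB or 2GB densities, so the bus width pins the chip count and the possible capacities. A rough sketch, with those standard densities assumed rather than anything leaked:

```python
# Rough GDDR6 capacity check: each chip has a 32-bit interface,
# and (ignoring clamshell mode) capacity = chip count * chip density.
# The 1 GB / 2 GB densities assumed here are the standard GDDR6 parts.
for bus_width in (256, 384, 512):
    chips = bus_width // 32
    capacities = sorted({chips * density for density in (1, 2)})  # GB
    print(f"{bus_width}-bit bus -> {chips} chips -> {capacities} GB")

# 256-bit bus -> 8 chips -> [8, 16] GB
# 384-bit bus -> 12 chips -> [12, 24] GB
# 512-bit bus -> 16 chips -> [16, 32] GB
# So a 16 GB card needs 256 bit (2 GB chips) or 512 bit (1 GB chips);
# 384 bit only gives 12 GB or 24 GB, and 512 bit brings back the PHY
# area you were trying to avoid.
```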

The other problem with the leaks is that the jump from an 80CU part to a 40CU part is massive. We know it will clock well due to PS5 clocks so perhaps 40CUs + 192 bit bus + cache + huge clocks can hang with the 3070/2080Ti and save some die space but it seems really unlikely to be honest.

To me the following makes the most sense but this is entirely made up by me.

N21 80CU, 256bit, 128MB Infinity Cache and around 380mm^2. This would be 24B transistors and 128MB cache would be around 7B transistors leaving 17B for the rest of the die. Seems doable. 3080/3090 performance.
N22 56CU, 192bit, 96MB Infinity Cache and around 284mm^2. 3070/2080Ti performance.
N23 40CU, 128bit, 64MB Infinity Cache and around 190mm^2. 5700XT/3060 performance.

Those cache numbers are totally made up; if less works then reduce them and save the die area.
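
For what it's worth, the "128MB cache would be around 7B transistors" figure roughly checks out if you assume ~6 transistors per SRAM bit plus some overhead. Those assumptions are mine, not from any leak; a quick sanity check:

```python
# Back-of-the-envelope check on the "128MB cache ~= 7B transistors" figure.
# Assumes ~6 transistors per SRAM bit plus ~10% overhead for tags,
# redundancy and control logic (my guesses, not leaked numbers).
cache_mib = 128
bits = cache_mib * 1024 * 1024 * 8
cache_transistors = bits * 6 * 1.10           # ~7.1e9

total_transistors = 24e9                      # the post's guess for the whole die
logic_transistors = total_transistors - cache_transistors

print(f"cache: {cache_transistors / 1e9:.1f}B transistors")
print(f"rest of the die: {logic_transistors / 1e9:.1f}B transistors")
# cache: 7.1B transistors
# rest of the die: 16.9B transistors
```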

Going bigger than those die sizes for those performance targets just does not seem workable to me when AMD are trying to get into the OEM space and need to show they can keep up with demand. With Zen3, Cezanne and RDNA2 as well as the consoles using up capacity, making RDNA2 lean and mean is the way to go.

If the RDNA2 die sizes do end up larger, then AMD is going to make the bare minimum it can get away with, because Zen3 and Cezanne will be so much more profitable. A 500mm^2 RDNA2 GPU that is a bit faster than the 3080 is going to have a ceiling price of $700. If AMD have to use 500mm^2 of 7nm capacity, would they rather make an N21 die or 7 Zen3 dies? 7 Zen3 dies is almost a full 64c EPYC, which will sell for >$4,000. When you factor in the board costs for the GPU as well, it probably works out that manufacturing one 64c EPYC part costs about the same as one N21 GPU. Would you rather sell the $4,000+ EPYC or the $700 RDNA2 part for the same cost of goods sold?
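
Putting very rough numbers on that opportunity cost. The $700 / $4,000 prices and the 500mm^2 die are the figures above; the ~74mm^2 chiplet size and the 8 chiplets in a 64-core EPYC are my own assumptions (the IO die is on an older node, so it's ignored), so treat this as illustrative only:

```python
# Crude $ per mm^2 of 7nm comparison between a big RDNA2 die and EPYC chiplets.
n21_area_mm2 = 500        # hypothetical big-RDNA2 die from the leaks
n21_price = 700           # assumed ceiling for a card a bit faster than a 3080

ccd_area_mm2 = 74         # assumed per-chiplet area; actual Zen3 CCD may differ a bit
ccds_per_epyc = 8         # 64 cores = 8 chiplets of 8 cores
epyc_price = 4000         # the post's lower bound for a 64c part

print(f"N21:  {n21_price / n21_area_mm2:.2f} $/mm^2 of 7nm")
print(f"EPYC: {epyc_price / (ccds_per_epyc * ccd_area_mm2):.2f} $/mm^2 of 7nm")
# N21:  1.40 $/mm^2 of 7nm
# EPYC: 6.76 $/mm^2 of 7nm
```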

Making stuff up. I suggest you go back to your source. I've seen this lie inflated to a 2080 Ti.

A 5700 XT has 2560 shaders with a default real-world clock in the 1800 to 1900 MHz range.


That was a 4 week port to show off Ray Tracing. It would not have had any optimisation work done by the devs, and I doubt the driver teams on the MS/AMD side would have done any work on it at that point either. This is a lower bound of performance, and the 2080S is not even 10% faster than the 2080 anyway. 2080Ti is a step too far, but in titles that are optimised equally I expect Series X to be around 2080S performance.
 
The PC version was running on its Ultra settings and the XSX demo (which was thrown together in 2 weeks) ran the same settings with additional effects and outperformed the PC. I suggest you go watch the event, if it's still available, before spouting off.

The devs ran the benchmark on PC and on Series X. This was 4k Ultra and was an apples to apples comparison (well as apples to apples as console vs pc can get). According to DF who reported it the Series X was similar to the 2080. Considering the fact it was a quick port and the devs had just thrown in some RT features to show off the technology it seems like it would be fair to call this a lower bound result. The devs would not have done any optimisation for RDNA2 and MS/AMD probably did not have the final driver the console would use.
 
Yeah, for you this purchase makes sense, but I have an inkling that by this time next year both AMD and Nvidia will have released refreshes of their Ampere and RDNA2 lineups.

Very true, and the older me would have done this. But my Vega purchase for £299 seemed a bargain, and it was followed by the VII and the 5700XT, which although not a million miles better, are better for 4k/1440p or should handle it better. I didn't know at the time that I was going to have a 4k-capable display, but what I took from it was that waiting for settled prices and refreshes etc. falls into the classic category of 'but if you wait then you can get...'. I just want to be able to play at 4k reasonably and don't mind dropping to 1440p in situations where I need to.
 
The devs ran the benchmark on PC and on Series X. This was 4k Ultra and was an apples to apples comparison (well as apples to apples as console vs pc can get). According to DF who reported it the Series X was similar to the 2080. Considering the fact it was a quick port and the devs had just thrown in some RT features to show off the technology it seems like it would be fair to call this a lower bound result. The devs would not have done any optimisation for RDNA2 and MS/AMD probably did not have the final driver the console would use.
Tell Dontrocktheboat, he's the one dismissing the demo :P
 
Well that's just completely wrong. Did you just make that up?

AMD's launch slides showed the 5700XT beating the 2070. And it did. But then the Supers came out a week later, so any 2070s you've seen beating the 5700XT would've been the Super, and that card didn't exist when AMD unveiled Navi 10.

No TechPowerUp review shows the regular old 2070 getting the wins.
 
Odd that, the individual tests in some games show the 2070 to be on top, and power consumption is a lot higher than the 2070's too.
But as you say, if it matches a 2070 shader for shader, how will it match a 3080 when RDNA2 seems to be 51xx shaders vs 8000+ for the 3080?
 
Odd that, the individual tests in some games show the 2070 to be on top, and power consumption is a lot higher than the 2070's too.
But as you say, if it matches a 2070 shader for shader, how will it match a 3080 when RDNA2 seems to be 51xx shaders vs 8000+ for the 3080?

Ampere does not match Turing shader for shader. Ampere has an FPS/TFLOP regression.
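
To put rough numbers on that: shader counts below are the public specs, and the clocks are typical boost/game clocks rather than anything from this thread, so treat the absolute figures as approximate.

```python
# Paper FP32 throughput for the cards being compared above.
def tflops(shaders: int, clock_ghz: float) -> float:
    return 2 * shaders * clock_ghz / 1000  # 2 FLOPs per shader per clock (FMA)

cards = {
    "RTX 2070":   (2304, 1.62),
    "RX 5700 XT": (2560, 1.90),
    "RTX 3080":   (8704, 1.71),
}
for name, (shaders, clock) in cards.items():
    print(f"{name:11s} {tflops(shaders, clock):5.1f} TFLOPS")

# RTX 2070      7.5 TFLOPS
# RX 5700 XT    9.7 TFLOPS
# RTX 3080     29.8 TFLOPS
# ~4x the paper TFLOPS of a 2070 does not translate into ~4x the FPS, because
# half of Ampere's FP32 units share issue with INT32 work, hence the regression.
```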
 