
AMD Navi 23 ‘NVIDIA Killer’ GPU Rumored to Support Hardware Ray Tracing, Coming Next Year

I suspect that the Nvidia killer will be a low to midrange card and they'll try to take Nvidia's market share from the bottom up. Small efficient die and undercut Nvidia's pricing. The 80 CU part will command as high a price as they can get away with. I may be wrong of course ;)
You might be right about that; indeed it's more plausible, but largely a waste of time, because let's say you have a top-end AMD and Nvidia part slugging it out with a small price difference: people will just buy the Nvidia part. AMD are on crack if they think that will have any disruption in the market, just like the R9 Nano and Radeon VII. Not in themselves bad GPUs, just simply irrelevant because of their price/performance/release date.
 
The bigger the die, the worse the yields though. The issue is you only have to look at the GA100 - it is severely cut down, and has increased TDP by a significant amount. That indicates to me that around 800mm2 is starting to push things a lot, even with TSMC 7nm having been out since 2018.

But that doesn't change what @TheRealDeal and I have said. Yield problems, wafers etc. are all part and parcel of the GPU industry. Large dies cost money. But AMD won't be competing with the 3080 Ti with a small die.
 
If the architecture is good enough then yes, it is entirely possible.

It would have to be a miraculous architecture to make up that kind of performance difference without a node shrink and still maintain a die size of under 300mm2.

One look at the history of GPUs should show you how extremely unlikely that is to happen.
 
That was the supposed strategy they were trying with Polaris.

I don't think anyone is expecting RDNA2 to be like Polaris. I keep saying it, but the console R&D is being underestimated. AMD have been working with both Microsoft and Sony for years now; it will give them an advantage.
 
Yeah, I agree with all that you have said. Nvidia will most likely push the limits though to make sure they are still the fastest. I don't think AMD will want to go much higher than 500mm2, whereas Nvidia may push 600mm2 for gaming parts.

What I do think is that, with RDNA 2, AMD will be way closer to Nvidia than they have been in many years. Just a shame the pound-to-dollar rate is still rubbish, so prices will still be high.

I think that is where it will be, personally. Maybe Nvidia could try a larger GPU, but I wonder if it would be a limited release then.


But that doesn't change what @TheRealDeal and I have said. Yield problems, wafers etc. are all part and parcel of the GPU industry. Large dies cost money. But AMD won't be competing with the 3080 Ti with a small die.

505mm2, which is what the "Big Navi" rumours keep saying, isn't a "small die", and it would make it the largest AMD GPU ever made (Vega 10, according to TPU, is 495mm2). This places it between GP102 and GK110/GM200 in die area.

I think you need to ignore Turing, as it was a large die made on an oldish process, with further changed technical specifications, which allowed for large chips. If you look at the last 10 years of Nvidia "large GPUs" on a new node, they have been between 470mm2 and 600mm2. The problem with larger dies is that you need to start disabling large sections of the GPU to keep yields and power consumption in check.

For example, the GA100 is just over 800mm2 in area. It has 8192 shaders, but the A100 is launching with under 7000 shaders, and a TDP of 400W, whereas its predecessor was 300W:
https://videocardz.com/press-release/nvidia-announces-ampere-ga100-gpu

The question is not whether the top gaming Ampere GPU is larger, but how much larger it will be. If it's 650mm2, and has similar yields and defect rates to the 500mm2 AMD GPU, then it might be measurably faster. But what happens if that 650mm2 GPU has far greater defect rates and Nvidia needs to cut it down more? I'll give you an example of this: Fermi. The GF100 GPU had to be cut down and it wasn't clocked as high as it should have been, as it was difficult to manufacture. ATI Cypress was 63% of the die area of the GF100, but the GTX 480 was only between 5% and 10% faster overall despite more memory bandwidth and VRAM:
https://www.techpowerup.com/review/nvidia-geforce-gtx-480-fermi/32.html

Now look at the fixed GF110, once yields etc. got better. In the end, most of these things will be determined by uarch efficiency. We found with AMD that they started to lose scaling for various reasons, so we need to see how well Ampere and RDNA2 scale as well. Then we need to see how well features are supported by games. Nvidia is traditionally stronger at this, but then RDNA2 is in consoles, so that should help AMD, unlike Vega, which was neither here nor there.
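
To put rough numbers on the yield point, here's a minimal back-of-envelope sketch using the standard Poisson yield model, with an assumed defect density of 0.1 defects/cm2 (purely an illustrative figure, not TSMC's actual 7nm data):

import math

D0 = 0.1  # assumed defects per cm^2 (illustrative only)

def poisson_yield(area_mm2, d0=D0):
    # Fraction of dies with zero defects under a simple Poisson model.
    return math.exp(-(area_mm2 / 100.0) * d0)

for area in (500, 650, 800):
    print(f"{area} mm2 die -> ~{poisson_yield(area) * 100:.0f}% defect-free")

With those assumed numbers the defect-free fraction drops from roughly 61% at 500mm2 to 52% at 650mm2 and 45% at 800mm2, before you even count how many fewer candidate dies fit on a wafer, which is why a heavily cut-down A100 isn't surprising.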
 
Just can’t see AMD taking the performance crown. Sure they may seriously close the gap, but they won’t be able to beat the new upcoming RTX Titan imo. Not that I care about that, I just want price for performance to improve as much as possible.
 
An 80 CU RDNA2 part (Nvidia killer) should be less than 600mm2 and could cause Nvidia a shed ton of problems if it indeed exists, can be mass produced and is priced well below £1k. The top-end AMD part doesn't have to outperform everything Nvidia has; it just needs to get close and cost significantly less, such that the Nvidia part(s) make no sense, e.g. an extra £500 gets you <10% gain.

It may sound silly but I think AMD can up their margins on Zen3 and offset some of that with GPUs, but can they sell that argument to the CEO and shareholders?
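
As a toy illustration of that "extra £500 for <10% gain" point (the prices and performance figures below are entirely made up, just to show the cost-per-performance maths):

# Made-up numbers only, to illustrate cost per unit of performance.
cards = {
    "hypothetical Big Navi": {"price_gbp": 650,  "rel_perf": 100},
    "hypothetical 3080 Ti":  {"price_gbp": 1150, "rel_perf": 108},  # +£500 for <10%
}

for name, c in cards.items():
    print(f"{name}: £{c['price_gbp'] / c['rel_perf']:.2f} per performance point")

On those made-up figures the dearer card works out at about £10.65 per performance point against £6.50, which is the sort of gap that makes the extra £500 hard to justify.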
Exactly!
The 80 CU rumor is starting to sound real. However, there are also 90+ CU and 120 CU SKUs rumored.
I've only heard the 80 CU part referred to as just Big Navi, though. Whatever this Nvidia killer is, not much else has been said other than that it's supposed to be Navi 23.


I suspect that the Nvidia killer will be a low to midrange card and they'll try to take Nvidia's market share from the bottom up. Small efficient die and undercut Nvidia's pricing. The 80 CU part will command as high a price as they can get away with. I may be wrong of course ;)
Very plausible. It does appear that AMD has a market strategy for Nvidia, just as they have for Intel. It will be interesting to see how all this unfolds.
 
From AMD's latest marketing video... where have I seen this design before? Is this going to be AMD's new reference design?

 
I don't think anyone is expecting RDNA2 to be like Polaris. I keep saying it, but the console R&D is being underestimated. AMD have been working with both Microsoft and Sony for years now; it will give them an advantage.
Not only that, RDNA is scalable down far enough for phones through their partnership with Samsung:
https://tech4gamers.com/samsung-exynos-soc-with-amd-rdna-gpu-is-up-to-3x-times-powerful/
https://www.tomshardware.com/news/amd-rdna-exynos-samsung-soc-smartphone-graphics

Which is why I believe AMD is starting smaller on GPUs and then scaling up (Big Navi, for example). It appears, to me, that with RDNA they start small and work their way up, with a set goal in mind for GPUs per wafer each time the die size increases. How large exactly those GPUs will be is anyone's guess.

RDNA1 still has some GCN architecture in it, from what I've read:
There are still some great parts to the ol’ GCN design – it’s a fantastic compute engine, for one – so throwing the silicon baby out with the bathwater seems a bit counter-intuitive. Therefore AMD has mostly created this first iteration of the RDNA architecture using GCN building blocks. But the main thing to note is that this is a pseudo-hybrid of the Graphics Core Next design that has been reworked specifically with gaming as its main focus.
https://www.pcgamesn.com/amd/navi-rdna-architecture-release-date-specs-performance

Looking to the future, the challenge for the next era of graphics is to shift away from the conventional graphics pipeline and its limitations to a compute-first world where the only limit on visual effects is the imagination of developers. To meet the challenges of modern graphics, AMD’s RDNA architecture is a scalar architecture, designed from the ground up for efficient and flexible computing, that can scale across a variety of gaming platforms. The 7nm “Navi” family of GPUs is the first instantiation of the RDNA architecture and includes the Radeon RX 5700 series.

The new RDNA architecture is optimized for efficiency and programmability while offering backwards compatibility with the GCN architecture. It still uses the same seven basic instruction types: scalar compute, scalar memory, vector compute, vector memory, branches, export, and messages. However, the new architecture fundamentally reorganizes the data flow within the processor, boosting performance and improving efficiency.
https://www.amd.com/system/files/documents/rdna-whitepaper.pdf

So when RDNA 2 is released we won't see any GCN architecture (that's also the rumor), which would factor in additional efficiencies. So I would assume that if you compared an RDNA 1 5700 XT against an RDNA 2 5700 XT, the latter should be notably faster while using less power. But we will see.

I also recall the interview with David Wang about multi-chip GPUs, back before Navi / RDNA 1 was released.
“We haven’t mentioned any multi GPU designs on a single ASIC, like Epyc, but the capability is possible with Infinity Fabric.”

While there was nothing definite, the suggestion was that for future GPUs Infinity Fabric would become more of a key component in the design, and would be there to ensure multiple slices of graphics silicon could communicate quickly and efficiently across a single package. Because of that the tech world expected Navi might come with some interesting new multi-GPU layouts.

And Infinity Fabric does indeed seem to be the perfect interconnect, unfortunately just because you can plug a bunch of GPUs together that doesn’t mean you should. The problem is that, especially in the gaming world, the software isn’t there to make such a discrete graphics card design worthwhile. For CPUs the infrastructure is already there, baked into the OS, to allow for multiple chips to function invisibly, such as with Ryzen’s discrete CCX design.
...
“To some extent you’re talking about doing CrossFire on a single package,” says Wang. “The challenge is that unless we make it invisible to the ISVs [independent software vendors] you’re going to see the same sort of reluctance.

“We’re going down that path on the CPU side, and I think on the GPU we’re always looking at new ideas. But the GPU has unique constraints with this type of NUMA [non-uniform memory access] architecture, and how you combine features… The multithreaded CPU is a bit easier to scale the workload. The NUMA is part of the OS support so it’s much easier to handle this multi-die thing relative to the graphics type of workload.”
...
“That’s gaming” AMD’s Scott Herkelman tells us. “In professional and Instinct workloads multi-GPU is considerably different, we are all in on that side. Even in blockchain applications we are all in on multi-GPU. Gaming on the other hand has to be enabled by the ISVs. And ISVs see it as a tremendous burden.”
https://www.pcgamesn.com/amd-navi-monolithic-gpu-design?tw=PCGN1

Now, this was said in reference to RDNA1. David and Scott point out why it wouldn't be feasible at the time. However, on the professional side it's possible we might see a GPU MCM first at some point.
 
I keep saying it, but the console R&D is being underestimated. AMD have been working with both Microsoft and Sony for years now; it will give them an advantage.

The console R&D isn't being underestimated. It's been totally overestimated by people who don't know any better. The same thing was said before the Xbox One X and Xbox One releases, but it made no difference at all.

The truth is there will be very little, if any, advantage for AMD. The Microsoft Xbox Series X, for the first time in a console, is using the exact same version of DirectX 12 Ultimate that's on the PC.
 

Your post is basically a very long way of saying what I said. AMD need a large die to compete. I reckoned it would be 500mm2 plus, and the first line of your post agrees with that. And 500mm2 on 7nm is a very big die size; like I said earlier, for reference, the 2080 Ti would be roughly 471mm2 on a 7nm process.

It's going to be expensive. That's the point: it's not going to be a small, cheap die like Eastcoasthandle suggests. And if it is a small, cheap die, then it won't compete with the 3080 Ti in any way, shape or form.

So, if you are going to reply, just answer the question. Do you think AMD will be able to compete with the 3080Ti with a small cheap die, yes or no?
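
Going back to that 471mm2 figure, here is a minimal sketch of the sum behind it, assuming TU102's published 754mm2 die on 12nm and a rough 1.6x density gain going to 7nm (an assumed round number; SRAM and analog blocks scale worse than logic, so it's only a ballpark):

tu102_area_12nm = 754        # mm^2, published TU102 (2080 Ti) die size
assumed_density_gain = 1.6   # 12nm -> 7nm, rough assumed figure

print(f"Hypothetical 7nm TU102: ~{tu102_area_12nm / assumed_density_gain:.0f} mm^2")  # ~471 mm^2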
 
Interesting times; the end of this year / early next could be good for the enthusiast.

Yes, it will be very interesting times. I am most curious about RDNA 2. How is it going to handle Ray Tracing? How much of a performance increase will it actually be, etc.? I prefer to buy AMD whenever possible and would like to see them return to being competitive at the high end.
 
Your post is basically a very long way of saying what I said. AMD need a large die to compete. I reckoned it would be 500mm2 plus, and the first line of your post agrees with that. And 500mm2 on 7nm is a very big die size; like I said earlier, for reference, the 2080 Ti would be roughly 471mm2 on a 7nm process.

It's going to be expensive. That's the point: it's not going to be a small, cheap die like Eastcoasthandle suggests. And if it is a small, cheap die, then it won't compete with the 3080 Ti in any way, shape or form.

So, if you are going to reply, just answer the question. Do you think AMD will be able to compete with the 3080Ti with a small cheap die, yes or no?
No. Simples really :p

Hell, I would be super happy if they even manage it on a large die!

What I do think is possible, and would like to see, is these smaller dies bringing around 2080 Ti performance for around £400. Remember, these are new architectures, so there will be performance gains there also, so it is very possible.
 
Large dies are bad, really bad.

Vendors need to get back to average size dies and average size prices for their top end cards.

Turing is the worst GPU architecture ever for trying to cram unready features onto huge dies for very little end user benefit.

Unfortunately for Ampere, Turing has boxed it into a corner: does it go for reasonable prices and very little performance increase, or very high prices and a 50% increase?

IMHO Ray Tracing won't become an everyday feature until the generation after Ampere.
 
Large dies are bad, really bad.

Vendors need to get back to average size dies and average size prices for their top end cards.

Turing is the worst GPU architecture ever for trying to cram unready features onto huge dies for very little end user benefit.

Unfortunately for Ampere, Turing has boxed it into a corner: does it go for reasonable prices and very little performance increase, or very high prices and a 50% increase?

IMHO Ray Tracing won't become an everyday feature until the generation after Ampere.

I think there's a very good chance you're right.
 