
NVIDIA ‘Ampere’ 8nm Graphics Cards

Very pessimistic of you, I didn't expect that!

"IF for GPU-to-GPU is not being looked at very heavily"
https://www.reddit.com/r/AMD_Stock/comments/ehmg4c/an_interview_with_amds_cto_mark_papermaster/

He is talking about Zen CPUs. A Zen 2 chiplet is 74mm² at 7nm, not 500mm²! And AMD would struggle at 5nm if it doesn't split into multiple quad-core chiplets, cutting the die size further, because TSMC has already reported 32% yields for a 100mm² chip and 80-90% for a 17.5mm² one.
So if AMD wants to maintain the same profit and prices, it has to cut the die down.

As for GPUs, forget about them at 5nm at their current sizes. They need to move to multi-chip, otherwise the yield of a 500mm² 5nm GPU would be around 5%. So maybe 2 good GPUs per whole wafer!
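
As a rough sanity check on those numbers, here is a minimal sketch using the classic Murphy yield model, seeding the defect density from the 32%-at-100mm² figure quoted above. The model choice and the fitted defect density are assumptions, so treat the output as ballpark only:

import math

def murphy_yield(area_cm2, d0):
    # Murphy model: Y = ((1 - exp(-A*D0)) / (A*D0))^2
    x = area_cm2 * d0
    return ((1 - math.exp(-x)) / x) ** 2

# Fit D0 (defects/cm^2) so a 100 mm^2 die yields ~32%, by bisection.
lo, hi = 0.01, 10.0
for _ in range(60):
    mid = (lo + hi) / 2
    if murphy_yield(1.0, mid) > 0.32:
        lo = mid
    else:
        hi = mid
d0 = (lo + hi) / 2  # comes out around 1.27 defects/cm^2

for mm2 in (17.5, 74.0, 100.0, 500.0):
    y = murphy_yield(mm2 / 100.0, d0)
    print(f"{mm2:6.1f} mm^2 -> {100 * y:5.1f}% yield")

This lands around 80% at 17.5mm², 32% at 100mm² by construction, and only a few percent at 500mm², the same ballpark as the claim above: with roughly 110 candidate 500mm² dies on a 300mm wafer, that is only a handful of good chips.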
 
Wasting your time. He will not believe you; he'll just Google something else to back up what he is saying. Lol.
 
He said that they won't make chiplet GPUs because of severe issues with coordination between them.

Erm, where did he say that in that interview?

IC: In the market today we have dual socket Rome using IF for socket-to-socket communications, and we’ve been told that the GPU line will enable GPU-to-GPU infinity fabric links at some point. Is there a technical reason we don’t see CPU-to-GPU infinity fabric connectivity right now?

MP: So we deploy IF in both CPU and GPU, and it enables us to scale very effectively. We leverage across CPU and GPU today, and it allows us to use elements of optimization that we can do using the protocol. We continue to look at where we can leverage that benefit, and where having an AMD CPU connected via IF to an AMD GPU makes sense.

This is the only GPU-based content on the page, oh, and a little about memory stuff too. Either way, we have no idea what can be done three years from now, when 5nm might be usable. To the same extent, I also wouldn't expect the yield ratio to remain as low as it is now, since it is at risk production. Previous risk-production-to-release transitions saw around a 50% increase in yield, so you could be looking at a 45%+ yield rate at 100mm² in 12 months.
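
Following the same hypothetical model as the sketch above, that 45%+ figure is just the 32% figure with a ~50% relative improvement (0.32 × 1.5 ≈ 0.48), which in that model corresponds to the defect density falling from roughly 1.27 to roughly 0.77 defects/cm²:

# Reusing murphy_yield() from the earlier sketch: a ~40% drop in defect
# density takes a 100 mm^2 die from ~32% to ~48% yield.
print(murphy_yield(1.0, 1.27))  # ~0.32
print(murphy_yield(1.0, 0.77))  # ~0.49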
 
Erm, where did he say that in that interview?

Why does he have to say it in that interview?

"Papermaster, iirc, spoke about this.

AMD's got the tech to do chiplet GPUs, they just don't know how to obfuscate/abstract them in such a way as to make them transparent to developers. As of right now, if they did it, it'd be like a more compact version of the 2Fast2FuryX. Basically, devs would be dealing with Crossfire just on the card instead of across a PCI-E port, and they don't wanna."
https://www.reddit.com/r/Amd/comments/b04506/could_chiplets_be_implemented_in_gpus/

Common sense: CrossFire is not being used today for anything, anywhere. If they had the intention to use chiplets, they could at least try to get proper CrossFire support working with all GPUs.
 

The link and what we were discussing is all about CPUs though. If you were just trying to make a general point, don't quote back to a discussion and link about CPU details :/

And further to that: yes, they can build them, and yes, it basically means working out how to dispatch work to multiple GPUs, which the APIs that software and games run on don't do well. Just like software didn't make good use of many CPU cores for ages, even after we had them. That doesn't mean it isn't the solution, or that it's not possible.

Common sense is that Xfire doesn't do well because it requires two complete GPUs to be bought, and the cooling, the case and the motherboard all to support it. That is not the same as a single GPU utilising chiplets in terms of what the end user needs to worry about, and that is a massive difference.
 
Why not both? I am getting a PS5 for sure :D

My combo will be PS5 and Switch consoles, plus a PC with an RTX 3070 or whatever £400-500 buys, as they may change naming again. As long as that money buys more than 2080 Ti performance, that should be fine.

I will be set with that for a very long time I would imagine.
Oh I agree: PC (with a powerful gfx card), Switch and PS5 will be my setup, with a One X to complement them when I CBA with the PC (I already have a One X).

I just want a reason not to pack the PC in for a Series X.
 

From the same link:

[attached image: GPU-to-GPU-chiplet-utilisation.png]

https://www.reddit.com/r/AMD_Stock/comments/ehmg4c/an_interview_with_amds_cto_mark_papermaster/
 
Basically, devs would be dealing with Crossfire just on the card instead of across a PCI-E port, and they don't wanna.

This is basically complete ******** - game developers rarely interface with CrossFire or SLI directly; it isn't like an API you program for. There are things they can do to avoid complications with CF/SLI that stop your application or game working well with it, but some features of modern rendering APIs simply can't avoid breaking compatibility. If you are lucky, AMD or nVidia might be able to produce a workaround or fix at driver level. Explicit multi-adapter changes the story hugely, because developers can then actually farm out their workload if they wish.

Common sense is that Xfire doesn't do well because it requires two complete GPUs to be bought, and the cooling, the case and the motherboard all to support it. That is not the same as a single GPU utilising chiplets in terms of what the end user needs to worry about, and that is a massive difference.

It doesn't matter - 2 fully discrete GPUs or chiplets on the same interposer/substrate is still the same story. There are some slight advantages to having a more direct connection, but it won't fundamentally change the problems with CF/SLI.

Making GPUs/chiplets transparent for compute usage is relatively trivial - making them transparent for game rendering is a whole different story and takes a completely different approach.
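
To illustrate that last point with a toy model - plain NumPy arrays standing in for two GPUs' memories, purely illustrative and nothing like what a driver actually does - elementwise "compute" splits with zero communication, while even a simple screen-space filter needs data owned by the other GPU:

import numpy as np

frame = np.random.rand(8, 8).astype(np.float32)

# Split the frame between two hypothetical GPUs: top half and bottom half.
gpu0, gpu1 = frame[:4].copy(), frame[4:].copy()

# Compute-style work (elementwise) is trivially transparent:
# each half proceeds with zero cross-GPU traffic.
assert np.allclose(np.vstack([gpu0 * 2, gpu1 * 2]), frame * 2)

# A rendering-style pass (a vertical 3-tap blur) is not: rows at the seam
# need rows that live in the *other* GPU's memory, so a "halo" row must be
# exchanged first. That traffic is what CF/SLI and chiplet links pay for.
def blur_rows(tile):
    padded = np.pad(tile, ((1, 1), (0, 0)), mode="edge")
    return (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0

blur0 = blur_rows(np.vstack([gpu0, gpu1[:1]]))[:-1]   # borrows gpu1's first row
blur1 = blur_rows(np.vstack([gpu0[-1:], gpu1]))[1:]   # borrows gpu0's last row
assert np.allclose(np.vstack([blur0, blur1]), blur_rows(frame))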
 
What about the new feature in Nvidia's drivers that works similarly to checkerboarding - GPU 1 produces all the odd pixels and GPU 2 all the even ones. From what I've seen so far, it seems to work in most games already, even though it's a hidden driver feature and not even ready for testing.
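
The odd/even split described here is easy to picture as a compositing mask. A toy NumPy sketch (gpu1_image and gpu2_image are made-up stand-ins for what each GPU would render):

import numpy as np

h, w = 4, 6
yy, xx = np.mgrid[0:h, 0:w]
odd = (xx + yy) % 2 == 1           # checkerboard: True where x+y is odd

# Pretend each GPU renders the full frame but only its own pixels are valid.
gpu1_image = np.where(odd, 1.0, np.nan)    # GPU 1 owns the odd pixels
gpu2_image = np.where(~odd, 2.0, np.nan)   # GPU 2 owns the even pixels

# Compositing the final frame is a trivial select with no gaps...
final = np.where(odd, gpu1_image, gpu2_image)
assert not np.isnan(final).any()

# ...but any effect that samples a neighbouring pixel (blur, TAA,
# screen-space reflections) immediately reads a pixel owned by the other
# GPU - exactly the data-locality issue raised in the reply below.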
 
It might increase potential GPU utilisation, but ultimately you still have the same issues with data being processed or stored on a different GPU from the one that needs it, etc.
 
It doesn't matter - 2 fully discrete GPUs or chiplets on the same interposer/substrate is still the same story. There are some slight advantages to having a more direct connection, but it won't fundamentally change the problems with CF/SLI.

Not suggesting that, but the actual end result is people buying two GPUs instead of one, and the perceived difference there is huge, as are elements such as ITX and mATX builds where one GPU is better suited in terms of actually building the PC etc.

There's a reason that only a subset of enthusiasts bought into Xfire/SLI. That was what I was referencing, rather than the direct requirements of the chiplet design.

So if you open up the chiplet approach to the whole market, rather than a small subset within a subset, then traction could be gained for better API support etc.
 
30 percent faster than the 2080 Ti, and about 50 percent faster in ray tracing, and for that probably over a thousand pounds. Does anybody really care about ray tracing? I know I don't; I turn it off and don't notice much difference. Worst gimmick since HairWorks.
 
The visual difference would be huge if developers fully implemented it and the performance was there - it is definitely not a tech that is going away.
 
I think it is the future, and it has been talked about for decades, but it's not quite there yet in implementation, nor is the hardware powerful enough (2 or 3 gens from now IMO, counting the next new cards coming soon).
 
So roughly when can we expect Ampere?

I have a hard time believing Q2; maybe early 2021. Simply because it's unlikely AMD can match anything Nvidia has this year, even with Navi 21. I don't see any incentive for them to release it early when the 2080 Ti et al. are dominating the market currently.

I hope AMD can squish Nvidia, but it's not happening; I've been waiting almost 10 years lol.
 