The Nvidia Hopper thread

Soldato
Joined
18 May 2010
Posts
22,376
Location
London
AMD have said RDNA is like Ryzen 1 and will evolve, getting better with each iteration.

Let the Ryzen metaphor wars begin :D

That's not accurate though, is it?

If RDNA was a Ryzen moment in the GPU space, it would be as good as the top Nvidia cards at 50% less price. Nvidia would be reeling like AJ against Ruiz, which simply has not happened.
 
Caporegime
Joined
17 Mar 2012
Posts
47,604
Location
ARC-L1, Stanton System
That's not accurate though, is it?

If RDNA was a Ryzen moment in the GPU space, it would be as good as the top Nvidia cards at 50% less price. Nvidia would be reeling like AJ against Ruiz, which simply has not happened.

I don't entirely agree with the metaphor, but RDNA is far better than GCN: 2560 RDNA shaders are as good as 4096 shaders from the latest iteration of GCN, and rasterization is several times better. It is a far better architecture, and it should get even better over newer iterations.
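
As a rough sanity check on that claim (assuming comparable clocks, which isn't stated anywhere in the thread): 4096 / 2560 = 1.6, so it amounts to each RDNA shader doing roughly 1.6x the effective gaming work of a GCN shader.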

Nvidia, though, are a far tougher nut to crack than Intel. Nvidia's GPU architecture, let's be honest here, is excellent; AMD have a lot of catching up to do.
 
Permabanned
Joined
2 Sep 2017
Posts
10,490
This has nothing to do with GPUs or graphics.

It has.

Navi is marketed as bringing back the "Scalability" that is missing from Vega:

[Image: Navi-Roadmap-AMD.png, AMD's Navi roadmap slide]



And everyone is talking about it:

Considering GCN can't scale past 64 ROPs, would it be possible to run them at a multiple of the core?
Yeah, I didn't hear anything about ROPs, I only heard about the 4096-shader limit; that's why none of the GCN cards ever went above that. The only way to go above it is to be X2 (e.g. CrossFire in a single card, like the Fury X2).
https://www.reddit.com/r/Amd/comments/8vu5kd/considering_gcn_cant_scale_past_64_rops_would_it/

GCN GPUs are limited to a maximum of 64 Compute Units with a total of 4096 shaders, for example, thanks to the limitations of the shader engine.
https://www.game-debate.com/news/27...raphics-cards-using-new-rdna-gpu-architecture

Vega shader count has almost no impact on gaming performance
https://linustechtips.com/main/topi...t-has-almost-no-impact-on-gaming-performance/
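
For reference, the 4096 figure falls straight out of the arithmetic (my note, not from the quoted posts): GCN tops out at 64 Compute Units, each with 64 shaders, and 64 × 64 = 4096, which is why Fiji and Vega 64 both stop exactly there.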
 
Man of Honour
Joined
13 Oct 2006
Posts
91,118
Sli / crossfire issues aren't applicable to chiplet architecture. That's nonsense.

If you simply take a monolithic GPU and break it down chiplet-style, transposing the Zen approach onto GPUs, you will still have all the SLI/CF issues even with shared memory resources and so on; no ifs or buts on that one.

MCM opens up other possibilities though, especially if you can persuade game developers to program effectively for explicit multi-adapter. For instance, you could have a bunch of "headless" packages rather than full GPU packages, which would let you create virtual GPUs by tying resources together in software and redistributing things like SM resources on the fly based on load (though there are some overhead issues to overcome there) to alleviate multi-GPU shortcomings. Or you could go another way altogether and simply unfold the whole architecture over multiple packages, in such a way that you aren't limited by current single-package sizes and are less impacted by yields, to produce an overall more powerful GPU than you could on a single package.
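
To make the "explicit multi-adapter" part concrete, here is a minimal sketch of what the developer-side setup looks like under D3D12 (my illustration, assuming the Windows SDK; nothing in the thread specifies this code). The API hands the application one independent device per adapter, and everything beyond that, splitting the frame, cross-adapter copies, fence synchronisation, is the engine's job:

```cpp
// Minimal D3D12 explicit multi-adapter setup sketch.
// Assumes the Windows SDK; link with d3d12.lib and dxgi.lib.
#include <d3d12.h>
#include <dxgi1_6.h>
#include <wrl/client.h>
#include <vector>
#include <cwchar>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory6> factory;
    if (FAILED(CreateDXGIFactory2(0, IID_PPV_ARGS(&factory)))) return 1;

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapterByGpuPreference(
             i, DXGI_GPU_PREFERENCE_HIGH_PERFORMANCE,
             IID_PPV_ARGS(&adapter)) != DXGI_ERROR_NOT_FOUND;
         ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE) continue; // skip WARP

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device)))) {
            wprintf(L"device %u: %s\n", i, desc.Description);
            // The OS hands the app one independent device per adapter; from
            // here on, splitting the frame across them, copying resources
            // between adapters and fencing is entirely the application's job.
            devices.push_back(device);
        }
    }
    return 0;
}
```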
 
Man of Honour
Joined
13 Oct 2006
Posts
91,118
No you won't. The system wouldn't need to see multiple GPUs. There wouldn't have to be any mirroring of video RAM. You can't just say there would be SLI-style issues, that's plainly incorrect.

Depends what you mean. A lot of people talk about MCM/chiplets as if you tie multiple packages together the way multiple Zen dies are tied together, but with lots of miniature GPUs. That doesn't solve having to deal with SLI problems for gaming use, no matter whether that happens at the system level or the problem is moved into hardware; there is no magic way of having the system see it as one GPU without just shifting where you deal with the problem, with all the same inadequacies.

Potentially it is solvable using some kind of network-on-chip/package-like approach, where the resources of a package on the substrate aren't tied exclusively to any one logical GPU, or by moving away from a discrete chiplet approach to something more complex.
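
A purely illustrative toy model of that pooling idea (my sketch; the Package type and the SM counts are made up, and real hardware would look nothing like host-side C++): compute resources sit in per-package pools, and a logical GPU claims them from whichever package has them idle rather than being pinned to one:

```cpp
// Toy model of pooled per-package resources; scheduling concept only,
// nothing here reflects real hardware.
#include <algorithm>
#include <array>
#include <cstdio>

struct Package { int freeSMs; };  // one chiplet/package worth of compute units

// Claim `want` SMs for one logical GPU from whichever packages have them idle.
int claimSMs(std::array<Package, 4>& pkgs, int want) {
    int got = 0;
    for (auto& p : pkgs) {
        int take = std::min(p.freeSMs, want - got);
        p.freeSMs -= take;
        got += take;
        if (got == want) break;
    }
    return got;  // may come back short under load
}

int main() {
    std::array<Package, 4> pkgs{{{16}, {16}, {16}, {16}}};
    int got = claimSMs(pkgs, 40);  // one virtual GPU spanning three packages
    std::printf("claimed %d SMs across packages\n", got);
}
```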
 

bru

Soldato
Joined
21 Oct 2002
Posts
7,360
Location
kent
Well if Nvidia are heading in that direction, they have obviously found a way to do it.
There is no way they would be heading down that route if they had to rely on current SLI or multi-adapter techniques.
 
Man of Honour
Joined
13 Oct 2006
Posts
91,118
Well if Nvidia are heading in that direction, they have obviously found a way to do it.
There is no way they would be heading down that route if they had to rely on current SLI or multi-adapter techniques.

There are a variety of techniques they could use. What I'm talking about, though, is the way many people seem to imagine it: something like the right-hand part of the diagram here:

https://techreport.com/news/32189/nvidia-explores-ways-of-cramming-many-gpus-onto-one-package/

Which is good for compute but still has all the issues of SLI for gaming usage.
 
Caporegime
Joined
18 Oct 2002
Posts
32,618
No you won't. The system wouldn't need to see multiple GPUs. There wouldn't have to be any mirroring of video RAM. You can't just say there would be SLI-style issues, that's plainly incorrect.


If you designed a multi-GPU die in the way Ryzen uses chiplets, you absolutely will have SLI issues, and the developer will have to code for SLI. Just like on Ryzen, if you want to use all the chiplets you have to explicitly code for all of them and handle complex multi-threading.
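
To illustrate the Ryzen analogy in host code (a generic sketch, not anything GPU- or Ryzen-specific; the worker count is arbitrary): the work only spreads across all the execution resources because the code carves it up explicitly, which is exactly the burden explicit multi-GPU coding would put on engines:

```cpp
// Generic sketch of "explicitly code for all of them": plain CPU threads
// stand in for chiplets/GPUs.
#include <cstdio>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    const int workers = 4;                    // stand-ins for chiplets/GPUs
    const long long items = 1'000'000;
    std::vector<long long> partial(workers, 0);
    std::vector<std::thread> pool;

    for (int w = 0; w < workers; ++w)
        pool.emplace_back([&, w] {
            // Each worker is handed an explicit, hand-carved slice of the
            // job; nothing spreads across the hardware unless the code does
            // this splitting itself.
            for (long long i = w; i < items; i += workers) partial[w] += i;
        });
    for (auto& t : pool) t.join();

    long long total = std::accumulate(partial.begin(), partial.end(), 0LL);
    std::printf("total = %lld\n", total);  // 499999500000
}
```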
 
Caporegime
Joined
18 Oct 2002
Posts
32,618
Well if Nvidia are heading in that direction, they have obviously found a way to do it.
There is no way they would be heading down that route if they had to rely on current SLI or multi-adapter techniques.

Nvidia and AMD are going that way for HPC.
 
Soldato
Joined
17 Aug 2003
Posts
20,158
Location
Woburn Sand Dunes
If you designed a multi-GPU die in the way Ryzen uses chiplets, you absolutely will have SLI issues, and the developer will have to code for SLI. Just like on Ryzen, if you want to use all the chiplets you have to explicitly code for all of them and handle complex multi-threading.

If Nvidia split the shaders up into four separate chiplets, just as an example, why would you have to do anything different from what's done now?
 