Interposers, Chiplets and...ButterDonuts?

Another good video. Very technical this time but still easy enough to understand. AMD have a big advantage over Intel and Nvidia on this front. It has already happened on CPUs (Epyc and Threadripper), with the interposer itself also being populated with transistors, and I can see GPUs going the same way.
 
I remember getting flamed on here for creating a thread for all these videos so discussion could remain in one place.

Now in return we get a new thread with a new video lol
 
I'm sure the usual suspects will be along soon to dismiss everything he has to say!

Good video though!
 
AMD have a big advantage over Intel and Nvidia on this front.

IMO not so much as people like to think. Both Intel and nVidia have been doing research on these things going back years and years; Intel has massive amounts of knowledge "banked" on the subject, more than AMD and nVidia combined. Intel might be stung a bit by the lack of practical application, as I don't think they've done much since 2007, but nVidia has massive simulation labs etc. where a lot of this stuff is tested, so will likely be less hampered in that regard.
 
According to David Wang, the new SVP of engineering for AMD's Radeon Technologies Group (RTG):

"We are looking at the MCM type of approach, but we've yet to conclude that this is something that can be used for traditional gaming graphics type of application."

"To some extent you're talking about doing CrossFire on a single package. The challenge is that unless we make it invisible to the ISVs [independent software vendors] you're going to see the same sort of reluctance."

"We're going down that path on the CPU side, and I think on the GPU we're always looking at new ideas. But the GPU has unique constraints with this type of NUMA [non-uniform memory access] architecture, and how you combine features... The multithreaded CPU is a bit easier to scale the workload. The NUMA is part of the OS support so it's much easier to handle this multi-die thing relative to the graphics type of workload."

"That's gaming. In professional and Instinct workloads multi-GPU is considerably different; we are all-in on that side. Even in blockchain applications we are all-in on multi-GPU. Gaming, on the other hand, has to be enabled by the ISVs. And ISVs see it as a tremendous burden."


https://www.pcgamesn.com/amd-navi-monolithic-gpu-design?tw=PCGN1



Don't hold your breath.

Intel, Nvidia, AMD and a host of others have been researching this kind of stuff for 10-20 years.

Nvidia have a lot of research on MCM GPUs available on their website, e.g.
http://research.nvidia.com/sites/default/files/publications/ISCA_2017_MCMGPU.pdf
 
The other thing is that, from a consumer perspective, MCM is not that interesting. The basic idea is that multiple smaller dies have higher yield than one large monolithic die, so production costs are lowered. But against that you have increased R&D, reduced economies of scale (e.g. instead of 1 million identical chips you make 100K each of 10 different chips), more expensive interposers, the requirement for much more cache, and the extra cost of manufacturing large interposers and mounting multiple chips; whatever you gain in chip yields could be lost to manufacturing failures at mounting.

If all goes well you have a slightly cheaper-to-produce GPU for the same performance, or you can scale to higher performance for the same manufacturing cost. The differences won't be large. You then have the issue that a cheaper-to-make GPU might simply lead to higher profit margins rather than cheaper products.

For HPC and professional use there is a different set of advantages. Nvidia pushed the boundaries with an 800mm² chip; an MCM design could have 8×300mm² chips, and Nvidia could sell this for 50K a pop. Getting the expected performance from HPC software is massively easier than in gaming.
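To put rough numbers on the yield argument above, here is a minimal sketch using the simple Poisson die-yield model; the defect density and assembly yield below are illustrative assumptions, not real foundry figures.

```python
# A minimal sketch of the yield argument, using the simple Poisson model
# (die yield = exp(-defect_density * area)). The defect density and assembly
# yield here are illustrative assumptions, not real foundry numbers.
import math

DEFECTS_PER_MM2 = 0.001  # assumed defect density

def die_yield(area_mm2):
    """Fraction of dies that come out defect-free under a Poisson model."""
    return math.exp(-DEFECTS_PER_MM2 * area_mm2)

def silicon_per_good_package(die_area_mm2, n_dies, assembly_yield=1.0):
    """Expected silicon area (mm^2) spent per good package, assuming bad dies
    are caught by known-good-die testing before mounting."""
    per_good_die = die_area_mm2 / die_yield(die_area_mm2)
    return n_dies * per_good_die / assembly_yield

mono = silicon_per_good_package(800, 1)                       # one 800 mm^2 die
mcm = silicon_per_good_package(200, 4, assembly_yield=0.95)   # 4 x 200 mm^2 chiplets

print(f"monolithic: {mono:.0f} mm^2 of silicon per good GPU")  # ~1781 mm^2
print(f"MCM:        {mcm:.0f} mm^2 of silicon per good GPU")   # ~1029 mm^2
# Smaller dies waste less silicon per defect, but mounting failures
# (the assumed 95% assembly yield) claw some of that gain back.
```

Even with these generous assumptions the gain is a saving in silicon cost, not performance, which is exactly why it may end up as higher margins rather than cheaper cards.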


It certainly doesn't help with performance or power efficiency. Putting together 5 small inefficient chips won't make things any better than 1 large inefficient chip. In fact, the efficiency of an MCM design is almost always worse than a monolithic chip's.
 
Interesting video. It's obvious, but binning the chips to be "glued" together has clear advantages over the monolithic approach.
 

His comment that nVidia's gimping of the GT 1030 with DDR4 is a disgusting practice, because it hurts those who can least afford to be affected by it the most, was bang on! I agree, it's about as low as you can get.
 
His comment that nVidia's gimping of the GT 1030 with DDR4 is a disgusting practice, because it hurts those who can least afford to be affected by it the most, was bang on! I agree, it's about as low as you can get.

As Gamers Nexus showed, performance on the GT 1030 drops by half or more going from GDDR5 to DDR4. This on a card where the GDDR5 version is already only just usable as a gaming card; the DDR4 version is now utterly useless for playing games, and for around the same money. The GT 1030 has already been widely reviewed, and both versions are called the GT 1030.
Doing this with the GTX 1060, where one version has 10% fewer shaders, is one thing; this is really quite cynical and nasty. For what, a few extra $ on every useless one sold? This is what you get when a company has total dominance.
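For scale, the bandwidth gap behind that performance drop is easy to compute. The sketch below uses the commonly reported specs for the two GT 1030 variants (64-bit bus; roughly 6000 MT/s GDDR5 vs 2100 MT/s DDR4), which are assumptions from public reviews rather than figures from this thread.

```python
# Back-of-the-envelope peak memory bandwidth for the two GT 1030 variants.
# Specs are the commonly reported figures (assumed, not from this thread).
def bandwidth_gb_s(bus_width_bits, data_rate_mt_s):
    """Peak bandwidth in GB/s: bytes per transfer * transfers per second."""
    return bus_width_bits / 8 * data_rate_mt_s / 1000

gddr5 = bandwidth_gb_s(64, 6000)  # GDDR5 variant: ~48 GB/s
ddr4 = bandwidth_gb_s(64, 2100)   # DDR4 variant:  ~16.8 GB/s

print(f"GDDR5: {gddr5:.1f} GB/s, DDR4: {ddr4:.1f} GB/s "
      f"({gddr5 / ddr4:.1f}x less bandwidth on DDR4)")
```

A near-3x bandwidth cut on a card that is already bandwidth-starved lines up with the halved-or-worse frame rates Gamers Nexus measured.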
 
According to David Wang, the new SVP of engineering for AMD's Radeon Technologies Group (RTG):

https://www.pcgamesn.com/amd-navi-monolithic-gpu-design?tw=PCGN1

People are concentrating far too much on just taking monolithic cores, chopping off some minor peripheral circuitry and sticking multiples of them on the same substrate. That is where CPUs are going, but it isn't the end game for GPUs (and, as per the quote from AMD, you still have the same issues with AFR etc. trying to tie them up that way).

While nVidia's diagram shows something along those lines, what they are looking at in the longer run is actually dissecting the functionality of a monolithic GPU (in some ways revisiting the early days of GPUs) and laying that out on an interposer or substrate. For instance, you move much of the command functionality into a block of its own that can interface with "headless" GPU packages, which are themselves discrete, so you can scale the graphics card up or down in a much more arbitrary fashion. You are then no longer restricted by the current thermal/power footprint or by the requirement of tying together output from multiple monolithic packages (which needs SLI/CrossFire-type functionality), and you get better resilience against poor yields. Recent advances in substrate technology will enable short enough path lengths in combination with advanced 7nm-and-below processes.
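To make that layout concrete, here is a purely conceptual sketch; every class and method in it is hypothetical and illustrates only the idea of a separate command block fanning work out to headless compute dies, not any vendor's actual design.

```python
# Purely conceptual sketch of the "command block + headless dies" idea above.
# All names here are hypothetical, not a real vendor API.
from dataclasses import dataclass

@dataclass
class HeadlessDie:
    """A compute-only chiplet: no display or command logic of its own."""
    die_id: int

    def execute(self, work_item):
        return f"die {self.die_id} ran {work_item}"

class CommandProcessor:
    """The separate front-end block: the only part software talks to.
    It splits one command stream across however many dies are mounted,
    so scaling up or down is invisible to drivers and ISVs."""
    def __init__(self, n_dies):
        self.dies = [HeadlessDie(i) for i in range(n_dies)]

    def submit(self, commands):
        # Round-robin dispatch; a real design would balance by workload.
        return [self.dies[i % len(self.dies)].execute(cmd)
                for i, cmd in enumerate(commands)]

# The same command stream runs unchanged on a 2-die or 8-die package:
for n in (2, 8):
    gpu = CommandProcessor(n)
    print(gpu.submit(["draw_0", "draw_1", "draw_2", "draw_3"]))
```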

I don't expect to see MCM designs of this nature in the next generation of releases, and possibly not even the generation after that, but I do expect those generations to show changes in architectural approach towards that eventual goal.
 
Yeah, he gets a Patreon from me as well.

The only downside is he releases a video every 2 weeks or so. I'd love it if he had the resources to release a video every 4 or 5 days. Quality might suffer with that volume, though. You just have to look at Linus Tech to see how quality drops when shoving out a video a day. Linus is now pure commercial guff; the PC World/Currys of YouTube.
 
His comment that nVidia's gimping of the GT 1030 with DDR4 is a disgusting practice, because it hurts those who can least afford to be affected by it the most, was bang on! I agree, it's about as low as you can get.

+1

At least give the card a different name, as they should've done with the 1060.
 
Yeah, another insta-watch video by Jim.
I truly like the amount of research and work he puts behind those technical videos, hence I sponsor him through Patreon.

What I do not get is the muppets down-voting technical videos :mad:
It’s probably Gregster :p

Where is he by the way? Not seen him around since he got suspended. His suspension was only for a couple of days though. Hopefully he comes back soon :)
 
Yeah, another insta-watch video by Jim.
I truly like the amount of research and work he puts behind those technical videos, hence I sponsor him through Patreon.

What I do not get is the muppets down-voting technical videos :mad:


Jim obviously doesn't bother reading what AMD themselves say about MCM.
 