Radeon pro Vega 2

Associate
Joined
24 Nov 2010
Posts
2,314
You HIGHLY underestimate the challenges of mGPU. This works because it's dedicated to compute tasks. Both AMD & Nvidia already off-loaded mGPU support to game developers going forward, and I don't see that changing.

I don't at all.

They'd use an IO die (like with Zen 2) for consumer cards. That would be all the game engine and API would see. They don't on this product because it's intended for parallel compute stuff. The software / drivers to hook this up with the graphics API would be very easy.

IF is / was the challenge. It may still be re: consumer GPUs, as the latency may still not be low enough for gaming - we don't know yet.

Also, saying they've offloaded mGPU to developers is a bit disingenuous ... they've effectively abandoned it.

As soon as latency is good enough, we will see multi GPU consumer cards from AMD.
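For anyone wondering what "off-loading mGPU to developers" actually means in practice: under DX12 and Vulkan the game itself has to discover and drive each GPU explicitly, rather than the driver hiding it. A rough Vulkan 1.1 sketch of just the discovery step is below - purely illustrative, assumes a Vulkan 1.1 loader and driver are installed, and error handling is stripped:

```cpp
// Minimal sketch: how an engine discovers explicit multi-GPU support in Vulkan 1.1.
// Assumes a Vulkan 1.1 loader/driver is present; error handling omitted for brevity.
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

int main() {
    VkApplicationInfo app{VK_STRUCTURE_TYPE_APPLICATION_INFO};
    app.apiVersion = VK_API_VERSION_1_1;          // device groups are core in 1.1

    VkInstanceCreateInfo ici{VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO};
    ici.pApplicationInfo = &app;

    VkInstance instance;
    vkCreateInstance(&ici, nullptr, &instance);

    // Ask the driver how it has grouped the physical GPUs. A linked pair
    // (e.g. two dies behind one interconnect) can be exposed as one group.
    uint32_t groupCount = 0;
    vkEnumeratePhysicalDeviceGroups(instance, &groupCount, nullptr);

    std::vector<VkPhysicalDeviceGroupProperties> groups(groupCount);
    for (auto& g : groups)
        g.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_GROUP_PROPERTIES;
    vkEnumeratePhysicalDeviceGroups(instance, &groupCount, groups.data());

    for (uint32_t i = 0; i < groupCount; ++i) {
        printf("group %u: %u physical device(s)\n", i, groups[i].physicalDeviceCount);
        // The engine still has to split work, manage per-GPU memory and
        // synchronise results itself - that is the part developers now own.
    }

    vkDestroyInstance(instance, nullptr);
    return 0;
}
```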
 
Soldato
Joined
20 Apr 2004
Posts
4,365
Location
Oxford
Looking at the PCB, with what looks like a massive PLX chip by the PCIe slot, the "infinity fabric" is a glorified internal CF bridge like what we have seen since the 290X.

If this was a card with a single GPU package but two dies and two sets of HBM2 it would be a little different. Still looks like a cool bit of engineering. Can't wait till someone rips one out of the machine to have a closer look.
 
Don
Joined
19 May 2012
Posts
17,185
Location
Spalding, Lincolnshire
Infinity Fabric on a GPU....
Finally IF connecting two GPUs.

The MI50 and MI60, which are the same silicon as this Vega Duo, already had Infinity Fabric.

This is AMD's version of NVLink. This just provides faster data transfer for Compute applications.

https://www.amd.com/system/files/documents/radeon-instinct-mi60-datasheet.pdf

Exactly...


Looking at the PCB, with what looks like a massive PLX chip by the PCIe slot, the "infinity fabric" is a glorified internal CF bridge like what we have seen since the 290X.

It's a PLX chip just to divide the PCIe lanes to support two GPU chips without any other modifications.
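To illustrate what "faster data transfer for compute" buys you in practice, here's a rough HIP sketch of a direct GPU-to-GPU (peer) copy - the kind of transfer a link like IF/xGMI is there to accelerate. Purely illustrative: it assumes a ROCm setup with two visible GPUs, the buffer size is made up, and error checks are stripped.

```cpp
// Rough HIP sketch: peer-to-peer copy between two GPUs. With peer access
// enabled, the copy goes directly GPU-to-GPU (over xGMI/Infinity Fabric when
// the hardware has such a link) rather than staging through system memory.
// Assumes a ROCm install and at least two visible devices; error checks omitted.
#include <hip/hip_runtime.h>
#include <cstdio>

int main() {
    int deviceCount = 0;
    hipGetDeviceCount(&deviceCount);
    if (deviceCount < 2) { printf("need two GPUs\n"); return 0; }

    const size_t bytes = 256 << 20;              // illustrative 256 MiB buffer

    int canAccess = 0;
    hipDeviceCanAccessPeer(&canAccess, 0, 1);    // can GPU 0 reach GPU 1 directly?

    void *src = nullptr, *dst = nullptr;
    hipSetDevice(0);
    hipMalloc(&src, bytes);
    if (canAccess) hipDeviceEnablePeerAccess(1, 0);

    hipSetDevice(1);
    hipMalloc(&dst, bytes);

    // Direct device-to-device copy between GPU 0 and GPU 1.
    hipMemcpyPeer(dst, 1, src, 0, bytes);
    hipDeviceSynchronize();

    hipFree(dst);
    hipSetDevice(0);
    hipFree(src);
    return 0;
}
```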
 
Associate
Joined
31 Dec 2008
Posts
2,284
Can grate cheese also as you know :D

If it's in the new Mac Pro then it surely can. :p

cheese-grater.jpg
 
Soldato
Joined
22 Nov 2009
Posts
13,252
Location
Under the hot sun.
Many assumed that would happen at some point.
Got me thinking whether NV would go the dual GPU route again too given that RT has caused a slowdown in raw FPS increase this generation. Would be a good time to revive SLI :).

Looks quite impressive albeit as pro cards.

When I saw the info this morning about the Apple and its 28 cores, I did wonder why not an AMD CPU, although it would have to be 32 cores.

Nvidia doesn't have space to fit two Turing chips on a single PCB. They'd need to move to HBM.
 
Soldato
Joined
28 May 2007
Posts
10,070
So are we saying this is just similar to previous dual-die GPUs like the Pro Duo and 295X2? Or is this new, and could it possibly act as one GPU if made consumer-friendly?

It's more like NVLink but done on one board. For gaming I would imagine it would just be like CrossFire, if not slightly better, just like NVLink is to SLI.
 
Caporegime
Joined
18 Oct 2002
Posts
32,618
So are we saying this is just similar to previous dual-die GPUs like the Pro Duo and 295X2? Or is this new, and could it possibly act as one GPU if made consumer-friendly?


Exactly the same as the previous dual GPUs, just using a modern interconnect.
 
Caporegime
Joined
18 Oct 2002
Posts
32,618
I don't at all.

They'd use an IO die (like with Zen 2) for consumer cards. That would be all the game engine and API would see. They don't on this product because it's intended for parallel compute stuff. The software / drivers to hook this up with the graphics API would be very easy.

IF is / was the challenge. It may still be re: consumer GPUs, as the latency may still not be low enough for gaming - we don't know yet.

Also, saying they've offloaded mGPU to developers is a bit disingenuous ... they've effectively abandoned it.

As soon as latency is good enough, we will see multi GPU consumer cards from AMD.



You couldn't be more wrong if you tried. Seriously, read some of the publications on chiplet-based GPUs; Nvidia have some great research available on their website, for example.

A fast interconnect is just one small aspect of the required changes. The entire GPU architecture will need to be vastly different, as will the APIs and game engines. There are also fundamental limits in cache latency and data coherency that cannot be solved at current fabrication sizes. We are looking at 5nm at least before mGPU designs become viable, and that will only be for HPC applications with huge data sets. Gaming presents very different workloads.
 
Man of Honour
Joined
21 May 2012
Posts
31,940
Location
Dalek flagship
I don't at all.

They'd use an IO die (like with Zen 2) for consumer cards. That would be all the game engine and API would see. They don't on this product because it's intended for parallel compute stuff. The software / drivers to hook this up with the graphics API would be very easy.

IF is / was the challenge. It may still be re: consumer GPUs, as the latency may still not be low enough for gaming - we don't know yet.

Also, saying they've offloaded mGPU to developers is a bit disingenuous ... they've effectively abandoned it.

As soon as latency is good enough, we will see multi GPU consumer cards from AMD.

NVidia have had the tech for many years to get SLI running well on a dual card; the GTX 690 is probably still the benchmark when it comes to having a hardware fix to deal with microstutter.

There is nothing to stop NVidia putting a couple of TU102 or TU104 chips on the same PCB; these chips already use less power than their high-end AMD counterparts.
 
Caporegime
Joined
18 Oct 2002
Posts
32,618
NVidia have had the tech for many years to get SLI running well on a dual card; the GTX 690 is probably still the benchmark when it comes to having a hardware fix to deal with microstutter.

There is nothing to stop NVidia putting a couple of TU102 or TU104 chips on the same PCB; these chips already use less power than their high-end AMD counterparts.


The problem is that the design of game engines since then has meant there is far more data interaction between frames. Deferred rendering is extremely hard to get working on SLI.

This is the reason both SLI and Xfire basically died at the same time. It is also the reason that any naive multi-GPU/chiplet-type design simply won't scale.
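To put a (made-up) number on that, here is a toy model of alternate-frame rendering where every frame needs the previous frame's buffers from the other GPU, so the copy sits on the critical path. All timings are hypothetical; it's just to show how the dependency eats the scaling:

```cpp
// Toy model (all timings hypothetical) of why inter-frame data dependencies
// hurt alternate-frame rendering (AFR). Assumption: render work splits evenly
// across the two GPUs, but the previous frame's buffers (e.g. temporal data in
// a deferred pipeline) must be copied from the other GPU, and that copy cannot
// be hidden, so it adds to every frame.
#include <cstdio>

int main() {
    const double renderMs   = 16.0;   // hypothetical single-GPU cost per frame
    const double transferMs = 6.0;    // hypothetical cross-GPU copy per frame
    const int    frames     = 1000;

    double singleGpuMs = frames * renderMs;                      // data stays local
    double afrMs       = frames * (renderMs / 2.0 + transferMs); // split render + copy

    printf("single GPU : %.0f ms\n", singleGpuMs);
    printf("AFR, 2 GPUs: %.0f ms  (%.2fx instead of the ideal 2x)\n",
           afrMs, singleGpuMs / afrMs);
    return 0;
}
```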
 
Associate
Joined
29 Jun 2016
Posts
529
The problem is that the design of game engines since then has meant there is far more data interaction between frames. Deferred rendering is extremely hard to get working on SLI.

This is the reason both SLI and Xfire basically died at the same time. It is also the reason that any naive multi-GPU/chiplet-type design simply won't scale.

DP is right on this one. CPUs are vastly different architectures to GPUs. GPUs don't lend themselves to being split into chiplets; too many wide interfaces that are sensitive to latency...
 
Associate
Joined
24 Nov 2010
Posts
2,314
NVidia have had the tech for many years to get SLI running well on a dual card; the GTX 690 is probably still the benchmark when it comes to having a hardware fix to deal with microstutter.

There is nothing to stop NVidia putting a couple of TU102 or TU104 chips on the same PCB; these chips already use less power than their high-end AMD counterparts.

But then you still have problems with compatibility with engines and programming, and some won't support it at all. Plus, it didn't eliminate microstutter entirely... and also, for reasons unclear to me, NVIDIA had far, far worse scaling in most games than AMD (maybe more recent NVIDIA architectures would have rectified the latter).

The whole point of mGPU (at least on the consumer side) in the future is to have several (not just two) small dies seen as a number of 'cores', as with Zen, not as discrete GPUs. Then all the compatibility issues go away, and you can potentially use quite a few much smaller chips, which will be much lower cost and higher yield.

AMD have clearly made some very large strides with regard to latency on Zen 2 and the dedicated, separate IO die, but whether that is going to be good enough for GPUs in gaming situations isn't clear.
 