Two of them: the Radeon Pro Vega II and the Pro Vega II Duo (the Duo is Apple only) https://www.amd.com/en/graphics/wor...=provegaii&utm_medium=redirect&utm_source=301
You HIGHLY underestimate the challenges of mGPU. This works because it's dedicated to compute tasks. Both AMD and Nvidia have already off-loaded mGPU support to game developers going forward, and I don't see that changing.
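For anyone wondering what "off-loaded to developers" means in practice: under DX12 and Vulkan, multi-GPU is explicit, so the application itself has to discover linked GPUs and schedule work across them. A minimal sketch using Vulkan 1.1's core device-group enumeration (assumes a Vulkan 1.1 loader and headers are installed; not tied to any particular card):

```cpp
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

int main() {
    VkApplicationInfo app{VK_STRUCTURE_TYPE_APPLICATION_INFO};
    app.apiVersion = VK_API_VERSION_1_1; // device groups are core in 1.1

    VkInstanceCreateInfo ci{VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO};
    ci.pApplicationInfo = &app;

    VkInstance instance;
    if (vkCreateInstance(&ci, nullptr, &instance) != VK_SUCCESS) return 1;

    // Ask the loader how GPUs are grouped; linked adapters show up as one
    // group with multiple physical devices that the app must drive itself.
    uint32_t groupCount = 0;
    vkEnumeratePhysicalDeviceGroups(instance, &groupCount, nullptr);
    std::vector<VkPhysicalDeviceGroupProperties> groups(
        groupCount, {VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_GROUP_PROPERTIES});
    vkEnumeratePhysicalDeviceGroups(instance, &groupCount, groups.data());

    for (uint32_t i = 0; i < groupCount; ++i)
        std::printf("group %u: %u physical device(s)\n",
                    i, groups[i].physicalDeviceCount);

    vkDestroyInstance(instance, nullptr);
    return 0;
}
```

All the scheduling, resource replication and synchronisation across those devices is then the engine's problem, which is exactly the burden being discussed here.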
Infinity Fabric on a GPU....
Finally IF connecting two GPUs.
The MI50 and MI60, which are the same GPU as this Vega Duo, already had Infinity Fabric.
This is AMD's version of NVLink. This just provides faster data transfer for Compute applications.
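A rough sketch of the kind of transfer such a link speeds up, using AMD's HIP runtime (assumes ROCm is installed and the two GPUs show up as devices 0 and 1; the buffer size is arbitrary). The copy works either way, it's just staged over PCIe through host memory when no direct peer link exists:

```cpp
#include <hip/hip_runtime.h>
#include <cstdio>

int main() {
    int canAccess = 0;
    hipDeviceCanAccessPeer(&canAccess, 0, 1);
    std::printf("GPU 0 can access GPU 1 directly: %s\n",
                canAccess ? "yes" : "no");

    float *buf0 = nullptr, *buf1 = nullptr;
    const size_t bytes = 64u << 20; // 64 MiB test buffer

    hipSetDevice(0);
    hipMalloc(&buf0, bytes);
    if (canAccess) hipDeviceEnablePeerAccess(1, 0); // enable the direct path

    hipSetDevice(1);
    hipMalloc(&buf1, bytes);

    // Device-to-device copy; rides the Infinity Fabric / xGMI link when
    // peer access is enabled, otherwise bounces through system memory.
    hipMemcpyPeer(buf1, 1, buf0, 0, bytes);

    hipFree(buf1);
    hipFree(buf0);
    return 0;
}
```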
Looking at the PCB, with what looks like a massive PLX chip by the PCIe slot, the "Infinity Fabric" is a glorified internal CF bridge like what we have seen since the 290X.
Can grate cheese also, as you know!
Many assumed that would happen at some point.
Got me thinking whether NV would go the dual GPU route again too given that RT has caused a slowdown in raw FPS increase this generation. Would be a good time to revive SLI.
Looks quite impressive albeit as pro cards.
When I saw the info this morning about the Apple announcement and the 28-core CPU, I did wonder why not an AMD CPU, although it would have to be 32 cores.
Nvidia doesn't have space to fit two Turing dies on a single PCB. It needs to move to HBM.
So are we saying this is just similar to previous dual-die GPUs like the Pro Duo and the 295X2? Or is this new, and could it possibly act as one GPU if made consumer friendly?
I don't at all.
They'd use an IO die (like with Zen 2) for consumer cards. That would be all the game engine and API would see. They don't on this product because it's intended for parallel compute stuff. The software / drivers to hook this up with the graphics API would be very easy.
IF is / was the challenge. It may still be re: consumer GPUs, as the latency may still not be low enough for gaming - we don't know yet.
Also, saying they've offloaded mGPU to developers is a bit disingenuous ... they've effectively abandoned it.
As soon as latency is good enough, we will see multi GPU consumer cards from AMD.
Nvidia has already moved to HBM with Pascal and Volta. Turing is gaming only and has no need for expensive HBM.
No HBM on consumer Pascal cards.
NVidia have had the tech for many years to get SLI running well on a dual card; the GTX 690 is probably still the benchmark when it comes to having a hardware fix to deal with microstutter.
There is nothing to stop NVidia putting a couple of TU102 or TU104 chips on the same PCB; these chips already use less power than their high-end AMD counterparts.
The problem is that the design of game engines since then has meant there is far more data interaction between frames. Deferred rendering is extremely hard to get working on SLI.
This is the reason both SLI and Xfire basically died at the same time. It is also the reason that any naive multi-GPU/chiplet-type design simply won't scale.
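To make the inter-frame dependency point concrete, here's a toy C++ sketch (no real engine or API involved; the frame/dependency model is invented for illustration) showing why alternate-frame rendering degenerates once a renderer reads the previous frame's output:

```cpp
#include <cstdio>

int main() {
    const int frames = 8;
    int crossGpuCopies = 0;

    for (int frame = 0; frame < frames; ++frame) {
        const int gpuThisFrame = frame % 2;       // AFR: alternate GPUs per frame
        const int gpuLastFrame = (frame + 1) % 2;

        const bool readsLastFrame = frame > 0;    // e.g. TAA needs frame N-1
        if (readsLastFrame && gpuThisFrame != gpuLastFrame) {
            ++crossGpuCopies;                     // history buffer must be shipped over
            std::printf("frame %d: copy history GPU%d -> GPU%d\n",
                        frame, gpuLastFrame, gpuThisFrame);
        }
    }
    std::printf("%d of %d frames needed a cross-GPU transfer\n",
                crossGpuCopies, frames);
    return 0;
}
```

Every frame after the first needs its predecessor's data pulled across from the other GPU, which eats the bandwidth and latency budget that AFR was supposed to save.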