
AMD VEGA confirmed for 2017 H1

I'll be very surprised if we see a system that can make multiple GPU cores appear as one with Navi, let alone Vega, for gaming purposes - there are just so many levels of problems involved when it comes to gaming, versus something like using multiple GPUs for compute tasks, that it's almost impossible to solve in hardware with current architectures; it would need a paradigm shift in hardware approach to be realistically feasible.
 
I'll be very surprised if we see a system that can make multiple GPU cores appear as one with Navi, let alone Vega, for gaming purposes - there are just so many levels of problems involved when it comes to gaming, versus something like using multiple GPUs for compute tasks, that it's almost impossible to solve in hardware with current architectures; it would need a paradigm shift in hardware approach to be realistically feasible.

It'll happen in the future, as it's the only way to avoid issues with Moore's Law. I just don't see it happening within the next 2-3 years at a minimum.
 
Yeah, not ruling out it happening eventually; it's just a very complex and resource-demanding hurdle to get it working, one that just isn't worthwhile until you actually are limited by things like Moore's Law.
 
I'll be very surprised if we see a system that can make multiple GPU cores appear as one with Navi, let alone Vega, for gaming purposes - there are just so many levels of problems involved when it comes to gaming, versus something like using multiple GPUs for compute tasks, that it's almost impossible to solve in hardware with current architectures; it would need a paradigm shift in hardware approach to be realistically feasible.

As I said to D.P, no one said it was easy, but don't dismiss it out of hand because it's difficult; not one of us knows how long anyone has been working on solving this problem. I do know ATI, back when they still were ATI, said they wanted CF scaling to be entirely independent of software and game developers.
It's not a new idea, it's a very old one, so the chances are AMD (who now own what was ATI) have been working on this for more than a decade. Making CPUs act as one is not the same thing as making multiple GPUs act as one, but they are also not completely different things; at the very least, success with this in Ryzen has laid the groundwork.
 
Genuine question here guys since we are talking about mgpu.

How come crossfire worked so well a few years back, compared to where we are today, when all of a sudden it's an incredibly difficult thing to do? I know it's a shambles now, but 2 years and more ago it was the best it's ever been; so many titles just worked out of the box, just by ticking the little box in CCC which said something along the lines of apply crossfire to any game that doesn't have a profile. When I had 7950 crossfire, gaming was just fantastic with very little stress. It just worked all the time.
 
Oh stop with the "I'm smarter than you" ^^^^^ you're not; you use that card with everyone who disagrees with you, and it's getting old and tiresome.
There is nothing to be misunderstood in "multiple GPUs acting as one", get real. No matter how complex you try to make it, so you can then claim intellectual arguments won by trying to bamboozle people, what he said was utterly simple.

You talk nothing but gibberish and claim it's because you're smarter than everyone, when in actual fact the subject, and what was said, is very simple.


Stop making statements that make it clear you don't actually understand, and then people won't appear more intelligent than you.

"multiple GPU's acting as one"is such an empty statement. Crossfire drivers make multiple GPUs act as one, so what. If you actually want scalability then you need to load balance.
 
Genuine question here guys since we are talking about mgpu.

How come crossfire worked so well a few years back, compared to where we are today, when all of a sudden it's an incredibly difficult thing to do? I know it's a shambles now, but 2 years and more ago it was the best it's ever been; so many titles just worked out of the box, just by ticking the little box in CCC which said something along the lines of apply crossfire to any game that doesn't have a profile. When I had 7950 crossfire, gaming was fantastic on crossfire.

Things like pixel shaders happened - multi-GPU runs into huge issues, i.e. latency penalties that would actually result in less performance than a single GPU when one frame, or part of a frame, requires data to render that is on another GPU. The more complex a rendering API, the easier it is to break compatibility with multi-GPU functionality.
 
Genuine question here guys since we are talking about mgpu.

How come crossfire worked so well a few years back, compared to where we are today, when all of a sudden it's an incredibly difficult thing to do? I know it's a shambles now, but 2 years and more ago it was the best it's ever been; so many titles just worked out of the box, just by ticking the little box in CCC which said something along the lines of apply crossfire to any game that doesn't have a profile. When I had 7950 crossfire, gaming was fantastic on crossfire.

As Roff said, AMD's driver team shifted priorities towards sorting out single-GPU performance, as they essentially stopped making big dual-GPU cards to compete with NVIDIA.

At the same time, they probably hoped DX12 and Vulkan would be adopted more quickly, since explicit multi-adapter is superior in performance and frame times, with minimal driver involvement.
 
Things like pixel shaders happened - multi-GPU runs into huge issues, i.e. latency penalties that would actually result in less performance than a single GPU when one frame, or part of a frame, requires data to render that is on another GPU. The more complex a rendering API, the easier it is to break compatibility with multi-GPU functionality.

Thanks guys that makes sense.
 
As I said to D.P, no one said it was easy, but don't dismiss it out of hand because it's difficult; not one of us knows how long anyone has been working on solving this problem. I do know ATI, back when they still were ATI, said they wanted CF scaling to be entirely independent of software and game developers.
It's not a new idea, it's a very old one, so the chances are AMD (who now own what was ATI) have been working on this for more than a decade. Making CPUs act as one is not the same thing as making multiple GPUs act as one, but they are also not completely different things; at the very least, success with this in Ryzen has laid the groundwork.

I never dismissed it out of hand, quite the opposite:
While I agree that utilizing multiple smaller GPU dies on some kind of interposer with shared memory has many benefits,


I think a multi-die solution for GPUs is absolutely the future, but Infinity Fabric doesn't make that magically happen and not even Navi will do that seamlessly.
 
Genuine question here guys since we are talking about mgpu.

How come crossfire worked so well a few years back, compared to where we are today, when all of a sudden it's an incredibly difficult thing to do? I know it's a shambles now, but 2 years and more ago it was the best it's ever been; so many titles just worked out of the box, just by ticking the little box in CCC which said something along the lines of apply crossfire to any game that doesn't have a profile. When I had 7950 crossfire, gaming was just fantastic with very little stress. It just worked all the time.

Probably because, with game engines and APIs getting more and more complex, it's becoming increasingly difficult to tie multiple GPUs together through that software.
 
Thanks, that makes sense.

It's not quite accurate, but to give an example - say you are rendering a reflection (which usually portrays a different part of the scene than where it is rendered) that is split across 2 different parts of the screen, with a different card working on each part. You either end up duplicating the workload on each card to render the reflection, or render the reflection on one card while the 2nd card waits for it to be done and then copies it across. (There are actually ways around this, and the challenge is a bit different with AFR, but it serves as an illustration of the challenge.)

Older games would have either lacked reflections or just used a pre-computed environment map that looked vaguely like the scene rather than being computed in realtime from the actual scene, and so would be immediately available to both GPUs.
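
To put rough numbers on the trade-off, here's a toy cost model; the millisecond figures below are made up purely for illustration and aren't measurements from any real game or card:

```python
# Toy cost model for the reflection example above (illustrative numbers only).
# Two ways of handling a reflection that both screen halves need:
#   1) duplicate: each GPU renders the reflection itself, then its own half.
#   2) serialise: GPU0 renders the reflection, GPU1 waits for it to be copied over.

reflection_ms = 3.0   # hypothetical cost of rendering the reflection once
half_frame_ms = 8.0   # hypothetical cost of rendering one half of the frame
copy_ms = 2.0         # hypothetical cost of copying the reflection between GPUs

# Strategy 1: the reflection work is done twice, but the GPUs stay in lock-step.
duplicate_ms = reflection_ms + half_frame_ms

# Strategy 2: GPU1 sits idle until GPU0 finishes the reflection and the copy lands.
serialise_ms = reflection_ms + copy_ms + half_frame_ms

# A single GPU rendering the whole frame, for comparison.
single_gpu_ms = reflection_ms + 2 * half_frame_ms

print(f"duplicate work  : {duplicate_ms:.1f} ms/frame")
print(f"serialise + copy: {serialise_ms:.1f} ms/frame")
print(f"single GPU      : {single_gpu_ms:.1f} ms/frame")
```

Either way the second card buys a lot less than double the performance, and the more of a frame that depends on shared intermediate results, the worse it gets.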
 
Things like pixel shaders happened - multi-GPU runs into huge issues, i.e. latency penalties that would actually result in less performance than a single GPU when one frame, or part of a frame, requires data to render that is on another GPU. The more complex a rendering API, the easier it is to break compatibility with multi-GPU functionality.


Other things as well, such as the deferred rendering used in some modern game engines, just don't play nicely with multi-GPU. And in general, modern game engine designs have far more data dependencies. Things like global illumination mean the rendering of one object will strongly influence the rendering of another.
 
Probably because, with game engines and APIs getting more and more complex, it's becoming increasingly difficult to tie multiple GPUs together through that software.

Aye, I find that in some cases multi-GPU is now as bad as it was when I had my 7950GX2.

The 4870X2 was my pride and joy for years, and AMD was on such a great roll for multi-GPU performance from then right up until Hawaii.
 
I never dismissed it out of hand, quite the opposite:
I think a multi-die solution for GPUs is absolutely the future, but Infinity Fabric doesn't make that magically happen and not even Navi will do that seamlessly.

Now he has a crystal ball.

Tell Raja not to bother, you have foreseen it all :D

I think your worry is that it will happen - just the wrong company getting to it too soon, as it always is with you throughout this thread.
 
It's not quite accurate, but to give an example - say you are rendering a reflection (which usually portrays a different part of the scene than where it is rendered) that is split across 2 different parts of the screen, with a different card working on each part. You either end up duplicating the workload on each card to render the reflection, or render the reflection on one card while the 2nd card waits for it to be done and then copies it across. (There are actually ways around this, and the challenge is a bit different with AFR, but it serves as an illustration of the challenge.)

Older games would have either lacked reflections or just used a pre-computed environment map that looked vaguely like the scene rather than being computed in realtime from the actual scene, and so would be immediately available to both GPUs.

Understood completely; in simple terms it's just harder now due to more complex methods.
 
Infinity fabric is a really fast interconnect. Fast enough for CPUs to synchronize their L1 caches with. Think about this for a second: when a CPU issues a cache fence instruction (store/load fence) it ends up exchanging messages with other CPU cores. We are talking about the fastest thing after direct register-file access.

Now, there are many ways you could use it in GPUs. One way would be to use it instead of PCIe and rely on CrossFire software or mGPU/DX12, but that is really not new: you will still have the current issues of SLI/CrossFire (maybe less stuttering), or you will need devs to write specifically for it.

The second way would be to create a single GPU as an MCM (multi-chip module), same as Ryzen/Naples. Again, there are many ways to go about it:

1) You can create a single "control" module with a shared memory controller, geometry engine, HBCC, hardware scheduler, etc. Then you create multiple modules with the shaders. You use Infinity Fabric to connect them because it is that damn fast. This way you have a single GPU, so software is not affected at all, but it is dirt-cheap to make because you produce small shader modules (e.g. 512 shaders in each) and can build cards with many of them: a 4096-shader card would have 8 x 512-shader modules, a 2048-shader card would have 4 such modules, etc. The cost per shader module is dirt-cheap because they are small, so yields should be astronomical. So you have a single shader-module design and maybe 2 control-module designs (high end / low end, because ROPs, TMUs, etc. are not one-size-fits-all) and you mix and match.

2) You create a multi-chip card where the "control module" is smart enough to synchronize HBCC caches with other "control modules". I believe this is the approach they are taking, because that's what the HBCC does: it's similar to Ryzen's memory controller, and Infinity Fabric takes care of consistency and of moving data around as each chip works on it. This is closer to having 2 "traditional GPUs" on the same MCM which act as one (they share the same memory address space and let the HBCC manage data movement).

3) You do both.

If they can get this to work (and I don't see why not, they managed to get it working with Ryzen) then Navi will be dirt-cheap to make even in a 6144-shader configuration.
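
As a back-of-the-envelope sketch of the mix-and-match idea in option 1: the 512-shader module size and the 4096/6144 totals come from the post above, but everything else here (SKU names, module counts, the two control dies) is made up purely to illustrate the arithmetic:

```python
# Toy illustration of "one small shader module, mix and match" from option 1 above.
# Only the 512-shader module size and the 4096/6144 totals echo the post;
# the SKU names and module counts are hypothetical.

SHADERS_PER_MODULE = 512

def build_sku(name, modules, control_die="high-end"):
    """Describe a hypothetical SKU assembled from identical shader modules."""
    return {
        "name": name,
        "control_die": control_die,
        "modules": modules,
        "total_shaders": modules * SHADERS_PER_MODULE,
    }

lineup = [
    build_sku("entry", 4, control_die="low-end"),  # 4 x 512 = 2048 shaders
    build_sku("mid", 6),                           # 6 x 512 = 3072 shaders
    build_sku("high-end", 8),                      # 8 x 512 = 4096 shaders
    build_sku("halo", 12),                         # 12 x 512 = 6144 shaders
]

for sku in lineup:
    print(f"{sku['name']:>8}: {sku['modules']:>2} modules x {SHADERS_PER_MODULE} "
          f"= {sku['total_shaders']} shaders ({sku['control_die']} control die)")
```

The point is just that every card in the stack reuses the same tiny, high-yield shader die; only the module count and the control die change.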
 
Oh stop with the "I'm smarter than you" ^^^^^ you're not; you use that card with everyone who disagrees with you, and it's getting old and tiresome.
There is nothing to be misunderstood in "multiple GPUs acting as one", get real. No matter how complex you try to make it, so you can then claim intellectual arguments won by trying to bamboozle people, what he said was utterly simple.

You talk nothing but gibberish and claim it's because you're smarter than everyone, when in actual fact the subject, and what was said, is very simple.

And yes, I know enough about it to know gibberish when I see it.



Same thing, so again...

It's called pseudointellectualism.
 
Here is a video of the Cycles render engine rendering a scene using 2 Nvidia GPUs (of different models). https://youtu.be/dreR2z8Kgyk?t=6m34s
The cards don't need to be in SLI to function, they just need to have drivers installed.

The scene is copied onto both GPUs and rendered out. From memory, Cycles works by firing a "photon" and bouncing it around the scene till it hits the camera. That means that parts of the image that haven't been rendered out yet can affect the reflections in the part of the scene that is currently being rendered. I must admit I'm not entirely sure how it all works from a coding perspective, however it does show that it is possible for 2 cards to render out different parts of a scene simultaneously.
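
For what it's worth, enabling every detected card for Cycles is just a preferences toggle; the snippet below is a rough sketch of doing the same from Blender's Python console (the property names assume a 2.8-or-later build with CUDA cards, and it only runs inside Blender's bundled Python):

```python
import bpy  # Blender's Python API; this only works inside Blender itself

# Point Cycles at GPU compute devices (CUDA here; other backends exist too).
prefs = bpy.context.preferences.addons['cycles'].preferences
prefs.compute_device_type = 'CUDA'
prefs.get_devices()  # refresh the detected device list

# Tick every detected card, mixed models included - no SLI/CrossFire needed.
for device in prefs.devices:
    device.use = True
    print("enabled:", device.name)

# Tell the current scene to render on the devices selected above.
bpy.context.scene.cycles.device = 'GPU'
```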

The biggest issue seems to be distributing the workload and syncing up memory. HBCC could potentially take care of the memory issue. As for distributing the workload, either AdoredTV or NerdTechGasm mentioned that with the NCU coming in Vega there should be an improved load-balancing system, which was one of the things that bottlenecked the Fury cards.

Edit: Found the video. Watch till 16:25
https://youtu.be/m5EFbIhslKU?t=13m33s

Look at the AMD slide for improved load balancing. I believe that each row after the intelligent workgroup distributor block refers to a geometry processing engine. If you keep watching, he says that GCN can support a maximum of 4 of these.

So a few things to consider: why haven't they shown only 4 rows? It would fit in the slide. A few theories:

1. Vega has more than 4 shader engines and AMD didn't want to reveal this.
2. The new IWD can scale to as many geometry engines as are present.
3. AMD just thought it looked better and it means nothing.
 