
Multi-GPU Vendor Support Confirmed

Neevvveeeer goinnnnggg to happppeeeen.

At most you will get mixed-brand multi-GPU support for desktop/2D work. For gaming... if it happens I will eat EVERYBODY'S hats.
 
Theoretically if they both support a standard feature in a standard way there is no reason why an abstraction layer couldn't be utilised to tie their capabilities into one pool.

Theory goes out the window quick though.

Both Radeon and GeForce are DX compliant, yet both vendors have to rewrite shaders for every major game and make all kinds of other vendor-specific tweaks that we have come to demand as "minimum acceptable effort". I fully expect this to continue until, 5-6 years from now, the next API hype train comes along and this conversation kicks up again.
 
I am sure Nvidia will bring out a patch in future drivers to fix an, err, problem caused by running their cards with AMD cards.

The only thing Nvidia have got left to do is decide what the problem to be fixed should be. :D
 
Neevvveeeer goinnnnggg to happppeeeen.

At most you will get mixed-brand multi-GPU support for desktop/2D work. For gaming... if it happens I will eat EVERYBODY'S hats.

It's already happened, dude.

http://en.wikipedia.org/wiki/Hydra_Engine

It just didn't catch on because of the same old story - no one supported it. But it can easily be done, trust me on that.

*IF* DX12 is unified then it wouldn't be hard at all to make different-brand GPUs scale, so long as they're running the same API and support the same instructions. It works in OGL, let's put it that way.

Edit: note the word *several* GPUs.

Not two, three or four, but several. There's no reason on earth why DX12 cannot unify GPU tech and spread the load over as many as you can fit.
 
Theory goes out the window quick though.

Both Radeon and GeForce are DX compliant, yet both vendors have to rewrite shaders for every major game and make all kinds of other vendor-specific tweaks that we have come to demand as "minimum acceptable effort". I fully expect this to continue until, 5-6 years from now, the next API hype train comes along and this conversation kicks up again.

The reason for this is that AMD and Nvidia have different implementations of the abstraction layers for memory management, rendering paths and error catching within their drivers. But with DX12, Vulkan etc., the driver is a dumb abstraction layer, so different GPUs should look identical to a programmer, with the only differences being performance and a small number of feature differences, as long as the driver conforms to spec.

GPUs are essentially treated like CPUs with DX12, Vulkan etc. All x86 CPUs look the same to the programmer; it comes down to differences in optional features and performance.
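A rough sketch of that idea against the real DXGI/D3D12 enumeration calls (assuming a Windows 10 machine with the D3D12 headers; nothing in it branches on the vendor, the adapter description is only used for logging):

```cpp
// Sketch: enumerate every adapter and create a D3D12 device on each one.
// The same code path is taken whether the adapter is AMD, Nvidia or Intel.
#include <dxgi1_4.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <vector>
#include <cwchar>

using Microsoft::WRL::ComPtr;

std::vector<ComPtr<ID3D12Device>> CreateDeviceOnEveryAdapter()
{
    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return devices;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc{};
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip the WARP/software adapter

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
        {
            wprintf(L"Created device on: %s\n", desc.Description);
            devices.push_back(device);
        }
    }
    return devices;
}
```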
 
Nvidia don't even let GPUs with different memory amounts SLI; I don't think we will see AMD + Nvidia working on the same scene.

That is because of a limitation of current DirectX and the need to use Alternate Frame Rendering for best performance, which only works at its best with matched cards, and even then the performance is not as good as it could be.

SLI and CrossFire are bolted on top of DirectX within the driver, whereas with multi-GPU rendering in DX12, Vulkan etc. the application directly controls the GPUs, so the graphics engine can handle a variety of different multi-GPU rendering methods itself.
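As a sketch of what "the application directly controls the GPUs" means in practice (assuming a device has already been created per adapter, as in the earlier snippet): the engine holds its own queue, allocator and command list for each device and decides for itself what work to record and submit on each. The GpuContext struct here is just an illustration, not anything the API provides.

```cpp
// Sketch: per-GPU context owned by the application rather than the driver.
// Nothing here is SLI/CrossFire; each device is driven independently and the
// renderer decides how to divide the frame between them.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

struct GpuContext
{
    ComPtr<ID3D12Device>              device;
    ComPtr<ID3D12CommandQueue>        queue;
    ComPtr<ID3D12CommandAllocator>    allocator;
    ComPtr<ID3D12GraphicsCommandList> list;
};

bool InitGpuContext(ID3D12Device* device, GpuContext& ctx)
{
    ctx.device = device;

    D3D12_COMMAND_QUEUE_DESC queueDesc{};
    queueDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    if (FAILED(device->CreateCommandQueue(&queueDesc, IID_PPV_ARGS(&ctx.queue))))
        return false;

    if (FAILED(device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                              IID_PPV_ARGS(&ctx.allocator))))
        return false;

    if (FAILED(device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                         ctx.allocator.Get(), nullptr,
                                         IID_PPV_ARGS(&ctx.list))))
        return false;

    ctx.list->Close(); // command lists are created in the recording state
    return true;
}
```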
 
Nvidia don't even let GPUs with different memory amounts SLI; I don't think we will see AMD + Nvidia working on the same scene.

From what I understand DX12 will not care what GPU it sees. All it will see is a GPU, brand irrelevant.

Nvidia and AMD don't have to do anything to make this work, that's down to Microsoft.

Obviously over the years they've been collecting ideas for it, and Hydra was a damn good idea; it just never caught on because it was proprietary and you had to buy a motherboard with a Lucid chip on it.

But it did work. IIRC TTL tested it with four GPUs, two Nvidia and two ATI, and it worked fine when supported properly.
 
From what I understand DX12 will not care what GPU it sees. All it will see is a GPU, brand irrelevant.

Nvidia and AMD don't have to do anything to make this work, that's down to Microsoft.

Obviously over the years they've been collecting ideas for it, and Hydra was a damn good idea; it just never caught on because it was proprietary and you had to buy a motherboard with a Lucid chip on it.

But it did work. IIRC TTL tested it with four GPUs, two Nvidia and two ATI, and it worked fine when supported properly.

that isn't quite right
DX12 is low level, which means it puts what commands to use, and when, directly in the developers' hands... it won't be down to Nvidia, or AMD, or even Microsoft to get it working, it will be squarely in the developers' hands, which is why some people are being sceptical

DX11 was highly abstracted, meaning it didn't care what GPU was on the other end; it just blindly spat commands out to the drivers. But DX11 itself was a black box, so it left a lot of optimisation work to be done by driver teams intercepting the incoming DX calls and resequencing them into a set of instructions that made more sense for their hardware

with DX12 being closer to the metal it should be the exact opposite of what you just said: the game developer will need to know what the hardware is and code accordingly

MS could build up a decent set of example code and tools to make this easier, but ultimately performance issues are going to land squarely at the feet of game developers (which is what they said they wanted, despite years of many of the biggest developers completely shrugging off basic optimisation)

DX12 was described as having a very low level of abstraction, for best performance... the idea of a single pool of GPU resources built from any and all GPU hardware inserted in a system is a very high level of abstraction, which seems at odds with the whole premise of DX12

it does sound plausible that you could use split frame rendering and send each GPU only as big a chunk as it is capable of handling, but the performance gain probably wouldn't be anywhere near the maximum you could otherwise get from that particular GPU; it shouldn't be worse than not having it, though
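A back-of-the-envelope sketch of that chunking idea: split the frame into horizontal bands whose heights follow each GPU's estimated throughput. The weights here are made-up numbers; a real engine would have to measure them.

```cpp
// Sketch: divide a frame into horizontal bands proportional to each GPU's
// estimated throughput. The weights are placeholders, not measured values.
#include <vector>
#include <cstdio>

struct Band { int y0; int y1; };

std::vector<Band> SplitFrame(int frameHeight, const std::vector<float>& weights)
{
    std::vector<Band> bands;
    float total = 0.0f;
    for (float w : weights) total += w;

    int y = 0;
    for (size_t i = 0; i < weights.size(); ++i)
    {
        int h = (i + 1 == weights.size())
                    ? frameHeight - y // last band takes the remainder
                    : static_cast<int>(frameHeight * weights[i] / total);
        bands.push_back({ y, y + h });
        y += h;
    }
    return bands;
}

int main()
{
    // e.g. a fast card and a slower one: a 70/30 split of a 1080-line frame
    for (const Band& b : SplitFrame(1080, { 0.7f, 0.3f }))
        std::printf("rows %d..%d\n", b.y0, b.y1);
}
```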
 
DX12 was described as having a very low level of abstraction, for best performance... the idea of a single pool of GPU resources built from any and all GPU hardware inserted in a system is a very high level of abstraction, which seems at odds with the whole premise of DX12

it does sound plausible that you could use split frame rendering and send each GPU only as big a chunk as it is capable of handling, but the performance gain probably wouldn't be anywhere near the maximum you could otherwise get from that particular GPU; it shouldn't be worse than not having it, though

Any GPU that runs DX12 should be seen no differently when it comes to the majority of DX12 commands, regardless of the underlying hardware, because of the API. So one set of code should run equally well on both cards, with only the performance of the underlying hardware determining execution time. It's similar to an x86 application using the same supported extensions on an AMD or Intel processor: the underlying hardware determines the performance while the code stays the same.

The reason that DX11-and-under cards are not like this is the different implementations of memory management and rendering pathways within the drivers, which differ between the two manufacturers. That is why one set of code might run better on one manufacturer's hardware than the other's, and why they have to implement fixes and even completely replace shader code to make games run better with their drivers.
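The x86 comparison, for what it's worth: the same binary asks the CPU what it supports and picks a code path, and only the speed differs between an AMD and an Intel part. A minimal illustration using GCC/Clang's __builtin_cpu_supports (the path names are just for the example):

```cpp
// Sketch of the x86 analogy: identical code runs on AMD or Intel, the CPU
// reports which extensions it supports, and only performance differs.
// __builtin_cpu_supports is a GCC/Clang builtin for x86 targets.
#include <cstdio>

static const char* pick_path()
{
    if (__builtin_cpu_supports("avx2")) return "AVX2 path";
    if (__builtin_cpu_supports("sse2")) return "SSE2 path";
    return "scalar path";
}

int main()
{
    // The same binary makes the same decision regardless of CPU vendor.
    std::printf("using %s\n", pick_path());
}
```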
 
Will be interesting for mGPU when new interconnects come. Imagine having bandwidth in the hundreds of GB/s rather than the piddling 32GB/s we almost have now. This will open up new applications and probably new rendering methods.
 
Would be interesting to see a CPU with a decent amount of HBM on die as fast "near" memory, or even to replace the "far" memory in the RAM slots with HBM or HMC, albeit with higher latencies in comparison.
 
Any GPU that runs DX12 should be seen no differently when it comes to the majority of DX12 commands, regardless of the underlying hardware, because of the API. So one set of code should run equally well on both cards, with only the performance of the underlying hardware determining execution time. It's similar to an x86 application using the same supported extensions on an AMD or Intel processor: the underlying hardware determines the performance while the code stays the same.

The reason that DX11-and-under cards are not like this is the different implementations of memory management and rendering pathways within the drivers, which differ between the two manufacturers. That is why one set of code might run better on one manufacturer's hardware than the other's, and why they have to implement fixes and even completely replace shader code to make games run better with their drivers.

Do you have direct experience of this? or is this "glide is hardcoded on the GPU" all over again?
 
Do you have direct experience of this? or is this "glide is hardcoded on the GPU" all over again?

No, nothing like that; there is enough info around pointing to the same thing in this regard - how drivers need updating for every game, developer blogs, etc. I was just mistaken before with Glide; it seemed perfectly plausible with how I interpreted the information, and it was a while ago when it was something common.

Watch a few of the developer videos about Vulkan and Mantle etc. The newest Vulkan ones all talk about the same code working across equipment from different manufacturers in an identical way; that's how dumb APIs work - API calls should be treated in an identical way by the driver, just as with the x86 CPU analogy I gave. It is only because of the masses of abstraction, and the different implementations of that abstraction, that no two manufacturers' drivers treat the same DirectX code in an identical way.
 
talking and doing are two entirely different things... you are just guessing based on what someone has "said", and missing at least one whole level of complexity (that I know of without spending much time thinking about it) that would be needed to get this to work
 
well, as long as the hardware supports the feature set and the driver converts the calls from the API into calls that the hardware understands, and back again, flawlessly, then the calls handled by Company A's driver should be identical to the calls handled by Company B's driver. The software developer should only see the API interface and any extra hardware capabilities, such as the Tier 1-3 stuff and whether the hardware is a direct (immediate-mode) renderer or tile based, maybe some relative performance metrics as well.

But besides that, if the hardware is running to the spec, then a call sent to either card should work in an identical way, and the rest is just down to the performance of the hardware, as with x86 CPUs. In principle the situation is the same for DirectX 11 and under, because that is how a specification and an API work; the only reason it is skewed and broken with DirectX 11 as an API is the differences in abstraction implementation, which are independent of the API spec.

but as long as the cards are running to spec, and the developer sees the cards in terms of a few extra capabilities and relative performance metrics, then they should be able to produce a multi-GPU renderer that can use underlying hardware from any manufacturer no differently to using cards from the same manufacturer, as long as the manufacturer has not artificially placed locks in the driver.
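For reference, querying that "Tier 1-3 stuff" through the real D3D12 API looks roughly like this; the developer reads capability values from the device rather than checking whose badge is on the card:

```cpp
// Sketch: query per-device capability tiers through the API instead of
// branching on the vendor. ResourceBindingTier and TiledResourcesTier are
// fields of D3D12_FEATURE_DATA_D3D12_OPTIONS.
#include <d3d12.h>
#include <cstdio>

void PrintTiers(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS options{};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                              &options, sizeof(options))))
    {
        std::printf("Resource binding tier: %d\n", options.ResourceBindingTier);
        std::printf("Tiled resources tier:  %d\n", options.TiledResourcesTier);
    }
}
```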
 
that isn't quite right
DX12 is low level, which means it puts what commands to use, and when, directly in the developers' hands... it won't be down to Nvidia, or AMD, or even Microsoft to get it working, it will be squarely in the developers' hands, which is why some people are being sceptical

Correct me if I am wrong, but if it's hard coded into the API that means nothing has to be done, right?

I.e. it will instinctively know what to do with more than one GPU?
 
Correct me if I am wrong, but if it's hard coded into the API that means nothing has to be done, right?

I.e. it will instinctively know what to do with more than one GPU?

Nothing to do with multi-GPU is hard coded into the API. The API is made so that an application directly drives the GPU; the application needs to have support coded into its renderer to handle multiple GPUs, which is in the hands of the application developer and not the driver developer. The driver is just a dumb abstraction that converts API calls into hardware calls.
 
well, as long as the hardware supports the feature set and the driver converts the calls from the API into calls that the hardware understands, and back again, flawlessly, then the calls handled by Company A's driver should be identical to the calls handled by Company B's driver. The software developer should only see the API interface and any extra hardware capabilities, such as the Tier 1-3 stuff and whether the hardware is a direct (immediate-mode) renderer or tile based, maybe some relative performance metrics as well.

But besides that, if the hardware is running to the spec, then a call sent to either card should work in an identical way, and the rest is just down to the performance of the hardware, as with x86 CPUs. In principle the situation is the same for DirectX 11 and under, because that is how a specification and an API work; the only reason it is skewed and broken with DirectX 11 as an API is the differences in abstraction implementation, which are independent of the API spec.

but as long as the cards are running to spec, and the developer sees the cards in terms of a few extra capabilities and relative performance metrics, then they should be able to produce a multi-GPU renderer that can use underlying hardware from any manufacturer no differently to using cards from the same manufacturer, as long as the manufacturer has not artificially placed locks in the driver.

No, entirely wrong. With DX11, games ship so awfully broken that driver teams have to write huge swathes of code to intercept what the game is doing and actually make it work on their hardware - nothing to do with the drivers being out of spec, entirely to do with game developers writing poor code.

you are right in as much as that is how it SHOULD work, but in practice game developers get more wrong than they get right in terms of what code will actually work in the way they want or expect it to

I can't see that drastically changing because of a new API; by the sounds of things Nvidia and AMD will have to work directly with the game devs and show them what they are doing wrong rather than catching it in the drivers

to do multi-non-like-card rendering, e.g. split frame, the game devs will have to check for each card, know what it is capable of in terms of throughput, and code the game to slice the frame up in such a way that it efficiently utilises each GPU without causing a bottleneck... half the time they can't even get standard AFR working in time for launch

Edit: that is to say that publishers are often the biggest problem, pushing deadlines on developers who then have to rush something out that works, even if it doesn't work well
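On the slicing point: one way around having to know each card's throughput up front (purely a sketch of the idea, not anything the API gives you) is to start with an even split and let last frame's timings nudge the ratio:

```cpp
// Sketch: adjust each GPU's share of the frame based on how long it took
// last frame, so a mismatched pair drifts towards a balanced split.
#include <vector>
#include <cstdio>

void Rebalance(std::vector<float>& shares, const std::vector<float>& frameTimesMs)
{
    float total = 0.0f;
    // A GPU that finished quickly earns a larger share next frame.
    for (size_t i = 0; i < shares.size(); ++i)
    {
        shares[i] /= frameTimesMs[i];
        total += shares[i];
    }
    for (float& s : shares) s /= total; // renormalise so the shares sum to 1
}

int main()
{
    std::vector<float> shares = { 0.5f, 0.5f };  // start with an even split
    std::vector<float> times  = { 8.0f, 16.0f }; // ms: one card is twice as fast
    Rebalance(shares, times);
    std::printf("new split: %.2f / %.2f\n", shares[0], shares[1]); // ~0.67 / 0.33
}
```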
 