Multi-GPU Vendor Support Confirmed

Not sure if this support agent is just clueless but ...


[Screenshot: Microsoft tech support chat in which the agent ("Angel") confirms DX12 will support AMD and Nvidia graphics cards used together]

Source: http://www.kitguru.net/components/g...hv-amd-nvidia-multi-gpu-tech-support-by-dx12/
 
My understanding from the previous announcements/threads is that DX12 will support multi-GPU across different vendors, in much the same way that Vista allowed AMD, Nvidia and Intel GPUs to be used together in a system. What I think is going to be the sticking point is getting AMD, Nvidia and Intel to play nicely together and write their drivers so they actually work with each other. Also, DX12 and Mantle put responsibility for graphics performance much more in the hands of game developers rather than mostly in the hands of DX/graphics drivers, so we are then also relying on the game developer writing code that will work on multi-brand GPUs.

In short, from Microsoft's point of view, DX12 will allow multi-GPU from different vendors to work together, but I really don't expect to ever see it actually working outside of a few impressive tech demos. If a game actually supports it, I will be very, very surprised (and pleased).
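For what it's worth, the plumbing for this is already visible at the API level: DXGI enumerates every adapter in the machine regardless of vendor, and D3D12 will happily create a device on each one. A minimal sketch (standard DXGI/D3D12 calls; error handling trimmed, and how work is then scheduled across the devices is entirely the application's problem, which is exactly the point above):

```cpp
// Sketch: enumerate all hardware adapters and create a D3D12 device on
// each one, whoever made it. Link against d3d12.lib and dxgi.lib.
#include <windows.h>
#include <dxgi1_4.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#include <vector>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<IDXGIFactory1> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;

    // EnumAdapters1 walks every adapter in the system -- AMD, Nvidia,
    // Intel iGPU -- DXGI does not care who the vendor is.
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND;
         ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip the software/WARP adapter

        // One device per physical adapter; nothing here stops the app
        // holding a device for each vendor's card at the same time.
        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(),
                                        D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
        {
            wprintf(L"Adapter %u: %s (vendor 0x%04X)\n",
                    i, desc.Description, desc.VendorId);
            devices.push_back(device);
        }
    }
}
```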
 
True. Basically MS have thrown the switch to 'on' for multi-GPU support and then stood back, saying: right, I have turned on the light, it's now up to you three to play nicely together, but if it all ends in tears don't come running back to us. :)
 
Can I ask why anyone would want to do this? I'd much rather reduce the risk of compatibility issues than introduce another variable.
 
I have to agree. Whilst it would be cool to run nVidia and AMD cards alongside each other, I can imagine many threads with accusations of nVidia purposefully crippling performance on such-and-such a game, etc.

Angel looks like she knows.

lol :D
 
With Mantle, one of the features is that when running an APU + a discrete GPU, some of the workload can be assigned to the APU's iGPU, thus adding to the performance. It hasn't been used yet, but as DX12 is heavily based upon it, we may see it in the future.

The Intel iGPU could probably be used in the same way.
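Spotting the iGPU next to a discrete card is at least easy: the PCI vendor IDs in the DXGI adapter descriptor are fixed (0x8086 Intel, 0x1002 AMD, 0x10DE Nvidia). A rough sketch of a helper an application might use (the function name is mine, not part of any API):

```cpp
#include <windows.h>
#include <dxgi1_2.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Hypothetical helper: find the first hardware adapter from a given
// PCI vendor, e.g. 0x8086 to locate an otherwise idle Intel iGPU.
ComPtr<IDXGIAdapter1> FindAdapterByVendor(IDXGIFactory1* factory,
                                          UINT vendorId)
{
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND;
         ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.VendorId == vendorId &&
            !(desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE))
            return adapter; // e.g. hand off post-processing to this one
    }
    return nullptr; // no adapter from that vendor present
}
```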
 
So maybe a benefit for the laptop market, but not really worthwhile for discrete-GPU desktop users.

I sure as hell wouldn't want to be running two different vendors' cards in my rig. Like I said before, I don't see the point, as more than likely it will introduce compatibility issues.
 
Why not offload less demanding tasks to the iGPU? It's sitting there doing nothing when there's a discrete card in play. Seems worthwhile to me.
 
I suspect DX12 will have some degree of abstraction where things are less fixed to any one rendering device between the point you begin a frame and the point you end it, so some work could be handed off to different rendering devices and the results composited into the final framebuffer.
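D3D12 does expose machinery along these lines: heaps can be flagged as shareable across adapters, so one device renders into a buffer that another device composites from. A sketch of the hand-off under that assumption (device creation, queues, fences and resource placement all elided; flag and method names as in the public D3D12 headers):

```cpp
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Sketch: share a heap between two devices (e.g. one per vendor) so GPU A
// can render into memory GPU B reads back for compositing. Sizes and
// error handling are placeholder-level only.
HRESULT ShareHeapAcrossAdapters(ID3D12Device* deviceA,
                                ID3D12Device* deviceB,
                                UINT64 sizeInBytes,
                                ComPtr<ID3D12Heap>& heapA,
                                ComPtr<ID3D12Heap>& heapB)
{
    D3D12_HEAP_DESC desc = {};
    desc.SizeInBytes = sizeInBytes;
    desc.Properties.Type = D3D12_HEAP_TYPE_DEFAULT;
    desc.Alignment = D3D12_DEFAULT_RESOURCE_PLACEMENT_ALIGNMENT;
    // The flags that make the heap visible to a second adapter.
    desc.Flags = D3D12_HEAP_FLAG_SHARED |
                 D3D12_HEAP_FLAG_SHARED_CROSS_ADAPTER;

    HRESULT hr = deviceA->CreateHeap(&desc, IID_PPV_ARGS(&heapA));
    if (FAILED(hr)) return hr;

    // Export the heap from device A, then import it into device B.
    HANDLE shared = nullptr;
    hr = deviceA->CreateSharedHandle(heapA.Get(), nullptr,
                                     GENERIC_ALL, nullptr, &shared);
    if (FAILED(hr)) return hr;

    hr = deviceB->OpenSharedHandle(shared, IID_PPV_ARGS(&heapB));
    CloseHandle(shared);
    // Each device then places a resource in its view of the heap
    // (CreatePlacedResource with the ALLOW_CROSS_ADAPTER resource flag
    // and row-major layout), with fences coordinating reads and writes.
    return hr;
}
```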
 
Given that you can't freely use GPUs from even a single vendor in multi-GPU configurations, it seems like a stretch to assume different vendors' GPUs will work together as mGPU setups.
If you can't use a 980 and a 970 in SLI, or a Radeon 7770 and a 290X together, why would anyone expect to be able to use a 980 and a 290 together in any decent way?
 
Just been thinking: if you can use different makes of card, is there a limit on the number of GPUs? For example, could someone use 4 x 295X2s for a total of 8 GPUs? :eek::D
 
This is completely feasible; as long as both cards fully support the spec for low abstraction, the only difference visible to the programmer should be performance-based.

But something similar to what Lucid tried doing is far more feasible with the application directly driving the GPUs in the system: you could get less powerful cards helping with smaller sections of a scene if the application subdivides the rendering of geometry and lighting etc., instead of just AFR or SFR rendering whole portions or the entire scene.

"Sub Scene Rendering" is very interesting.
 
Given that you can't freely use GPUs from even a single vendor in multi-GPU configurations, it seems like a stretch to assume different vendors' GPUs will work together as mGPU setups.
If you can't use a 980 and a 970 in SLI, or a Radeon 7770 and a 290X together, why would anyone expect to be able to use a 980 and a 290 together in any decent way?

Theoretically, if they both support a standard feature in a standard way, there is no reason why an abstraction layer couldn't be utilised to tie their capabilities into one pool.
 
Am I the only one thinking that the "AMD" the support agent is referring to is the CPU rather than the graphics card? So basically she's saying a multi-Nvidia-GPU configuration on an AMD CPU platform? :p

Have to say, the original question is a bit misleading, to be frank lol
 
Given that you can't freely use GPUs from even a single vendor in multi-GPU configurations, it seems like a stretch to assume different vendors' GPUs will work together as mGPU setups.
If you can't use a 980 and a 970 in SLI, or a Radeon 7770 and a 290X together, why would anyone expect to be able to use a 980 and a 290 together in any decent way?

The reason for the current need for matched cards is that SLI and CrossFire work over the top of DirectX: there is no fine-grained control of the rendering between cards, and the easiest method when working with DX like this is to simply alternate rendering between devices (AFR).
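That is, classic AFR amounts to little more than ping-ponging whole frames between matched devices. A trivial illustration:

```cpp
#include <cstdint>
#include <cstdio>
#include <string>
#include <vector>

int main()
{
    // With no fine-grained control under old-style SLI/CrossFire, the
    // driver just alternates whole frames between matched GPUs (AFR).
    std::vector<std::string> gpus = {"GPU 0", "GPU 1"};
    for (uint64_t frame = 0; frame < 4; ++frame)
        std::printf("frame %llu -> %s\n",
                    static_cast<unsigned long long>(frame),
                    gpus[frame % gpus.size()].c_str());
}
```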
 
This is completely feasible; as long as both cards fully support the spec for low abstraction, the only difference visible to the programmer should be performance-based.

Ah yes, good point (you and Rroff). I hadn't thought it through enough.
But if it is possible, I can't see nVidia going for it somehow.
 
Unless nVidia caused their cards to completely disable themselves if the driver detected an AMD driver/dGPU in the system, which would be a downright disgusting move, even worse than what they did with disabling GPU PhysX when an AMD card is in the system.

With the application running everything and the driver just being a dumb API translation layer, there should be nothing nVidia can do to stop this if a dev team wanted to try it, unless they break the API spec in their own implementation and enforce some kind of dirty API calls that only work with their cards; but then it would not be fully DirectX compliant, which would invalidate the point of having a specification. Even this could be worked around if the application treated AMD and nVidia cards in different ways.
 