
DX12 Multiadapter Feature

http://blogs.msdn.com/b/directx/archive/2015/05/01/directx-12-multiadapter-lighting-up-dormant-silicon-and-making-it-work-for-you.aspx

Are you one of the millions of PC users with a laptop or a desktop system with an integrated GPU as well as a discrete GPU? Before Windows 10 and DirectX 12, all the performance potential from the second GPU went unused. With DirectX 12 and Windows 10, application developers can use every GPU on the system simultaneously!

Are you an elite power gamer with multiple graphics cards? Well, by giving the applications direct control over all of the hardware on the system, applications can optimize more effectively for the ultra-high end, squeezing even more performance out of the most powerful systems available today and in the future.

We’re calling this DX12 feature “Multiadapter”, and applications can use it to unlock all that silicon that would otherwise be sitting dormant doing nothing.

At //build 2015 we both announced the Multiadapter feature and showed several examples of how it can be used.
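For anyone wanting a feel for what "explicit Multiadapter" means in practice: the starting point is simply that D3D12 lets an application create an independent device on every adapter DXGI can enumerate, iGPU and dGPU alike. A minimal sketch (my own illustration against the Windows 10 SDK headers, not code from the article):

```cpp
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>
#include <vector>

using Microsoft::WRL::ComPtr;

int main()
{
    // A DXGI factory enumerates every adapter in the system.
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return 1;

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;

    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);

        // Skip the software rasterizer; we only want real silicon.
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue;

        // One independent D3D12 device per adapter, each with its own
        // queues, memory and command lists - that's the whole premise.
        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
        {
            wprintf(L"Adapter %u: %s\n", i, desc.Description);
            devices.push_back(device);
        }
    }
    return 0;
}
```

From there it's entirely up to the application which work goes to which device.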



This sounds awesome - all these mainstream Intel CPUs that have an increasingly large amount of the die devoted to a 'useless' iGPU finally have a use for gamers!

I'm quite keen to see how Broadwell for desktop will perform with its Iris Pro graphics and 128MB of eDRAM, and how much of an improvement Skylake's iGPU will bring when coupled with a decent dedicated GPU.

Interesting times ahead :)
 
This is a DirectX 12 feature, built into the API. I'm reserving judgement until we see a thorough review/benchmark of it, though it's encouraging that it comes from Microsoft and is built into DX12.

The large iGPU built into the mainstream i5s, i7s etc. is literally sat there doing nothing. If DX12 can use it automatically to improve performance, then it's a win-win scenario.

Intel have such a huge market share that I can understand why Microsoft decided to build this into DX12 - a very large percentage of the market will benefit from it.
 
I thought LucidLogix was the ability to hook the monitor up to either the iGPU's or the dedicated GPU's output and have the other GPU's output somehow routed through it - the idea being power saving more than performance. This feature is different in that it's actually a native part of DirectX, so rather than being some 3rd-party bodge that never took off (and wasn't for performance reasons anyway), this should actually be utilised to some effect.

How much of an effect it'll really make in the real world remains to be seen, but it's good that there could be a benefit from the iGPU for something other than encoding.
 
Sure, it'll add a little bit of performance, but it'll also add a frame of latency (which they conveniently didn't mention), which may matter to some.
 
This is a DirectX 12 feature, built into the API. I'm reserving judgement until we see a thorough review/benchmark of it, though it's encouraging that it comes from Microsoft and is built into DX12.

The large iGPU built into the mainstream i5s, i7s etc. is literally sat there doing nothing. If DX12 can use it automatically to improve performance, then it's a win-win scenario.

Intel have such a huge market share that I can understand why Microsoft decided to build this into DX12 - a very large percentage of the market will benefit from it.

We aren't jumping on you Dave, thanks for the post.

This was partially tried back in the 3770K era but had various issues if I recall correctly.

It won't give you bigger gains than what Microsoft will be doing with the CPU side of DX12, as the people using this will be on 1080p monitors or lower (laptops being mostly in the 1366x768 range), which is even more CPU bound.

To add to all this, SLI/CFX has been around for what, 10 years now? I doubt something this new is going to be perfected that easily.
 
A gain of 3.8 frames, for what will no doubt come with some of the usual downsides of multi-GPU solutions.

It will be interesting to see how it works when complete, however.

I bet it will also make some people's overclocks unstable when they have to dissipate that iGPU heat they never used before :D
 
I thought LucidLogix was the ability to hook the monitor up to either the iGPU's or the dedicated GPU's output and have the other GPU's output somehow routed through it - the idea being power saving more than performance. This feature is different in that it's actually a native part of DirectX, so rather than being some 3rd-party bodge that never took off (and wasn't for performance reasons anyway), this should actually be utilised to some effect.

How much of an effect it'll really make in the real world remains to be seen, but it's good that there could be a benefit from the iGPU for something other than encoding.

"If you pair Sandy Bridge with a discrete GPU on the desktop, you lose the ability to use one of the CPU's biggest features.

Intel will address the overclocking/processor graphics exclusion through the upcoming Z68 chipset, however that doesn't solve the problem of not being able to use Quick Sync if you have a discrete GPU installed. Intel originally suggested using multiple monitors with one hooked up to the motherboard's video out and the other hooked up to your discrete GPU to maintain Quick Sync support, however that's hardly elegant. At CES this year we were shown a better alternative from none other than Lucid.

Remember the basis of how Hydra worked: intercept API calls and dynamically load balance them across multiple GPUs. In the case of Sandy Bridge, we don't need load balancing - we just need to send games to a discrete GPU and video decoding/encoding to the processor's GPU. This is what Lucid's latest technology, Virtu, does."

http://www.anandtech.com/show/4199/...egrateddiscrete-gpu-on-sandy-bridge-platforms
 
I'm sure this will not be anything to scream about. Many enthusiasts disable their iGPU, as some believe it improves stability when it comes to TDP and power usage for their overclocks; disabling it also reduces heat. Actually having it in use in games will increase the CPU's overall package heat output somewhat, and could potentially cause instability with those high overclocks people get, or even mild ones with bad chips.

Will this feature be better than a stable overclock on your CPU? The best case looks to be about 5FPS, with the possibility of frame latency as frames are passed to the iGPU to be rendered.

I don't really have high hopes for this. It would be nice if it was a significant performance boost, but it doesn't look that way.

It would be better if they could figure out how to use the iGPU in other ways, for compute tasks and such.
 
Whilst I am for any technology such as this, I think in reality the best practical use for this (at this moment in time) is crunching for things like BOINC and digital currency.
 
This is quite a bit different in approach to Hydra (which had to hack into the API and guess at what it could pull apart to get an increase in performance). I'm assuming it's entirely abstracted at API level, and really seeing gains from it will require developers to know what they can hand off to lower-performance sub-systems and what they can't, to get the best performance.

It shouldn't have most of the issues of multi-GPU, as it will mostly involve all the rendering-capable devices working together on one frame, which gives the best compatibility (and smoothness) but not the most efficient possible performance.
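To make that concrete: the explicit hand-off between devices is done with cross-adapter shared fences. A rough sketch of the plumbing (my own guess at a typical setup, not from the article - devA/devB are the dGPU and iGPU devices, queueA/queueB their command queues):

```cpp
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Share one fence between two D3D12 devices so the dGPU (devA) can signal
// frame completion and the iGPU (devB) can wait on it before post-processing.
HRESULT CreateCrossAdapterFence(ID3D12Device* devA, ID3D12Device* devB,
                                ComPtr<ID3D12Fence>& fenceA,
                                ComPtr<ID3D12Fence>& fenceB)
{
    HRESULT hr = devA->CreateFence(
        0, D3D12_FENCE_FLAG_SHARED | D3D12_FENCE_FLAG_SHARED_CROSS_ADAPTER,
        IID_PPV_ARGS(&fenceA));
    if (FAILED(hr)) return hr;

    HANDLE handle = nullptr;
    hr = devA->CreateSharedHandle(fenceA.Get(), nullptr, GENERIC_ALL,
                                  nullptr, &handle);
    if (FAILED(hr)) return hr;

    // The second device opens the same fence through the shared NT handle.
    hr = devB->OpenSharedHandle(handle, IID_PPV_ARGS(&fenceB));
    CloseHandle(handle);
    return hr;
}

// Per frame (queues assumed): the dGPU signals when its rendering is done,
// and the iGPU waits on that value before starting its own pass:
//   queueA->Signal(fenceA.Get(), frameIndex);
//   queueB->Wait(fenceB.Get(), frameIndex);
```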
 
This also confirms that AMD and nVidia cards will work perfectly fine together in a multi-GPU setup when using DirectX 12, considering this nVidia + Intel GPU setup appears to work fine.
 
I really do hope this catches on. It could make APUs a lot more viable, adding a boost to the discrete GPU.

You can already Crossfire AMD's APUs with specific cards.

I can't quite get my head around how this would work - would you have to sync 3D settings between the cards? What if your iGPU supported fewer DX12 features than the dedicated GPU? What about latency issues?

I guess if DX12 is stripped right back to the metal, which it isn't, you'd just use the compute performance of each card.
 
It's worth noting that at this stage, Microsoft only give one example of how the iGPU will be utilised - post-processing.

Quoting from the article linked in OP:

Virtually every game out there makes use of postprocessing to make your favorite games visually impressive; but that postprocessing work doesn’t come free. By offloading some of the postprocessing work to a second GPU, the first GPU is freed up to start on the next frame before it would otherwise have been able to, improving your overall framerate

[Diagram from the article: frame timeline showing the iGPU handling post-processing while two discrete GPUs in SLI begin work on the next frame]


The image above shows how this post-processing work on the iGPU would operate when coupled with two NVIDIA GPUs in SLI. It shouldn't affect the direct pipeline of the GPU, so we shouldn't see any frame-pacing or stuttering issues with this implementation. It also shows that the feature is being developed with multiple dedicated GPUs in mind, as well as single cards, so it seems a comprehensive solution.

As the article mentions, almost every game utilises post-processing, so this really should be a big win-win for all those who have an iGPU doing nothing.
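For the curious, the actual frame hand-over presumably goes through a cross-adapter shared heap, so the dGPU can render into a surface the iGPU then reads for its post-processing pass. A sketch of that part (again my own illustration, with an arbitrary heap size, not code from the article):

```cpp
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Allocate a heap on the dGPU (devA) that the iGPU (devB) can also see,
// so the rendered frame can be handed over for post-processing.
HRESULT CreateCrossAdapterHeap(ID3D12Device* devA, ID3D12Device* devB,
                               ComPtr<ID3D12Heap>& heapA,
                               ComPtr<ID3D12Heap>& heapB)
{
    D3D12_HEAP_DESC desc = {};
    desc.SizeInBytes = 64ull * 1024 * 1024;       // illustrative size only
    desc.Properties.Type = D3D12_HEAP_TYPE_DEFAULT;
    desc.Flags = D3D12_HEAP_FLAG_SHARED | D3D12_HEAP_FLAG_SHARED_CROSS_ADAPTER;

    HRESULT hr = devA->CreateHeap(&desc, IID_PPV_ARGS(&heapA));
    if (FAILED(hr)) return hr;

    HANDLE handle = nullptr;
    hr = devA->CreateSharedHandle(heapA.Get(), nullptr, GENERIC_ALL,
                                  nullptr, &handle);
    if (FAILED(hr)) return hr;

    hr = devB->OpenSharedHandle(handle, IID_PPV_ARGS(&heapB));
    CloseHandle(handle);
    return hr;
}

// Each device then places a texture created with
// D3D12_RESOURCE_FLAG_ALLOW_CROSS_ADAPTER into its view of the heap via
// CreatePlacedResource; the dGPU renders into it and the iGPU reads it.
```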

It should also be noted that Microsoft are no doubt working with NVIDIA to implement this feature, so it goes without saying that NVIDIA's drivers will be top-notch and will take advantage of it out of the box when it's released.

*Conspiracy hat on* - I wonder if Intel pressured Microsoft into adding this feature to DX12. It will only increase the already massive gap between AMD and Intel CPUs, since the high-end FX CPUs from AMD don't have an iGPU at all, nor do the Zen CPUs due out next year. Hmm, maybe I'm being paranoid, though it seems quite an ingenious way to further handicap the competition to me :P
 
This is great, but the latency between GPU and CPU is really large, so making this feature not suck will be hard if they just do a "standard" Crossfire-style thing. Most likely they will just offload post-FX, which would be a lot easier.

I can't quite get my head around how this would work - would you have to sync 3D settings between the cards? What if your iGPU supported fewer DX12 features than the dedicated GPU? What about latency issues?

Pretty sure in this case it would just do what most multi-device solutions do: lowest common denominator.
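That's queryable per device too, so an engine can work out the lowest common denominator itself - something along these lines (illustrative only; picks the lower resource-binding tier of two devices):

```cpp
#include <d3d12.h>

// Ask each device what it supports and combine conservatively - only rely
// on a tier or feature if every adapter in the group reports it.
D3D12_RESOURCE_BINDING_TIER CommonBindingTier(ID3D12Device* devA,
                                              ID3D12Device* devB)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS a = {}, b = {};
    devA->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &a, sizeof(a));
    devB->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &b, sizeof(b));
    return a.ResourceBindingTier < b.ResourceBindingTier
               ? a.ResourceBindingTier
               : b.ResourceBindingTier;
}
```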
 