DX12 Multiadapter Feature

Anyone else see that this means you could use an AMD and an Nvidia card together? Or that we may no longer be limited to the VRAM of a single card when using CrossFire/SLI?

http://www.eteknix.com/microsoft-talks-using-amd-nvidia-gpus-simultaneously-directx-12

Whether you get to use all of the RAM on every card you have depends purely on how the multi-GPU setup is implemented in the software. If they are just using AFR (alternate frame rendering, where each GPU renders complete frames in turn), the situation will remain the same as today, with the memory being mirrored.

Otherwise, the memory will be used in whatever way suits the multi-GPU rendering method they choose.
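A rough way to picture it (a plain C++ sketch; the Gpu type and asset list are made up purely for illustration):

#include <cstdio>
#include <vector>

// Illustration only: under AFR every GPU renders whole frames in turn,
// so each card needs its own full copy of the assets (mirrored VRAM).
struct Gpu {
    const char* name;
    std::vector<int> vram;  // stands in for the card's local asset copies
};

int main() {
    std::vector<int> assets = {1, 2, 3};      // textures, meshes, ...
    Gpu gpus[2] = {{"GPU0"}, {"GPU1"}};

    // AFR: mirror ALL assets onto every card -- two 4GB cards still
    // give you an effective 4GB of usable VRAM.
    for (auto& gpu : gpus)
        gpu.vram = assets;

    // Frames are handed out round-robin between the cards.
    for (int frame = 0; frame < 6; ++frame)
        std::printf("frame %d -> %s\n", frame, gpus[frame % 2].name);

    // A split-frame or tiled scheme could instead partition the working
    // set between the cards, which is what would let VRAM add up.
}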
 
Quoting from the article linked in the OP:

[image: benchmark chart from the article]

That's horrible.

Five more FPS, but nearly double the latency: the time from the start of drawing frame 1 until frame 1 is completed on the Intel IGP.

I'd rather not bother!
 
It would make more sense if Intel made mainstream i7 CPUs without the integrated graphics and sold them cheaper.

Then whatever money the user saved on the CPU could be put towards a better GPU.
 
They kinda do with those strange consumer Xeons. I for one love the idea of both. Offloading onto the integrated GPU would be amazing if done correctly (mostly for GPGPU, particle systems, physics, etc.), but at the same time cheap CPUs with the same performance mean you can buy a better GPU!
 
So I can have Nvidia and AMD in the same PC?

[evil laugh image]

As much as I love this idea too, we both know Nvidia (and most likely AMD too) absolutely hates it. Remember how people used to run an AMD card alongside an Nvidia card just to offload PhysX onto it? Nvidia soon put a stop to that, and the hacked drivers that allow it barely work.
 
Would be interesting this time around. Driver updates are a thing of the past with the new APIs, so unless they add something to prevent this early on, people could just use old drivers.

Also, the drivers are a lot simpler; if they are open source as well, we could see people modify them. Valve has made the first Intel driver for Vulkan, so I guess anyone with the SDK and enough knowledge could do the same.
 
Instead of rendering frames, couldn't the iGPU do something else to improve a game's performance, such as shadow or lighting rendering? The same way Microsoft was going to use its servers/cloud computing for lighting on the Xbox One, although that approach would require low latency and high bandwidth on the user's internet connection.
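In DX12 that would be up to the engine: it can see every adapter in the system and create a device on each. A minimal, untested sketch of finding the other adapters with DXGI (error handling omitted; link with d3d12.lib and dxgi.lib):

#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND;
         ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue;  // skip the software (WARP) adapter

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(),
                                        D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            std::printf("adapter %u: %ls\n", i, desc.Description);
        // A second device created this way could record shadow or
        // lighting work while the primary GPU renders the main frame.
    }
}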
 
Of the two, Nvidia is probably going to like it the least. They've already made a driver that locks you out of LameWorks if you have an AMD card installed.
 
Those drivers will still have to support DX11, so no, driver updates aren't quite over and done with yet. It'll be years before the majority of games are on the modern APIs.
 
The thing with DirectX 12 is that, because the API lets you get closer to the hardware, features like this will probably need to be coded for explicitly. Developers have enough trouble getting two graphics cards from the same company working together properly, let alone GPUs from different manufacturers, and that's before considering that Nvidia and AMD may not want to play ball with their drivers.

Bottom line: it's a nice idea, but I doubt it will see much use except in a few niche titles.
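For what it's worth, "coding for it" in D3D12 looks roughly like this: the app creates a heap both devices can see and copies results through it. An untested sketch based on the documented cross-adapter heap flags (the ShareBuffer helper is made up; devA/devB are assumed to already exist):

#include <d3d12.h>

// Share a buffer between two ID3D12Devices (one per adapter) via a
// cross-adapter heap. Error handling omitted for brevity.
void ShareBuffer(ID3D12Device* devA, ID3D12Device* devB, UINT64 size)
{
    // 1. Heap on adapter A, flagged as shareable across adapters.
    D3D12_HEAP_DESC heapDesc = {};
    heapDesc.SizeInBytes = size;
    heapDesc.Properties.Type = D3D12_HEAP_TYPE_DEFAULT;
    heapDesc.Flags = D3D12_HEAP_FLAG_SHARED |
                     D3D12_HEAP_FLAG_SHARED_CROSS_ADAPTER;
    ID3D12Heap* heapA = nullptr;
    devA->CreateHeap(&heapDesc, IID_PPV_ARGS(&heapA));

    // 2. Export it as an NT handle and open it on adapter B.
    HANDLE handle = nullptr;
    devA->CreateSharedHandle(heapA, nullptr, GENERIC_ALL, nullptr,
                             &handle);
    ID3D12Heap* heapB = nullptr;
    devB->OpenSharedHandle(handle, IID_PPV_ARGS(&heapB));
    CloseHandle(handle);

    // 3. Place an identical buffer in the heap on both devices;
    //    copies into it on A become visible to reads on B.
    D3D12_RESOURCE_DESC bufDesc = {};
    bufDesc.Dimension = D3D12_RESOURCE_DIMENSION_BUFFER;
    bufDesc.Width = size;
    bufDesc.Height = 1;
    bufDesc.DepthOrArraySize = 1;
    bufDesc.MipLevels = 1;
    bufDesc.SampleDesc.Count = 1;
    bufDesc.Layout = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;
    bufDesc.Flags = D3D12_RESOURCE_FLAG_ALLOW_CROSS_ADAPTER;

    ID3D12Resource *bufA = nullptr, *bufB = nullptr;
    devA->CreatePlacedResource(heapA, 0, &bufDesc,
                               D3D12_RESOURCE_STATE_COMMON, nullptr,
                               IID_PPV_ARGS(&bufA));
    devB->CreatePlacedResource(heapB, 0, &bufDesc,
                               D3D12_RESOURCE_STATE_COMMON, nullptr,
                               IID_PPV_ARGS(&bufB));
}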
 
^^ Developers rarely program for multi-GPU specifically. Most standard API features, outside of post-processing effects and anything that needs access to previous frame buffers, just work with varying degrees of efficiency. At most, when it comes to multi-GPU support, developers will cut a feature, use an alternative, or add a workaround for whatever is preventing multi-GPU from working properly.

When a game doesn't work with SLI/CrossFire, it's generally not because the developer has been lazy and not programmed multi-GPU support.

What I see this feature being used for is things like threaded hand-off of GUI/HUD building, so that a lower-powered GPU can spend all its time per frame building that element, saving a few percent on the main GPU.
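The synchronisation for that kind of hand-off would presumably use a shared fence, something like this untested sketch (the helper name is made up; the two devices and their queues are assumed to already exist):

#include <d3d12.h>

// Fence created on device A, shared to device B, so the main GPU's
// queue can wait for the helper GPU's HUD pass before compositing.
void SyncAcrossAdapters(ID3D12Device* devA, ID3D12CommandQueue* queueA,
                        ID3D12Device* devB, ID3D12CommandQueue* queueB)
{
    ID3D12Fence* fenceA = nullptr;
    devA->CreateFence(0,
                      D3D12_FENCE_FLAG_SHARED |
                      D3D12_FENCE_FLAG_SHARED_CROSS_ADAPTER,
                      IID_PPV_ARGS(&fenceA));

    // Export the fence as an NT handle and open it on device B.
    HANDLE handle = nullptr;
    devA->CreateSharedHandle(fenceA, nullptr, GENERIC_ALL, nullptr,
                             &handle);
    ID3D12Fence* fenceB = nullptr;
    devB->OpenSharedHandle(handle, IID_PPV_ARGS(&fenceB));
    CloseHandle(handle);

    // Helper GPU signals when its HUD pass is done...
    queueB->Signal(fenceB, 1);
    // ...and the main GPU's queue waits before using the result.
    queueA->Wait(fenceA, 1);
}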
 
Well, if it's a "feature" of D3D12, I guess that means there are functions you can feed your post-effects into, or at least some level of abstraction.

It will require a DX12-capable IGP, of course.

Rroff, what's going to happen with middleware like SpeedTree, etc.? Will it be able to work with D3D12?
 
Yeah, my first comment was about how multi-GPU support works now. As things are, there is very little a game developer can do to purposefully sit down and "program" for multi-GPU support; there are things they can do to maximise compatibility, usually by avoiding certain features, etc., but the rest comes down to the GPU vendor. DX12 will change that to a degree, though not entirely.
 