The state of Multi GPU (SLI/Crossfire) in DX12

You would think game devs would put multi-GPU support in just for the free press from the review sites that publish SLI/Crossfire reviews. That is a lot of free advertising: even if it's only for 1% of customers, the game gets massive exposure in the enthusiast community, which seems well worth it for AAA titles. Look at all the press DX12 games are getting; it seems worth the development cost.

Yet it hasn't been done. What does that tell you?
 
Summary Tony?

Talking about how the new Duo card is targeted at developers, and how AMD is trying to get it, along with CryEngine-based material, into universities etc., so that developers get used to building products that utilise multi-GPU capabilities from the grassroots up, rather than just pushing the game through generic AFR/SFR.
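For anyone who hasn't looked at it: the reason this needs teaching is that DX12's explicit multi-adapter model hands each GPU to the engine as a separate device, instead of the driver hiding them behind AFR. A minimal sketch of step one, using standard D3D12/DXGI calls (the function name is mine):

```cpp
// Enumerate hardware adapters and create a D3D12 device on each. Unlike
// DX11 SLI/Crossfire, the driver does no AFR for you -- the engine has to
// schedule work across these devices itself.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

std::vector<ComPtr<ID3D12Device>> CreateDevicesForAllAdapters()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc{};
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip WARP / software rasterisers

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(),
                                        D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device);
    }
    return devices; // one ID3D12Device per physical GPU
}
```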
 
From a cursory glance (though I lack much experience with either), Vulkan looked like the better of the two worlds. With DX12 you have to deal with some really nasty memory management, with little alternative; in fact, the more I look at it, the more I wonder if it's not a massive misstep and, unless you are a John Carmack or a Tim Sweeney, more of a barrier than an enabler.
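To give a flavour of that memory management: in D3D12 you reserve heaps yourself and place resources into them at offsets you compute. A minimal sketch, assuming a default heap and leaving alignment and aliasing entirely to the caller (the helper name is mine):

```cpp
// The app computes its own offsets into a heap it created, instead of the
// driver allocating per resource. Buffers need 64 KiB placement alignment,
// and aliased resources in the same heap must never be live at once.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

ComPtr<ID3D12Resource> CreatePlacedBuffer(ID3D12Device* device,
                                          ID3D12Heap* heap,
                                          UINT64 heapOffset,
                                          UINT64 sizeInBytes)
{
    D3D12_RESOURCE_DESC desc{};
    desc.Dimension        = D3D12_RESOURCE_DIMENSION_BUFFER;
    desc.Width            = sizeInBytes;
    desc.Height           = 1;
    desc.DepthOrArraySize = 1;
    desc.MipLevels        = 1;
    desc.Format           = DXGI_FORMAT_UNKNOWN;
    desc.SampleDesc.Count = 1;
    desc.Layout           = D3D12_TEXTURE_LAYOUT_ROW_MAJOR; // required for buffers

    ComPtr<ID3D12Resource> buffer;
    device->CreatePlacedResource(heap, heapOffset, &desc,
                                 D3D12_RESOURCE_STATE_COMMON, // default heap
                                 nullptr, IID_PPV_ARGS(&buffer));
    return buffer;
}
```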

It's my opinion that DX12 is largely a step backwards for most developers, the Carmacks of the world excepted. I'm more of an OpenGL developer, and I know the community there is hesitant enough about Vulkan that people are already talking about Vulkan-to-OpenGL wrappers or new higher-level API wrappers. I can imagine something similar happening with DX12: a mid-level API wrapper appears so developers can forget the excruciating memory management but still make use of some of the nicer things, like properly multi-threaded command recording.
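Something like this, say: a tiny allocator that hides heap creation and sub-allocation behind one call, so the application never touches placement maths. Purely a sketch; LinearAllocator is a made-up name, and a real wrapper would also recycle memory per frame:

```cpp
// Bump allocator over one D3D12 heap: the wrapper owns the heap and hands
// back aligned offsets, pairing naturally with CreatePlacedBuffer above.
#include <d3d12.h>
#include <wrl/client.h>
#include <stdexcept>

using Microsoft::WRL::ComPtr;

class LinearAllocator
{
public:
    LinearAllocator(ID3D12Device* device, UINT64 heapSize) : m_size(heapSize)
    {
        D3D12_HEAP_DESC desc{};
        desc.SizeInBytes     = heapSize;
        desc.Properties.Type = D3D12_HEAP_TYPE_DEFAULT;
        desc.Flags           = D3D12_HEAP_FLAG_ALLOW_ONLY_BUFFERS;
        if (FAILED(device->CreateHeap(&desc, IID_PPV_ARGS(&m_heap))))
            throw std::runtime_error("CreateHeap failed");
    }

    // Round up to the 64 KiB placement alignment and advance the cursor.
    UINT64 Allocate(UINT64 bytes)
    {
        const UINT64 align  = D3D12_DEFAULT_RESOURCE_PLACEMENT_ALIGNMENT;
        const UINT64 offset = (m_next + align - 1) & ~(align - 1);
        if (offset + bytes > m_size)
            throw std::runtime_error("heap exhausted");
        m_next = offset + bytes;
        return offset;
    }

    ID3D12Heap* Heap() const { return m_heap.Get(); }

private:
    ComPtr<ID3D12Heap> m_heap;
    UINT64 m_size = 0;
    UINT64 m_next = 0;
};
```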


At the end of the day you can have the complexity in the DX API and driver stack, or add that complexity to the game engine, giving the developer full control but also more responsibility. The latter sounds nice, but developers really don't want to worry too much about the differences between GCN 1.1 and 1.2, or about how a particular fragment shader behaves on Maxwell vs Kepler. Letting the IHVs worry about architecture-specific optimization is more intuitive.
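To make the "more responsibility" concrete: once per-architecture tuning moves into the engine, the engine itself ends up keying behaviour off the adapter. A toy illustration (the vendor IDs are the standard PCI ones; ShaderVariant and the tuning choices are hypothetical):

```cpp
// The kind of branching engines inherit when the driver no longer papers
// over architectural differences. The comments mark the open questions.
#include <dxgi.h>
#include <cstdint>

enum class ShaderVariant { Generic, TunedForGCN, TunedForMaxwell };

ShaderVariant PickVariant(const DXGI_ADAPTER_DESC& desc)
{
    constexpr uint32_t kAmd    = 0x1002;
    constexpr uint32_t kNvidia = 0x10DE;

    switch (desc.VendorId)
    {
    case kAmd:    return ShaderVariant::TunedForGCN;     // but which GCN rev?
    case kNvidia: return ShaderVariant::TunedForMaxwell; // or Kepler? Fermi?
    default:      return ShaderVariant::Generic;
    }
}
```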




As I said before, when Gears of War worked really well on some GCN revisions and terribly on others, that was just a classic case of developers not optimizing and bug fixing for different architectures. It wasn't some "Nvidia over-tessellating and bribing developers" conspiracy, just a black-and-white case of something that clearly didn't work well on GCN 1.2, which let the 390X beat the Fury X. Under DX11 that is far less likely to happen, because the driver and API stack together largely remove the developer's need to optimize for different architectures.
 
Summary Tony?

I think you can easily read between the lines that AMD is planning to release multiple multi-GPU systems, even from smaller dies. The way to get multi-GPU really adopted by developers is to make it mainstream.

Let's be realistic: if this new process tech gives us 2.5x performance on the biggest dies, how far is that going to carry us? 60 fps at 4K, or 5K? Sure, for the current generation of games, but future games will have even more demanding graphics. So I think aiming to make multi-GPU mainstream is a good goal, and just as it happened on CPUs, we need it to happen on GPUs as well. Performance requirements are rising fast compared to how slowly new process nodes arrive.
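The back-of-the-envelope pixel maths behind that question, for anyone who wants the numbers (this assumes per-pixel cost stays flat, which future games won't grant us):

```cpp
// Resolution alone outpaces one process-node jump.
#include <cstdio>

int main()
{
    const double px1080 = 1920.0 * 1080.0; // ~2.07 Mpx
    const double px4k   = 3840.0 * 2160.0; // ~8.29 Mpx
    const double px5k   = 5120.0 * 2880.0; // ~14.7 Mpx

    std::printf("4K/1080p: %.1fx, 5K/1080p: %.1fx, 5K/4K: %.1fx\n",
                px4k / px1080, px5k / px1080, px5k / px4k);
    // Prints 4.0x, 7.1x, 1.8x: a single 2.5x GPU uplift covers the 4K->5K
    // pixel growth only if games don't also raise the cost per pixel.
    return 0;
}
```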
 
Let's face it: I've been gaming for about 15 years now, and to this date there has never been ONE card that can push the latest, plushest game at the best resolutions and the best settings. Ever. With the underwhelming reveals coming out regarding Polaris and Pascal, I don't think that's going to change any time soon.

Ergo, multi-GPU is here to stay; it just needs to be done better. Come on, geeks and boffins, sort it the **** out.
 
I just noticed this
[attached screenshot]


We're supposed to turn SLI off!
 