
The state of Multi GPU (SLI/Crossfire) in DX12

I'd agree, CF and SLI are one big massive bag of balls. The only thing I found CF Fury X good for was making my system look good; quite the price to pay for aesthetics.

I'm coming round to this opinion as well. I fear this will be my last SLI implementation, having had them since the 680 days. Will more than likely go for a high-spec Pascal or Polaris card next.
 
Lol, that's pretty much exactly my post from further up. AMD and Nvidia should either fully support it or bin it.
 
That is the whole problem: with DX12 it is up to the game devs to support multi-GPU setups.

AMD and Nvidia can put multi-GPU setups out there, but it's up to the game devs to make them run. :(
 
It's very hard to write multi-threaded code in general. It adds a lot of developer time. Even the simplest calls can be broken; all your favourite functions just no longer work safely (and, worse still, you get no warning unless you study the documentation in excruciating detail).
For example, I spent a couple of weeks getting a simple single-threaded app to be thread safe. Simple things like rand() no longer work correctly. It took me a few days to get a random number generator that was thread safe and worked appropriately with fixed seeds, and it still doesn't work as expected if you change the number of threads.

Caught me out before - I didn't expect rand to use a global variable (next), though it makes sense if you sit down and think about how the seed works.

Unfortunately that is one of the easier ones as well when it comes to multi-threading.
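
(For illustration, a minimal sketch of the kind of fix being described here: one generator per thread instead of the shared hidden state inside rand(). This assumes C++11; the function name and the seeding scheme are made up for the example.)

    // Sketch: a per-thread generator as a stand-in for rand(), which keeps one
    // hidden global seed shared by every thread and so isn't thread safe.
    #include <cstdint>
    #include <cstdio>
    #include <cstdlib>   // RAND_MAX
    #include <random>
    #include <thread>
    #include <vector>

    // Each thread gets its own engine, seeded from a fixed base seed plus a
    // per-thread index, so a given thread's sequence is repeatable.
    int thread_rand(std::uint32_t base_seed, unsigned thread_index)
    {
        thread_local std::mt19937 gen(base_seed + thread_index);
        std::uniform_int_distribution<int> dist(0, RAND_MAX);
        return dist(gen);
    }

    int main()
    {
        std::vector<std::thread> workers;
        for (unsigned i = 0; i < 4; ++i) {
            workers.emplace_back([i] {
                std::printf("thread %u: %d\n", i, thread_rand(12345u, i));
            });
        }
        for (auto& t : workers) t.join();
    }

Even with something like this, which numbers end up where still depends on how the work is split across threads, which matches the "doesn't work as expected if you change the number of threads" caveat above.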
 
DX12 was touted as the saviour of mGPU setups, where it would just "work"; it turns out it's a right mess, especially now that the developer actually needs to care about putting in more work.

Personally, there isn't a single GPU that can run games at the settings and framerate I want at 1440p, never mind anything higher.

Either Polaris and Pascal are significantly better, or folks like myself (while in the minority) will still need to buy more than one card, and then hope the likes of EA or Ubisoft actually bother putting in the extra work. I doubt they will though, as pure multi-threaded code is a nightmare at best to implement, never mind doing it for GPUs and for two companies with very different architectures.
 
Hopefully DX11 will continue to be available for serious performance.

I can run the new ROTTR game maxed @ 2160p and get:

DX12 on one card: 17 fps

DX11 on four cards: well over 60 fps

No contest, is it?
 
At the moment there isn't even a big visual difference to make me want to use the DX12 version at all either.

I really hope AMD and NV figure something out, otherwise it looks like it's going to be a good while before it'll be worth jumping on.
 
IIRC, DX10 and 11 were terrible on release, were they not?

While I'm skeptical of the gains the marketing tells us about, I'm sure in time it will improve.

I thought the big gains were in CPU usage too, so games like Tomb Raider and Hitman, which seem more GPU-focused, are not going to see the major gains.

That said, my big hope here was of course multi-GPU usage, which as everyone says has been crap and is only getting worse. I hope they can turn this around.

It will be a while before we see a proper game made with DX12 in mind though; thus far it's being tacked on to games that have already spent years in development.
 
I missed this Anandtech article last week discussing the state of DX12 with Microsoft and Oxide Games (Ashes of the Singularity)

http://www.anandtech.com/show/10136/discussing-the-state-of-directx-12-with-microsoft-oxide-games

Some interesting bits from the interview:

Baker said that Oxide’s next game may go DX12-exclusive, as adoption is strong and doing so would give Oxide’s developers the freedom to implement some new rendering strategies that they can’t properly implement in a game that needs to support both DX11 and DX12.

So it sounds like the real gains will come in a few years, once we are out of the transitional period where games support both DX11 and DX12 because gamers are sticking with Windows 7/8.1.

The low-level nature of DX12 means that more control over optimizations will be in the hands of developers – and they will need to rise up to the challenge for best results – as opposed to video card drivers.

memory management under DirectX 12 is still a challenge, albeit one that’s evolving. Under DirectX 11 memory management was typically a driver problem, and the drivers usually got it right – though as Baker noted in our conversation, even now they do sometimes fail when dealing with issues such as memory fragmentation. DX12 on the other hand gives all of this control over to developers, which brings both great power and great responsibility. PC developers need to be concerned with issues such as memory overcommitment, and how to gracefully handle it. Mantle users will be familiar with this matter: most Mantle games would slow to a crawl if memory was overcommitted, which although better than crashing, is not necessarily the most graceful way to handle the situation.

Let's hope that developers can rise to that challenge, as it sounds like the gains we used to get with new drivers may become a thing of the past.
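
(To give a flavour of what the quote above means by handing memory management to the developer, here is a minimal sketch of the budget check an engine might do before a large allocation. It uses the DXGI video-memory query that sits alongside DX12; the function name and the caller's adapter setup are assumed, and error handling is omitted.)

    // Sketch: ask the OS how much local (on-card) memory this process is being
    // budgeted, so the engine can stream assets out instead of overcommitting.
    #include <dxgi1_4.h>

    bool WithinVideoMemoryBudget(IDXGIAdapter3* adapter, UINT64 bytesAboutToAllocate)
    {
        DXGI_QUERY_VIDEO_MEMORY_INFO info = {};
        adapter->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &info);

        // If this allocation would blow past the budget, drop mip levels or
        // evict something rather than let performance fall off a cliff.
        return info.CurrentUsage + bytesAboutToAllocate <= info.Budget;
    }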

While every game will be unique, in the case of Ashes Oxide has already run into situations where they are both CPU memory bandwidth and CPU core count limited. Much of this has to do with the game’s expensive AI and simulation code paths, but as Baker was all too proud to recount, Ashes’ QA team had to go track down a more powerful system for multi-GPU testing, as their quad core systems were still CPU limited. DX12’s low-level nature is going to reduce CPU usage in some ways, but with its multithreading capabilities it’s going to scale it back up again in other ways that may very well push the limits of conventional quad core CPUs in other games as well.

Still sounds like those 6/8 core systems might come into their own in the future.
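
(The multithreading mentioned in that quote is largely about command recording: under DX12 an engine can build command lists on several worker threads, one command allocator per thread, and submit them together. A rough sketch of the idea, with device/pipeline setup, synchronisation and error handling all omitted and the function names made up:)

    // Sketch: record one command list per worker thread, then submit them all
    // from the main thread. Each recording thread needs its own allocator.
    #include <d3d12.h>
    #include <wrl/client.h>
    #include <thread>
    #include <vector>
    using Microsoft::WRL::ComPtr;

    void RecordAndSubmit(ID3D12Device* device, ID3D12CommandQueue* queue, unsigned threadCount)
    {
        std::vector<ComPtr<ID3D12CommandAllocator>>    allocators(threadCount);
        std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(threadCount);
        std::vector<std::thread>                       workers;

        for (unsigned i = 0; i < threadCount; ++i) {
            device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                           IID_PPV_ARGS(&allocators[i]));
            device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                      allocators[i].Get(), nullptr, IID_PPV_ARGS(&lists[i]));
            workers.emplace_back([&, i] {
                // ... record this thread's share of the frame's draw calls ...
                lists[i]->Close();
            });
        }
        for (auto& t : workers) t.join();

        // Submission happens once, in order, on the main thread.
        std::vector<ID3D12CommandList*> raw;
        for (auto& l : lists) raw.push_back(l.Get());
        queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
    }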
 
It's a good article that highlights a lot of issues with DX12 that I've talked about before - giving more control to developers is likely to generate lots of issues for most developers. We are going to get wildly different game performance and a host of new game bugs. Gears of War scenarios will become even more common, unfortunately, and performance being slower under DX12 than under DX11 will be nothing extraordinary either, as developers will be more responsible for architecture-specific optimizations. Cards with the biggest market share will likely get the most developer focus, of course.

The best developers will make big improvements; the average developer will likely run into lots of issues. Gears of War is exactly the kind of thing that will become very common, e.g. some specific architectures having terrible performance.
 
Nvidia has the biggest PC market share, but the consoles both run AMD GPUs.

Really, DX12 looks to favour AMD at the moment, as most games are developed for the PS4 and ported over.
 
Well, it is something Nvidia will really need to get a handle on, even if it means employing a group of programmers to go to individual studios and help them optimize for their architectures. Otherwise their market domination will start to dwindle.
 
Or just pay the developers to hamstring their engines/over-tessellate/break AMD's features ;)
 
From a cursory glance, though I lack a lot of experience with either, it looked like Vulkan was the better of both worlds - with DX12 you seem to have to deal with some really nasty memory management with little option. In fact, the more I look at it, the more I wonder if it's not a massive misstep and, unless you are a John Carmack or Tim Sweeney, more of a barrier than an enabler.
 
You would think the game devs would put multi-GPU support in just for the free press from the review sites that publish SLI/Crossfire reviews. That is a lot of free advertising even if it's only for 1% of customers, and the game gets massive exposure in the enthusiast community. Really worth doing for AAA titles. Look at all the press DX12 games are getting; it seems worth the development cost.
 
Problem is, as this quote suggests:

freedom to implement some new rendering strategies that they can’t properly implement in a game that needs to support both DX11 and DX12.

Utilising multiple GPUs at game-engine level in DX12 largely requires compromising on the ability to support DX11 - it isn't just a case of switching on AFR, but of building the ability to farm off sections of your entire compute and rendering pipeline as effectively as multi-GPU allows.
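
(For context on why this is engine-level work: under DX12 even finding the GPUs is the application's job. The engine enumerates adapters, creates a device per GPU it wants to drive, and then has to decide itself how to split the frame and move data between them - nothing is switched on in the driver. A minimal sketch of just the enumeration side, error handling omitted and the function name made up:)

    // Sketch: explicit multi-adapter starts with the app enumerating GPUs and
    // creating one D3D12 device per adapter it intends to use.
    #include <d3d12.h>
    #include <dxgi1_4.h>
    #include <wrl/client.h>
    #include <vector>
    using Microsoft::WRL::ComPtr;

    std::vector<ComPtr<ID3D12Device>> CreateDevicesForAllGpus()
    {
        ComPtr<IDXGIFactory4> factory;
        CreateDXGIFactory1(IID_PPV_ARGS(&factory));

        std::vector<ComPtr<ID3D12Device>> devices;
        ComPtr<IDXGIAdapter1> adapter;
        for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
            DXGI_ADAPTER_DESC1 desc = {};
            adapter->GetDesc1(&desc);
            if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
                continue;   // skip the software rasteriser

            ComPtr<ID3D12Device> device;
            if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                            IID_PPV_ARGS(&device))))
                devices.push_back(device);
        }
        // From here the engine decides what each GPU does (AFR, split-frame,
        // offloading post-processing, ...) and copies data between them explicitly.
        return devices;
    }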
 