HITMAN runs better in DX12 than in DX11 SLI, even though the raw frame rates are objectively higher overall with dual GPUs.
I'd agree, CF and SLI are one big massive bag of balls; the only thing I found CF Fury X good for was making my system look good, quite the price to pay for aesthetics.
I'm coming round to this opinion as well. I fear this will be my last SLI setup, having had them since the 680 days. Will more than likely go for a high-spec Pascal or Polaris card next.
Lol, that's pretty much exactly my post from further up. AMD and Nvidia should either fully support it or bin it.
It's very hard to write multi-threaded code in general. It adds a lot of developer time. Even the simplest code can be broken; all your favourite functions just no longer work safely (and worse still, you get no warning unless you study the documentation in excruciating detail).
For example, I spent a couple of weeks getting a simple single-threaded app to be thread safe. Simple things like rand() no longer work correctly. It took me a few days to get a random number generator that was thread safe and worked appropriately with fixed seeds, and it still doesn't work as expected if you change the number of threads.
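To illustrate the kind of fix that post describes, here's a minimal sketch (my own illustration, not the poster's actual code) of the usual workaround: give each thread its own generator with a deterministic seed instead of sharing the global rand() state. The thread and draw counts are arbitrary.

```cpp
#include <cstdio>
#include <random>
#include <thread>
#include <vector>

// Each thread owns its own generator, seeded deterministically from a fixed
// base seed plus the thread index, so no shared rand() state is ever touched.
constexpr unsigned kBaseSeed = 1234;   // fixed seed for repeatability
constexpr int kThreads = 4;
constexpr int kDrawsPerThread = 3;

void worker(int index)
{
    std::mt19937 rng(kBaseSeed + static_cast<unsigned>(index));
    std::uniform_int_distribution<int> dist(0, 99);
    for (int i = 0; i < kDrawsPerThread; ++i)
        std::printf("thread %d draw %d: %d\n", index, i, dist(rng));
}

int main()
{
    std::vector<std::thread> threads;
    for (int t = 0; t < kThreads; ++t)
        threads.emplace_back(worker, t);
    for (auto& t : threads)
        t.join();
    // Each thread's sequence is repeatable run to run, but the overall set of
    // numbers still shifts if the thread count changes - the same caveat the
    // post above describes.
    return 0;
}
```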
DX12 was touted as the saviour of mGPU setups, where it would just "work"; it turns out to be a right mess, especially now that the developer actually needs to put in more work.
Personally there isn't a single GPU that can run games at the settings and framerate I want at 1440p, never mind anything higher.
Either Polaris and Pascal are significantly better, or folks like myself (while in the minority) will still need to buy more than one card, and then hope the likes of EA or Ubisoft actually bother putting in the extra work.
Hopefully DX11 will continue to be available for serious performance.
I can run the new ROTTR game maxed @ 2160p and get:
- DX12 on one card: 17 fps
- DX11 on 4 cards: well over 60 fps
No contest, is it.
Thought this might interest some of you in regards to Multi GPU:
http://venturebeat.com/2016/03/13/s...you-mix-amd-and-nvidia-video-cards-in-one-pc/
Baker said that Oxide’s next game may go DX12-exclusive, as adoption is strong and doing so would give Oxide’s developers the freedom to implement some new rendering strategies that they can’t properly implement in a game that needs to support both DX11 and DX12.
The low-level nature of DX12 means that more control over optimizations will be in the hands of developers – and they will need to rise up to the challenge for best results – as opposed to video card drivers.
Memory management under DirectX 12 is still a challenge, albeit one that’s evolving. Under DirectX 11 memory management was typically a driver problem, and the drivers usually got it right – though as Baker noted in our conversation, even now they do sometimes fail when dealing with issues such as memory fragmentation. DX12 on the other hand gives all of this control over to developers, which brings both great power and great responsibility. PC developers need to be concerned with issues such as memory overcommitment, and how to gracefully handle it. Mantle users will be familiar with this matter: most Mantle games would slow to a crawl if memory was overcommitted, which although better than crashing, is not necessarily the most graceful way to handle the situation.
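For a concrete flavour of what "handling overcommitment gracefully" can look like, here is a minimal sketch (my own illustration, not from Oxide's engine) built on the DXGI/D3D12 residency APIs: poll the OS-provided video memory budget and evict lower-priority heaps before usage exceeds it. The streamingHeaps container and the TrimToBudget name are hypothetical.

```cpp
#include <d3d12.h>
#include <dxgi1_4.h>
#include <vector>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Hypothetical helper: called once per frame with the renderer's list of
// evictable streaming heaps (least important last).
void TrimToBudget(IDXGIAdapter3* adapter,
                  ID3D12Device* device,
                  std::vector<ComPtr<ID3D12Heap>>& streamingHeaps)
{
    DXGI_QUERY_VIDEO_MEMORY_INFO info = {};
    if (FAILED(adapter->QueryVideoMemoryInfo(
            0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &info)))
        return;

    // If current usage is creeping past the OS budget, evict streaming heaps
    // until we are back under it. Evicted heaps must be made resident again
    // (ID3D12Device::MakeResident) before the GPU touches them.
    while (info.CurrentUsage > info.Budget && !streamingHeaps.empty()) {
        ID3D12Pageable* pageable = streamingHeaps.back().Get();
        device->Evict(1, &pageable);
        streamingHeaps.pop_back();
        // Re-query so the loop sees the effect of the eviction.
        adapter->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &info);
    }
}
```

Under DX11 the driver would make this kind of decision for you; under DX12 it is the engine's job, which is exactly the power/responsibility trade-off described above.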
While every game will be unique, in the case of Ashes Oxide has already run into situations where they are both CPU memory bandwidth and CPU core count limited. Much of this has to do with the game’s expensive AI and simulation code paths, but as Baker was all too proud to recount, Ashes’ QA team had to go track down a more powerful system for multi-GPU testing, as their quad core systems were still CPU limited. DX12’s low-level nature is going to reduce CPU usage in some ways, but with its multithreading capabilities it’s going to scale it back up again in other ways that may very well push the limits of conventional quad core CPUs in other games as well.
Nvidia has the biggest PC market share, but the consoles both run AMD GPUs.
Really DX12 looks to favour AMD at the moment as most games are developed for the PS4 and ported over.
Well it is something Nvidia will really need to get a handle on, even if it means employing a group of programmers to go to individual studios and help them optimise for their architectures. Otherwise their market dominance will start to dwindle.
The best developers will make big improvements; the average developer will likely run into lots of issues. Gears of War is exactly the kind of thing that will become very common, e.g. specific architectures getting terrible performance.
Or just pay the developers to hamstring their engines/over-tessellate/break AMD's features!
You would think the game devs would put multi-GPU support in just for the free press from the review sites that publish SLI/Crossfire reviews. That's a lot of free advertising even if it's only for 1% of customers, and the game gets massive exposure in the enthusiast community. Really worth doing for AAA titles. Look at all the press DX12 games are getting; it seems worth the development cost.