
Is DirectX 12 Worth the Trouble?

Who is buying two different brands of GPU to have in the same system? No one.

I did :p

Bought a 670 to try out a 290X/670 hybrid 'best of both worlds' system: Mantle via the 290X and GPU PhysX via the 670. Mantle worked; GPU PhysX didn't - the driver disabled it. Makes me wonder if anything has changed regarding Nvidia enforcing disabled GPU PhysX when an AMD discrete card is present, as Nvidia now discloses in the driver notes that GPU PhysX is disabled; they never did back then.


Explicit multi-adapter does work, it doesn't need different cards, and if programmed for properly it can work seamlessly with different configs. Potentially it's very useful if a game can farm out its workload discretely: even just splitting the rendering of the game scene and the UI across different GPUs could provide a fairly significant performance boost in some games. Combining VRAM in any useful way for realtime-sensitive applications, however, is as you said a long way off and requires a completely new generation of hardware.
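
For anyone wondering what that looks like at the API level, here's a minimal sketch (my function name, standard D3D12/DXGI headers assumed) of the setup step. Explicit multi-adapter just means the engine enumerates every adapter and creates its own device on each; the vendor doesn't matter, only whether the adapter can hit the feature level you need.

Code:
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

// Create an independent D3D12 device on every hardware adapter present.
// From here the engine, not the driver, decides which queue gets what work
// (e.g. scene rendering on device 0, UI composition on device 1).
std::vector<ComPtr<ID3D12Device>> CreateDevicePerAdapter()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip WARP/software adapters

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device); // AMD, Nvidia, Intel - all equally usable
    }
    return devices;
}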

Not 100% sure what you're getting at with the first part, as it's not disagreeing with me or anything? E-mGPU (I guess?) for different brands works but isn't very useful; e-mGPU for same-brand cards works fantastically - Sniper Elite 4 gets literally 100% scaling in Crossfire using it.

I was responding to Kaap's assertion that loads of the promised DX12 stuff doesn't work, so I was just highlighting that e-mGPU with different brand cards (the only thing he could think of to mention when I called him on the 'loads' part of his statement) does in fact work; it's just not practical.

Honestly, the stacked RAM just won't work on any current generation of hardware. If your local access is, what, 512GB/s if we think upper-end Vega, there is no way to sensibly utilise memory on a per-frame basis from another card sitting across a 16GB/s bus. Even with PCI-E 4.0 and 5.0, it's never going to let you store twice as much memory and run way higher settings. Even if it is possible, which it likely is, it would also likely reduce performance so badly that no one would ever use it.
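
To put rough numbers on that bus argument (a back-of-envelope calc using the figures above, nothing more):

Code:
#include <cstdio>

int main()
{
    const double local_gbs = 512.0; // on-card bandwidth, GB/s (upper-end Vega guess from the post)
    const double bus_gbs   = 16.0;  // PCIe 3.0 x16 theoretical, GB/s
    const double fps       = 60.0;

    // Data you can realistically touch per frame from each pool:
    printf("local:  %.2f GB/frame\n", local_gbs / fps); // ~8.53 GB
    printf("remote: %.3f GB/frame\n", bus_gbs / fps);   // ~0.267 GB
    return 0;
}

That's a 32x gap per frame, which is why the second card's VRAM can't just be treated as extra capacity for running higher settings.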

Things might change in terms of officially using joined RAM pools, but really that hardware is two interposer-based chips on one PCB with some kind of extremely high-speed but lowish-power interconnect, using an interposer bridge between the two chips. In effect you're talking about one graphics card. For it to be effective you need communication speeds several times higher, and the only way that will be implemented is at the silicon level, imo.
 
I disagree a bit on the usefulness. If utilised properly, rather than just thrown indiscriminately into a load-balancing attempt like AFR, DX12 explicit multi-adapter can be used very effectively with mixed cards to farm out the workload where you have carnal knowledge of how your game works. I.e. if you know you can render all the long-distance LOD parts of the scene without having to touch data on any other GPU, and then compose the result back into the scene, that can give a good speed-up with mixed GPUs - but it's very game specific.
 
Honestly, I'm still not sure what you're getting at. Are you talking about balancing load with, say, a Fury X and a 290X, so when you buy a new card you can keep using the old one? Or are you talking about having an Nvidia and an AMD card in the same system? I've already said it works well, better than you could really hope for. The reason it's useless is that the number of people who have an AMD and an Nvidia card in the same system for the purpose of gaming is probably 20, all of them working at Nvidia or AMD, or devs playing around with it.

It's useless because no one is around to take advantage of it, not because it doesn't work. Explicit mGPU under DX12 is great, giving devs control rather than the drivers, and giving them the ability to use any number of cards' resources as they see fit, not as a DX11 driver sees fit. But mixing and matching AMD and Nvidia cards is near worthless, because even if support were added to every game, 99.99999% of gamers wouldn't run a system like that.
 
Ah yes, you did make up the image improvement. NO API IMPROVES THE IMAGE DIRECTLY. None, ever. It helps bring up performance so that, if you want to, you can add higher image-quality settings; simple as that.

Second, multi-GPU with different brands of cards DOES work, and I believe it's used in AoS. But it's also pointless: who is buying two different brands of GPU to have in the same system? No one. Where stacked memory literally doesn't get used because it's a pipe dream, mGPU with different cards has been proven to work; it's just equally pointless because no one will ever use it.

As for the rest, I'm not sure what the hell you're attempting to bring into the discussion here. Four Maxwells are faster because Nvidia stopped supporting 4-way on Pascal... which is because DX12 sucks? Really? Nvidia realised that maybe 8 people in the world bothered with more than two Nvidia cards in a system; the cost of supporting it and of trying to get devs to scale beyond two cards simply wasn't offering any payback, so they stopped. It has precisely nothing at all to do with DX12.

I know, DX12 is so bad that global warming is getting worse, the rich are getting richer, Trump got elected and child slavery still exists. Is there anything else that has nothing to do with DX12 that you want to blame on DX12?

:D

I have got plenty of AMD cards, but if I wanted to run DX12 it would have to be on Nvidia hardware.

It is not my fault that the best AMD can do is 2-year-old cards; that is down to AMD.

Do you even look at articles from benchmark sites? They show that under DX12 Nvidia either gains nothing or loses fps, while DX12 brings AMD cards up to the performance levels of Nvidia's newer GPUs, or has them overtaking Nvidia GPUs.

Unless Nvidia have sent you special GPUs that happen to have DX12 enabled?

Yup, a 2+ year old AMD GPU matching Nvidia's less-than-a-year-old 1070 is the best they can do; makes you wonder what Vega will be like... Let's not even look at the older AMD GPUs compared to the equivalent Nvidia cards :o

Nvidia and DX12: cutting off their nose to spite AMD's face, even to the detriment of their own users... :p

Pretty much :p :D
 
The thought of explicit GPU support between vendors makes me sigh, very deeply. For most of these studios, resources are stretched thin enough without worrying about those difficulties, and, as above, nobody with any sense would run such a configuration.

I have to say, though, after giving multiple titles the benefit of the doubt: Nvidia's DX12 support is seriously hit and miss. There are exceptions to the rule, but generally speaking frame times are consistently better for me sticking to DX11. So, setting aside the potential for progress, whether that be more effective buffer management or explicit GPU support, I'm currently not overly impressed. I've not tried the AMD side, as I didn't keep my Fury X long enough, but the data from all the usual sources suggests things aren't quite as shaky there.

I think a lot of that is down to some form of prioritisation. Nvidia can be aggressive in certain areas, but when things take a back seat, they really take a back seat.
 
Honestly, I'm still not sure what you're getting at. Are you talking about balancing load with, say, a Fury X and a 290X, so when you buy a new card you can keep using the old one? Or are you talking about having an Nvidia and an AMD card in the same system? I've already said it works well, better than you could really hope for. The reason it's useless is that the number of people who have an AMD and an Nvidia card in the same system for the purpose of gaming is probably 20, all of them working at Nvidia or AMD, or devs playing around with it.

It's useless because no one is around to take advantage of it, not because it doesn't work. Explicit mGPU under DX12 is great, giving devs control rather than the drivers, and giving them the ability to use any number of cards' resources as they see fit, not as a DX11 driver sees fit. But mixing and matching AMD and Nvidia cards is near worthless, because even if support were added to every game, 99.99999% of gamers wouldn't run a system like that.

It doesn't matter what the GPUs are, just the performance/caps profile. People seem to be completely misunderstanding what it is capable of doing, and reasonably "easily" if you understand how your application works. For instance, if you know a certain scene element is taking up, say, 25% of your GPU rendering time but doesn't require any resources relevant to the rest of the scene, you can farm that work out to another GPU in the system, as long as it has the relevant performance and capabilities, and then recombine it back into the final scene. It doesn't matter if that is 2x Nvidia 1080s or a 1080 with, say, a 290X, as long as they are both capable of the relevant feature level.
 
Everyone knows it; actually admitting it out loud, however... :D

Well, now that Nvidia are supposedly, at long last... delivering a DX12 driver update that increases performance by up to 16%, it will be very interesting to see if certain people's tune changes ;) Not to mention, if there is indeed an improvement as big as there is for AMD... I can't wait to see Nvidia saying how great DX12 is, and all the DX12 titles we will now be rolling in :D :p ;)

Also, this should now confirm just which of Nvidia's cards do indeed have DX12 support...
 
:eek:

What supposed driver?

I'll crack up if there's a performance driver 7 months after release, as surely only AMD can deliver underperforming hardware on release?

You saying amazeballs could be coming sooner rather than later? :p
 
It doesn't matter what the GPUs are, just the performance/caps profile. People seem to be completely misunderstanding what it is capable of doing, and reasonably "easily" if you understand how your application works. For instance, if you know a certain scene element is taking up, say, 25% of your GPU rendering time but doesn't require any resources relevant to the rest of the scene, you can farm that work out to another GPU in the system, as long as it has the relevant performance and capabilities, and then recombine it back into the final scene. It doesn't matter if that is 2x Nvidia 1080s or a 1080 with, say, a 290X, as long as they are both capable of the relevant feature level.

Yeah, I'm still not sure you're getting it, though. I was responding in the first place specifically to Kaap, who was banging on about DX12 features that don't work and used mixed cards as an example.

I didn't say it didn't work, I said it absolutely did and that Kaap was completely wrong.

Even in your example you're talking about working out a specific grading of the performance of various cards to decide how the workload can be split. I didn't say this can't work; I even pointed out that with DX12 mGPU the developer is simply given all the resources and can choose what to do with them, rather than having a driver decide. But making that work on almost any combination means doing a lot of testing and working out what can be farmed out to what kind of card. Okay, that feature takes 25% of the rendering time on a Fury X, so let's offload it to a 380X - but it turns out the feature is so much slower on the slower card that now the Fury X is waiting on that data rather than moving on. There is balancing to be done, and the simple problem is that very few people have, or will ever have, anything but matched cards in a system.

Again, I was speaking specifically in response to the mixed cards that Kaap said didn't work, not about explicit mGPU or how it works in general. But literally any work done - even if it's two hours of programming against a baseline performance figure for every known GPU, with a table saying card X can have A through E offloaded to it but nothing else, while card Y can take A through G and Z - costs money. That time could be spent on something else. If literally 10 users who play the game would ever want to use a Fury X with a 380X, or a 1080 with a 1050 Ti, then it's not time well spent, and thus... useless.

I'm not saying it doesn't work; I really haven't talked about how it works, except that the developer has full control. I'm saying that with almost no mixed-card systems out there, the effort of working out what can be offloaded for precisely the cards in a given system is something almost no dev would ever spend time on.
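
Just to make the table idea concrete, it would boil down to something like this (all names and data here are made up purely for illustration - this is exactly the per-card tuning work no dev wants to fund):

Code:
#include <map>
#include <set>
#include <string>

// Render passes the engine can theoretically farm out.
enum class Pass { A, B, C, D, E, F, G, Z };

// Hand-tuned from profiling every supported secondary card: which passes
// it can take without leaving the primary GPU waiting on the result.
const std::map<std::string, std::set<Pass>> kOffloadTable = {
    {"R9 380X",     {Pass::A, Pass::B, Pass::C, Pass::D, Pass::E}},
    {"GTX 1050 Ti", {Pass::A, Pass::B, Pass::C, Pass::D, Pass::E,
                     Pass::F, Pass::G, Pass::Z}},
};

bool CanOffload(const std::string& gpu, Pass pass)
{
    auto it = kOffloadTable.find(gpu);
    return it != kOffloadTable.end() && it->second.count(pass) != 0;
}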
 

My point is that you are thinking too specifically. Carnal knowledge of how your game works plus explicit multi-adapter means you can write those routines once, and then both homogeneous and heterogeneous setups can benefit, not just people with weird mixed-GPU configurations. I agree that most developers aren't going to consider it worth their time at this point, but I think people underestimate the future potential.
 
Admittedly it would probably run a lot faster ported to Vulkan or something, but back in the day City of Heroes could throw an insane amount on screen and still run acceptably, largely due to good programming and some innovative approaches. IMO the ability to think outside the box and approach problems by methods off the taught, familiar path is holding back the level of detail on screen more than the API is.

If you want a lot of unique objects on screen, you need a far higher draw-call limit. Most devs use instancing to get around the problem, where one draw call draws multiple copies of the same object at once, not necessarily from the same viewpoint.
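
A minimal sketch of the instancing point, for anyone unfamiliar (D3D11 for brevity, my function and parameter names; it assumes an input layout marking the second vertex stream as D3D11_INPUT_PER_INSTANCE_DATA):

Code:
#include <d3d11.h>

// Draw 'instanceCount' copies of one mesh with a single draw call.
// Per-copy data (e.g. a world matrix each) lives in 'instanceVB' and is
// stepped per instance by the input assembler, not per vertex.
void DrawInstanced(ID3D11DeviceContext* ctx,
                   ID3D11Buffer* meshVB, UINT meshStride,
                   ID3D11Buffer* instanceVB, UINT instStride,
                   ID3D11Buffer* ib, UINT indexCount, UINT instanceCount)
{
    ID3D11Buffer* vbs[2] = { meshVB, instanceVB };
    UINT strides[2]      = { meshStride, instStride };
    UINT offsets[2]      = { 0, 0 };
    ctx->IASetVertexBuffers(0, 2, vbs, strides, offsets);
    ctx->IASetIndexBuffer(ib, DXGI_FORMAT_R32_UINT, 0);

    // One call instead of 'instanceCount' separate DrawIndexed calls.
    ctx->DrawIndexedInstanced(indexCount, instanceCount, 0, 0, 0);
}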
 

There are all kinds of innovative ways to use batching to get around draw-call limits. Admittedly it's not ideal, as it often requires realtime manipulation of mesh data and the development of additional management routines running on the CPU, especially if you have things like physics in the mix.
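
As a rough illustration of what that CPU-side batching involves (types and names here are mine, purely to show the shape of it): every frame that objects move, you re-bake their vertices into one big pre-transformed buffer so the whole lot goes down in a single draw.

Code:
#include <vector>

struct Vec3 { float x, y, z; };
struct Mat4 { float m[16]; }; // column-major 4x4

// Apply an affine transform (w assumed 1).
Vec3 Transform(const Mat4& t, const Vec3& v)
{
    return { t.m[0]*v.x + t.m[4]*v.y + t.m[8]*v.z  + t.m[12],
             t.m[1]*v.x + t.m[5]*v.y + t.m[9]*v.z  + t.m[13],
             t.m[2]*v.x + t.m[6]*v.y + t.m[10]*v.z + t.m[14] };
}

struct Mesh {
    std::vector<Vec3> vertices;
    Mat4 world; // updated by gameplay/physics each frame
};

// The management routine the post mentions: rebuilt whenever members of
// the batch move, which is the ongoing CPU cost of this approach.
std::vector<Vec3> BuildBatch(const std::vector<Mesh>& meshes)
{
    std::vector<Vec3> batch;
    for (const Mesh& mesh : meshes)
        for (const Vec3& v : mesh.vertices)
            batch.push_back(Transform(mesh.world, v)); // bake world transform
    return batch; // upload as one vertex buffer, render with one draw
}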
 

Yeah, but as you said, that's adding a complex fix, which can bring its own issues, to an already bad problem.
 

That describes a lot of what game programming is (unless you are playing it safe and building on Unreal Engine). Anything that isn't very vanilla, by-the-book use of DX - which is often what's needed to make advances in game engines - tends to be somewhat complex.
 
DX12 for me still makes my system unstable. Now, I'm not sure what is going wrong, but if I select the DX12 option in Battlefield 1 the game simply crashes, and upon reload it insta-crashes again. I have to manually disable it via the config to get back in.
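
If it helps anyone in the same boat: if memory serves, the relevant line in BF1's profile file (Documents\Battlefield 1\settings\PROFSAVE_profile - worth double-checking the path and name on your own install) is:

Code:
GstRender.Dx12Enabled 0

That keeps the game launching in DX11 until you flip it back.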
 