
DirectX® 12 for Enthusiasts: Explicit Multiadapter

NVLink and AMD's Data Fabric should improve GPU-to-GPU communication. At least from what I've read, the latency of AMD's Coherent Data Fabric is lower than PCIe's, and latency is a bigger factor than bandwidth in this situation.

Low-abstraction APIs have been needed for a long time. We have always been CPU-bound in graphics. Devs shouldn't have had to jump through hoops just to get something to work. It's not a matter of optimisation; they were forced to work within a tiny draw-call envelope compared to a console.

That's why I give the example of GTA IV on DX9. It wasn't that the game was badly optimised; the engine was heavily optimised to run on the 360 and PS3. It was DX9 that needed the CPU grunt, because the game is open world and DX9's draw-call limit is tiny.

DX10 would have improved things with its higher draw-call limit, but even DX11 holds back GTA V.

We should have had these APIs, in some form, years ago.
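
For what it's worth, a minimal sketch of the difference (my illustration, not anyone's shipping code): under DX11 every draw funnels through one immediate context, while DX12 lets each worker thread record its own command list. The thread count is arbitrary, and all the real setup (swap chain, pipeline state, fences) is trimmed.

```cpp
// Sketch: recording draw work on several threads with D3D12 command lists.
// Assumes Windows + the D3D12 SDK; error handling trimmed for brevity.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    const int workers = 4;  // arbitrary worker count
    std::vector<ComPtr<ID3D12CommandAllocator>> allocs(workers);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(workers);
    std::vector<std::thread> threads;

    for (int i = 0; i < workers; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocs[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocs[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));
        // Each thread records its own batch of draws in parallel, instead of
        // every draw squeezing through one immediate context as in DX11.
        threads.emplace_back([&lists, i] { lists[i]->Close(); });
    }
    for (auto& t : threads) t.join();
    // The closed lists would then all be submitted on one command queue.
}
```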
 
Both NVLink and Data Fabric are aimed at HPC.
NVLink allows 80-200 GB/s.
Data Fabric up to 100 GB/s.

PCIe 4.0 will come to desktop well before either of those, and will top out around 31 GB/s for an x16 slot.
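
Back-of-envelope for where that ~31 GB/s comes from (my arithmetic, ignoring protocol overhead beyond the line encoding):

PCIe 4.0 runs at 16 GT/s per lane with 128b/130b encoding, so:
16 GT/s × (128/130) ÷ 8 bits ≈ 1.97 GB/s per lane
1.97 GB/s × 16 lanes ≈ 31.5 GB/s per direction for an x16 slot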
 
I'm sure I read that Data Fabric is coming to Zen CPUs and Greenland GPUs as part of their Global Memory Interface (GMI). Data Fabric has only been confirmed to run at 100 GB/s over 4 links, i.e. 25 GB/s per link, so it could theoretically reach 400 GB/s over 16.

They didn't state a maximum number of links available for inter-chip communication, just the 4 links in the HPC APU.

And they said chips that support CDF will use it to communicate instead of PCIe. GMI can fall back to PCIe for chips that don't support CDF.

So CDF should drop on desktop next year with Zen and Greenland, if all their new products support it, and assuming it works over the same traces as PCIe.

But my assumption that it will arrive on desktop is no different from people assuming NVLink will be used between desktop GPUs instead of SLI bridges with Pascal.

I base my assumption on the fact that they are using CDF to replace HyperTransport, which is used on desktop AMD boards, and that the Zen CPU and Greenland ASIC were talked about individually, not just as an APU.
 
The difference here is that everyone can use DirectX 12, so devs now have more reason to push games forward. Mantle was AMD-only, and only two games pushed something new under it: Civilization: Beyond Earth, the first game I believe to officially support SFR, and Thief, which used some CrossFire Mantle tech whose name I forget; I'm sure AMDmatt will help me out here.

Back to DirectX 12 though; after all, this is not a Mantle thread.
There are loads of very demanding graphics effects a game could benefit from running on another GPU: shadow effects at much higher detail, smoke and particle effects, or even just draw distance.

You're correct that not everyone will see the same effects if their hardware doesn't support them, but don't we already have that now? In BF4 I can turn settings down, or enable settings other players might not be running. Why is this any different?



Yeah, even that is very demanding, man! There are all kinds of things I would love to see! :D

But wouldn't most of the things you've listed be possible with more GPU power anyway?
It all seems like it's just pushing current effects a little bit harder. The big need for that is more GPU power, not the ability to use two cards independently.

You sort of made my point: like what we currently already have, these things seem like superficial effects that don't really affect the games that much. I don't see how the things you've listed are going to be earth-shatteringly game-changing.
 

The same could be said about multi-GPU and frame rate: what's the point of CrossFire or SLI if all we need is a more powerful GPU?

Maybe the effects you could take advantage of here need raw GPU power to run, and the second GPU could perform these highly demanding tasks without reducing performance on the first.
 

Agreed, but isn't that why people buy multiple GPUs, because one isn't enough?
Do any of these effects require this new tech, rather than just the power afforded by having 2, 3 or 4 GPUs?
 
The main one that will affect the look of games is the ability to use hundreds, if not thousands, of real light sources in low-abstraction APIs. That will make games look more realistic.
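
The usual technique behind claims like that is tiled (or clustered) shading, which the post above doesn't name, so take this as my reading of it. A toy CPU version of the binning step follows; real engines run it in a compute shader, and the tile size and light count here are invented.

```cpp
// Toy sketch of the binning step in tiled shading. Real engines run this in
// a compute shader; the screen split, tile size and light count here are all
// invented for illustration.
#include <algorithm>
#include <cstdio>
#include <vector>

struct Light { float x, y, radius; };  // screen-space centre + reach in pixels

int main() {
    const int tilesX = 16, tilesY = 9, tileSize = 120;  // 1920x1080 screen
    std::vector<Light> lights;
    for (int i = 0; i < 1000; ++i)      // 1000 "real" lights scattered about
        lights.push_back({float(i * 97 % 1920), float(i * 61 % 1080), 60.0f});

    // Bin each light into only the tiles its radius touches; a pixel then
    // shades the handful of lights in its tile instead of all 1000.
    std::vector<std::vector<int>> bins(tilesX * tilesY);
    for (int i = 0; i < (int)lights.size(); ++i) {
        const Light& l = lights[i];
        int x0 = std::max(0,          int((l.x - l.radius) / tileSize));
        int x1 = std::min(tilesX - 1, int((l.x + l.radius) / tileSize));
        int y0 = std::max(0,          int((l.y - l.radius) / tileSize));
        int y1 = std::min(tilesY - 1, int((l.y + l.radius) / tileSize));
        for (int ty = y0; ty <= y1; ++ty)
            for (int tx = x0; tx <= x1; ++tx)
                bins[ty * tilesX + tx].push_back(i);
    }
    std::printf("tile (0,0) shades %zu of %zu lights\n",
                bins[0].size(), lights.size());
}
```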
 
There was a demo released recently of the Resident Evil mansion in UE4. The lighting looks amazing, especially the chandelier/windows towards the end. Would love to get a playable version for the CV1.
 
He doesn't know and doesn't care; it's just more AMD PR making him wet with excitement.

Bah, Bah, bah, bah, bah, bah

Nothing to do with AMD at all. Everything I'm excited about will run on Nvidia or AMD, because it's DirectX 12.

Go troll elsewhere, Mr Green.
 
But weren't you looking forward to these features?
If you don't know how they can be used, that just seems overly optimistic...

You get an idea of what can be done, and I've expressed that enough already. Will this be used the way I see it? I don't KNOW the bloody answer.

Like I said, who knows what's around the corner; for all we know, AMD and Nvidia could be working on something big that will allow all this to be added to games.

Another API also supports this, and it was talked about when it was released. If you want to do more research, use Google to find out what could happen in the future.
 

I have no idea what can be done that couldn't be done with multiple GPUs anyway, except performance, and I'm still not sure how using one GPU for one thing and another for something else will be any better than using them both for the same thing.

It just seems like excitement over something we don't really know the potential of, for the sake of getting excited over something. Or creating another DX12 thread...

I sometimes think people would get excited if there were reports Microsoft had added a MacGuffin shader and a Placebo Engine to DX12...
No idea what they can do or whether they do anything useful, but they're new, so let's hype it up and get excited!
 

Man, I feel like I go around in circles with you sometimes.

OK, I'll say again why this could be a big thing.
Let's use another example: SSAA is very demanding and kills performance. Now what would happen if you dedicated GPU2 to SSAA alone while the other GPU went along doing its normal work?
You wouldn't get the performance hit of running it all on one GPU (a rough sketch of the idea is below).

Other uses could be a greater level of real-time lighting, or physics-based stuff.
There are countless things I can think of that this could be used for.
We know what it can do; it was talked about during another API's development. The question I don't know the answer to is what devs will choose to do with it.
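
For the curious, a rough sketch of the first step (my illustration; cross-adapter copies and synchronisation are omitted, and routing SSAA to the second device is just the example from above):

```cpp
// Sketch: enumerate every adapter and create a D3D12 device on each; the
// first step toward giving a heavy pass like SSAA its own GPU. Assumes
// Windows + the D3D12 SDK; error handling and synchronisation trimmed.
#include <windows.h>
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>
#pragma comment(lib, "d3d12.lib")
#pragma comment(lib, "dxgi.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device);
    }
    // With two devices, the scene could render on devices[0] while the SSAA
    // resolve runs on devices[1], with the frame copied across through a
    // cross-adapter shared heap (D3D12_HEAP_FLAG_SHARED_CROSS_ADAPTER).
}
```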
 

In theory, not using AFR means you can use on-chip graphics in parallel with your dedicated GPU, because the two don't need to be roughly the same speed or hold the same memory assets (which also frees up memory for other things, like better textures); see the sketch below. The question is whether game devs can implement this in a way that works on every imaginable setup and combination of GPUs out there. It might be a huge pain that makes it a pointless exercise.

The fact that it's technically possible is interesting, though. If DX12 makes multi-GPU support easier to implement, I'm all for it. Mostly I'm looking forward to lower CPU overheads for multi-GPU support.
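
Something like this is how an app might tell the iGPU from the dGPU so each can be handed different work (my sketch; the "little dedicated VRAM = integrated" rule is a heuristic, not something DXGI guarantees):

```cpp
// Sketch: telling the integrated GPU apart from the discrete one so each can
// be handed different work. The "little dedicated VRAM = integrated" rule is
// a heuristic, not something DXGI guarantees. Windows-only.
#include <windows.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cwchar>
#pragma comment(lib, "dxgi.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE) continue;  // skip WARP
        // An iGPU typically reports little or no dedicated VRAM, a dGPU
        // reports gigabytes; route lighter helper passes to the former.
        wprintf(L"%ls: %llu MB dedicated VRAM\n", desc.Description,
                (unsigned long long)(desc.DedicatedVideoMemory >> 20));
    }
}
```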

Anyone know when we'll see the first big DX12 game release?
 

But in both those scenarios you have half as much actual GPU power driving the game. So unless the SSAA or the real-time lighting uses 100% of the second GPU, or at least as much of it as CrossFire/SLI would, you're wasting grunt that could have been speeding the game up.

It's like now: how many people with two 970s/980s/980 Tis/Titan Xs run one card as a dedicated PhysX card instead of using it in SLI, because minimising the hit from PhysX is more useful than having a second GPU rendering the game?
 
I believe BF4 is getting a DX12 reworking, but I've no idea how true that is. There's that Fable game coming up; the most telling, though, will probably be Ashes of the Singularity.

If any developers are likely to pick up the more unusual variations of multi-GPU support, such as using APU graphics, it's one or more of AMD's Mantle partners. In fact, I'm pretty sure I've already seen a demo of Ashes of the Singularity doing just that.
 
I would love it if games started to use the onboard graphics of Intel or AMD chips to push things up a small notch. Most mainstream processors have an onboard graphics chip, so it would make sense for devs to start supporting it.

As an SLI user, the thing that excites me most is the better frame pacing and lack of stutter that should come with DX12. At the moment I don't have too many problems, but there's always that one game where I feel I'm not getting full use of both cards because I have to turn something down or off to eliminate the stutter.
 

I could see AMD potentially implementing support for that sort of thing in their drivers, but Nvidia would never allow it. Currently their drivers prevent running an AMD card alongside an Nvidia card.
 