
DX12 980TI

OK, I was pinning my hopes on a Fury, but since AMD have lost the plot I am going to get a 980 Ti.

Question: Gears of War 4 is going to be DX12-based, so will the 980 Ti support all the latest DX12 API features, or will that be Fury-only at this point?
 
The way it's working is that AMD are stating a DX12_0 feature list, and nVidia are claiming a DX12_1 feature list.

As has been discussed on several other forums, and I think this one (in one of the Fury/980 Ti threads), even on Microsoft's MSDN there doesn't seem to be any mention of DX12_1 in the documentation that fully explains it.

This may come down the line in a few years but who knows what MS has planned for DX12 at this point :)

To be honest, by the time DX12_1 is out or DX moves on, the 980 Ti and Fury will be old tech, probably replaced by some new lightning-fast cards, so it's not really an issue. Essentially both makers support DX12, is what I'm saying. Go with whatever you can afford/prefer :)
 
Thanks guys. Gears of War will make full use of DX12, and that's only a few months away.

Whichever card you go with, both will support DX12 so all is good :)

It's whatever your purse strings can stretch to and personal preference on card etc :)

Just get the best bang for your buck :)
 
The way it's working is that AMD are stating a DX12_0 feature list, and nVidia are claiming a DX12_1 feature list.

As has been discussed on several other forums, and I think this one (in one of the Fury/980 Ti threads), even on Microsoft's MSDN there doesn't seem to be any mention of DX12_1 in the documentation that fully explains it.

The thing is, if nVidia support features that AMD don't, whether they're in the DX12 spec or not, you can be sure those features will be used in GameWorks titles.
 
The thing is, if nVidia support features that AMD don't, whether they're in the DX12 spec or not, you can be sure those features will be used in GameWorks titles.

That it will be, but you can still play without the GameWorks part :) Just an added bonus if you go with nVidia.

Or you go AMD and get Mantle/Vulkan. So both have "extras" so to speak :)
 
Win10 Build 10166:

[screenshot: s028z.jpg]

You can bet 12_1 is part of the official spec.
 
And by the time they're actually used, the Fury and Ti will be getting dug up by archaeology students from the future.
I'm not so sure. I thought the main purpose of DX12 was to make these features significantly easier for developers to use (such as multi-core CPU and multi-GPU usage), along with the other new features.

Time will tell of course, but I've got a feeling some devs will jump on it. I'm looking forward to the next Crysis game at least :)
 
With the way devs work, we will be using direct ports from the Xbox, so I imagine we will be using these current DX12 GPUs for quite a few years :D
 
That it will be, but you can still play without the GameWorks part :) Just an added bonus if you go with nVidia.

Or you go AMD and get Mantle/Vulkan. So both have "extras" so to speak :)

Mantle is no longer supported.

nVidia will support Vulkan as well, just as they have superior OpenGL drivers.
 
I'm not so sure. I thought the main purpose of DX12 was to make these features significantly easier for developers to use (such as multi-core CPU and multi-GPU usage), along with the other new features.

Time will tell of course, but I've got a feeling some devs will jump on it. I'm looking forward to the next Crysis game at least :)

For the most part DX12 makes things significantly harder for the developer. But developers that have the time and resources can extract more performance with less overhead.


It is a bit like going from a high-level programming language like Python to coding in C: much harder, but you are less abstracted from the hardware and have minimal overhead. You can shoot yourself in the foot much more easily, and you have to take care of memory management and all the other things high-level languages and APIs handle for you.

There are things that might be a little easier, because the API was designed from the ground up with those features in mind. Multi-GPU won't be a hack anymore.
 