DirectX 12 IHV Support Explained: Maxwell’s Feature Level 12_1, GCN’s Resource Binding Tier 3 and Intel’s ROVs

The DirectX 12 standard and specification is a tricky thing to understand at the best of times, and with companies throwing around claims like “full DirectX 12” support, it gets even more difficult. Nvidia has been marketing the GM200 as the first GPU with full DirectX 12 support, while AMD has been offering Resource Binding at Tier 3. Intel has remained mostly silent on the subject, but their GPUs have featured Rasterizer Ordered Views at Feature Level 11_1 for a long time.

So who exactly has the mythical full DirectX 12 support down to the last digit? The correct, technical answer: no one.


Read more: http://wccftech.com/directx-12-supp...urce-binding-tier-3-intels-rov/#ixzz3ckvKMwYk

Basically, all GPUs with FL 11_0 to 12_1 support can run the DirectX 12 API completely and fully.
 
so according to their conclusion, if my GPU doesn't support conservative rasterisation I can still run that feature... I think they might have that a bit sideways
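That's exactly the distinction the article muddles: any FL 11_0+ card can create a D3D12 device and run the API itself, but each optional feature still has to be queried before an app can use it. A minimal sketch using only the stock D3D12 caps queries (nothing beyond the public API is assumed):

```cpp
// Create a device at the FL 11_0 baseline, then query the optional-feature
// caps the article is arguing about (binding tier, conservative raster, ROVs).
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    // Any FL 11_0+ GPU can create a D3D12 device and run the API itself.
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device)))) {
        std::printf("No D3D12-capable adapter found.\n");
        return 1;
    }

    // Highest feature level the adapter actually supports.
    D3D_FEATURE_LEVEL requested[] = { D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_11_1,
                                      D3D_FEATURE_LEVEL_12_0, D3D_FEATURE_LEVEL_12_1 };
    D3D12_FEATURE_DATA_FEATURE_LEVELS fl = {};
    fl.NumFeatureLevels = 4;
    fl.pFeatureLevelsRequested = requested;
    device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS, &fl, sizeof(fl));

    // Per-feature caps: this is what decides whether e.g. conservative
    // rasterisation is usable, independent of "running DX12" at all.
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &opts, sizeof(opts));

    std::printf("Max feature level:        0x%x\n", fl.MaxSupportedFeatureLevel);
    std::printf("Resource binding tier:    %d\n", opts.ResourceBindingTier);
    std::printf("Conservative raster tier: %d\n", opts.ConservativeRasterizationTier);
    std::printf("ROVs supported:           %s\n", opts.ROVsSupported ? "yes" : "no");
    return 0;
}
```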
 
Everyone who wants the extra fps boost will get it if their GPU supports any of the DX12 tiers. The other things are just nice-to-haves that help with gfx effects. That's basically what I think that article and most others are saying. So even my 670 will get the famed DX12 fps boost, thanks to the overhead moving from CPU to GPU with better threading etc.
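For what it's worth, the threading win comes from DX12 letting every worker thread record its own command list, with one cheap submit at the end. A rough sketch, assuming a device and queue already exist; the helper names here are made up for illustration:

```cpp
// Each worker thread records its own command list in parallel (no driver
// lock, unlike DX11 immediate-context rendering), and the main thread
// submits everything with a single call.
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>

using Microsoft::WRL::ComPtr;

// Hypothetical helper: record one thread's share of the frame.
void RecordWork(ID3D12Device* device,
                ComPtr<ID3D12CommandAllocator>& alloc,
                ComPtr<ID3D12GraphicsCommandList>& list) {
    // One allocator + list per thread, so recording needs no synchronisation.
    device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                   IID_PPV_ARGS(&alloc));
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                              alloc.Get(), nullptr, IID_PPV_ARGS(&list));
    // ... record this thread's draw calls and state changes here ...
    list->Close();
}

// Hypothetical helper: fan out recording across threads, then submit once.
void SubmitFrame(ID3D12Device* device, ID3D12CommandQueue* queue, int numThreads) {
    std::vector<ComPtr<ID3D12CommandAllocator>> allocs(numThreads);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(numThreads);
    std::vector<std::thread> workers;
    for (int i = 0; i < numThreads; ++i)
        workers.emplace_back(RecordWork, device,
                             std::ref(allocs[i]), std::ref(lists[i]));
    for (auto& t : workers) t.join();

    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}
```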
 
Tbh it looks to me like the final "full" DX12 specification (12_1) wasn't finalised until really late in the development cycle, so the GPU vendors kind of gambled on what was going to be needed: AMD on resource binding, and Nvidia on conservative rasterisation + rasterizer-ordered views.

Luck wasn't in AMD's favor this time around.


That being said we are still a LONG way off games being developed to make use of most of this stuff.
 
It might be true that the 12_1 level was added late, but it's only a good thing it made it into the spec. It doesn't matter who gets it first, as long as every feature that is actually beneficial gets integrated into DX12. I'm sure there's tech from all 3 hardware vendors in DX12.

If you miss a feature now, it will most likely be there in your next-gen card.
 
Per one of the posters "Dave D" in the comments, this is basically all you need to know:



More cr*p from tech 'journalists' that don't even know the shader and texture management units on GCN architecture exist in a FIXED ratio. When your technical knowledge is this poor, anything you have to say will just be PR dribble.

Let me make it EASY. DX12 and Vulkan are about doing lots of WHOLLY INDEPENDENT things AT THE SAME TIME under the explicit control of the application. Nothing else - nothing else whatsoever.

Other so-called DX12 stuff, promoted by an insanely DISHONEST Nvidia, is about 'features' that could just as well exist in a prehistoric API like DX11.

Game engine coders DO NOT WANT more weird and freaky cr*p from Nvidia and AMD (like Nvidia's impossibly slow ability to have shaders sample into the MSAA buffers). No, they want EXACTLY the same things as DX9-DX11 already give, but with the ability to explicitly issue independent command chains to independent control units on the GPU. AMD currently has lots of control units on its GPUs that can each work on completely unique tasks. Nvidia only has REPLICATING control units, that can break down a SINGLE TASK and set that task working in parallel on multiple GPU blocks.

The best way to think about the needs of DX12 is to consider your CPU. If you are a gamer, your CPU will have 4+ CORES, and the cores are defined as entities that can process wholly unique and independent instruction chains. When using the 4 cores of your CPU, for instance, you do NOT have to find algorithms that break down into 4 IDENTICAL threads running simultaneously on each core (as is the case with Maxwell and Kepler). NO - you can have 4 UNIQUE and unconnected algorithms running at the same time, one per core (as is the case with AMD's GCN design).

Every tech site will be using the above analogy to 'explain' DX12 when Pascal arrives, and Nvidia finally joins AMD in having a TRUE DX12 architecture. Just as every tech site said 2-core CPUs were a joke when only AMD made them, but all agreed 2-core CPUs were the best thing since sliced bread when Intel finally got round to offering the same thing.

But until Pascal arrives, sites like this will carry on (very profitably) parroting Nvidia FUD, hoping you are too ill-informed to notice.
 
What he's saying is true if you take out the clear bias lol. DX12 is all about removing the magic middleware smoke screen and giving developers the ability to control resources directly, rather than only hinting to the driver about what they want done. Unfortunately, as consumers most people don't really care about this, so marketing such a thing becomes tricky without a visual element to really get people interested.
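To put the "independent command chains" point in concrete terms, here's a minimal sketch (assuming a device already exists; standard D3D12 calls only) of the explicit queues DX12 exposes, with cross-queue synchronisation left entirely to the application:

```cpp
// DX12 exposes separate queue types, and work on each is scheduled
// independently by the GPU -- the "wholly independent command chains"
// idea from the comment above.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& directQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue) {
    D3D12_COMMAND_QUEUE_DESC direct = {};
    direct.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // graphics + compute + copy
    device->CreateCommandQueue(&direct, IID_PPV_ARGS(&directQueue));

    D3D12_COMMAND_QUEUE_DESC compute = {};
    compute.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE; // compute + copy only
    device->CreateCommandQueue(&compute, IID_PPV_ARGS(&computeQueue));

    // Cross-queue ordering is the application's job: Signal an ID3D12Fence
    // on one queue and Wait on it from the other. Nothing is implicit.
}
```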
 