
Nvidia and AMD DX12 Feature Levels?

The extreme tessellation reminds me of Crysis 2 (or was it 3?), where they even had massive tessellation on concrete blocks and were rendering water even though it wasn't in the scene. Stupid crap to make one company look good when it really wasn't valid.

It reminds me of when the Nvidia FX series couldn't run FP32 precision very fast, so they cheated by forcing FP16 instead. FP32 wasn't really necessary then either, but who wants to buy a card that achieves the same level of performance as its competitor by cutting corners?

Even if high levels of tessellation are overkill now, they won't always be.

AMD could have doubled tessellation performance in hardware on the 390X simply by updating it to GCN 1.2 (still **** but better than 2011 levels). Their accepted driver cheats meant they had no incentive to actually improve the hardware, so you all just got a glorified rebadge.
 
It reminds me of when the Nvidia FX series couldn't run FP32 precision very fast, so they cheated by forcing FP16 instead. FP32 wasn't really necessary then either, but who wants to buy a card that achieves the same level of performance as its competitor by cutting corners?

Even if high levels of tessellation are overkill now, they won't always be.

By that time we will have new GPU architectures.

I use more vertices and polys than most, simply because I'm trying to push hardware tech to its limits in anticipation of DX12, and I run an AMD GPU. Despite all that, I'm not hitting the tessellation limits. Tessellation is high because of the number of 3D objects I use to bring the draw calls up and make my maps look fuller, but it's still nowhere near high enough to bother the 290's tessellation engine. If I wanted to bother it I would have to start rendering multiple vertices and polys per pixel, and one would only do that to deliberately grind the GPU down.
 
The invisible water would be culled before it hit the GPU.
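
To illustrate what "culled before it hit the GPU" means: engines typically test each object's bounds against the camera frustum (and often occlusion data) on the CPU and skip the draw calls for anything that fails. A purely hypothetical sketch, with made-up types and a made-up bounds member, of the frustum part of that test:

[code]
// Hypothetical sketch of CPU-side frustum culling: objects whose bounding
// spheres fall entirely outside any of the six camera planes are never
// submitted to the GPU at all.
#include <vector>

struct Plane  { float nx, ny, nz, d; };   // plane n.p + d = 0, normal points inward
struct Sphere { float x, y, z, radius; }; // world-space bounding sphere

// An object is culled if it lies completely on the negative side of any plane.
bool IsVisible(const Sphere& s, const Plane (&frustum)[6])
{
    for (const Plane& p : frustum) {
        float dist = p.nx * s.x + p.ny * s.y + p.nz * s.z + p.d;
        if (dist < -s.radius)
            return false; // fully outside this plane -> cull
    }
    return true; // inside or intersecting every plane
}

// Only objects that pass the test generate draw calls; geometry the engine
// decides is not visible never reaches the GPU.
template <typename Object, typename DrawFn>
void SubmitVisible(const std::vector<Object>& scene,
                   const Plane (&frustum)[6], DrawFn draw)
{
    for (const Object& obj : scene)
        if (IsVisible(obj.bounds, frustum)) // 'bounds' is a hypothetical member
            draw(obj);
}
[/code]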

It's ironic that people bring up a game from 2011, as it seems AMD have not moved their tessellation performance on since that time. :p

That is where AMD should have learned and provided better tessellation support so as not to get caught out again. Seems they didn't :( I hope the same doesn't happen with DX12, with AMD or Nvidia crying foul play when DX12 features are used that one or the other can't run.

They used such and such just to cripple performance on such and such hardware... I can see it now.

HairWorks.

Actually, I played with the tessellation slider on the Fury X (CCC) and it really didn't make that much difference to performance. I was expecting huge gains but it didn't happen.
 
That is where AMD should have learned and provided better tessellation support so as not to get caught out again. Seems they didn't :( I hope the same doesn't happen with DX12, with AMD or Nvidia crying foul play when DX12 features are used that one or the other can't run.

They used such and such just to cripple performance on such and such hardware... I can see it now.
It would start doing real damage to developers if they go on releasing games that run like a bucket of rusty nails (on all hardware).

The problem with GameWorks is that what developers get is a black box of assets made by Nvidia, which they then copy and paste where needed. If there is anything wrong with that asset, silly amounts of useless wasted tessellation for example, they have to rely on Nvidia to fix it.

So it's up to game developers to argue it out with Nvidia or just not use GameWorks assets, and despite this it's the game developer who gets blamed for the problems.
Something will have to give.

Actually, I played with the tessellation slider on the Fury X (CCC) and it really didn't make that much difference to performance. I was expecting huge gains but it didn't happen.
That's because tessellation on the Fury X is already fairly high; reducing it is much more effective on older GCN hardware.
 
I am not getting into the semantics of GameWorks and black boxes; it has been done to death, with AMD employees retracting statements. I am happy to carry on with DX12 though :)
 
Nvidia's GeForce 900 series is no longer the only hardware supporting DirectX 12 feature level 12_1. The biggest surprise is that Intel has now joined Nvidia: Skylake supports DirectX 12 feature level 12_1, leaving AMD behind in the dust with only feature level 12_0.

The interesting thing with Skylake is that it fully supports Conservative Rasterization Tier 3 while Nvidia only supports Tier 1, and Skylake supports other DirectX 12 features that neither Nvidia nor AMD do: Cross Adapter Row Major Texture, VP And RT Array Index From Any Shader FRSWGSE, PS Specified Stencil Ref, UMA and Cache Coherent UMA.

http://www.pcgameshardware.de/Core-...ylake-Test-Core-i7-6700K-i5-6600K-1166741/#a3

Skylake also supports DirectX 12 Resource Binding Tier 3, the same as AMD, while the Nvidia GeForce 900 series supports Tier 2.
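
For reference, all of these per-feature tiers can be queried from the D3D12 runtime itself; caps charts like the one linked above are presumably built from exactly this kind of query. A rough, illustrative sketch using ID3D12Device::CheckFeatureSupport (error handling trimmed, not taken from any particular tool):

[code]
// Illustrative only: query the per-feature tiers discussed above.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    // Create a device on the default adapter at the minimum D3D12 level.
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    // Conservative rasterization, resource binding, tiled resources, ROVs,
    // PS-specified stencil ref and cross-adapter row-major textures all live
    // in the general options structure.
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &opts, sizeof(opts));

    std::printf("Conservative rasterization tier: %d\n", opts.ConservativeRasterizationTier);
    std::printf("Resource binding tier:           %d\n", opts.ResourceBindingTier);
    std::printf("Tiled resources tier:            %d\n", opts.TiledResourcesTier);
    std::printf("ROVs supported:                  %d\n", opts.ROVsSupported);
    std::printf("PS specified stencil ref:        %d\n", opts.PSSpecifiedStencilRefSupported);
    std::printf("Cross-adapter row-major texture: %d\n", opts.CrossAdapterRowMajorTextureSupported);

    // UMA and cache-coherent UMA are reported per GPU node in the
    // architecture query.
    D3D12_FEATURE_DATA_ARCHITECTURE arch = {};
    arch.NodeIndex = 0;
    device->CheckFeatureSupport(D3D12_FEATURE_ARCHITECTURE, &arch, sizeof(arch));
    std::printf("UMA: %d, cache-coherent UMA: %d\n", arch.UMA, arch.CacheCoherentUMA);
    return 0;
}
[/code]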
 
It reminds me of when the Nvidia FX series couldn't run FP32 precision very fast, so they cheated by forcing FP16 instead. FP32 wasn't really necessary then either, but who wants to buy a card that achieves the same level of performance as its competitor by cutting corners?

Even if high levels of tessellation are overkill now, they won't always be.

AMD could have doubled tessellation performance in hardware on the 390X simply by updating it to GCN 1.2 (still **** but better than 2011 levels). Their accepted driver cheats meant they had no incentive to actually improve the hardware, so you all just got a glorified rebadge.

Still spouting your tired, sad rhetoric about driver cheats. It's an option in the control panel that can be totally turned off, so how is that a "cheat" exactly? It's a user-controlled option, so deal with it.
 
Nvidia's GeForce 900 series is no longer the only hardware supporting DirectX 12 feature level 12_1. The biggest surprise is that Intel has now joined Nvidia: Skylake supports DirectX 12 feature level 12_1, leaving AMD behind in the dust with only feature level 12_0.

The interesting thing with Skylake is that it fully supports Conservative Rasterization Tier 3 while Nvidia only supports Tier 1, and Skylake supports other DirectX 12 features that neither Nvidia nor AMD do: Cross Adapter Row Major Texture, VP And RT Array Index From Any Shader FRSWGSE, PS Specified Stencil Ref, UMA and Cache Coherent UMA.

http://www.pcgameshardware.de/Core-...ylake-Test-Core-i7-6700K-i5-6600K-1166741/#a3

Skylake also supports DirectX 12 Resource Binding Tier 3, the same as AMD, while the Nvidia GeForce 900 series supports Tier 2.

By the looks of that chart Intel have AMD and Nvidia completely slammed :D

All hail Intel, I'm off to buy one of their GPUs...
 
People are still point-scoring about DX12 feature level support. Wow. Because, as we all know, games are so quick at supporting all the features of a new API... oh wait.
 
By the looks of that chart Intel have AMD and Nvidia completely slammed :D

All hail Intel, I'm off to buy one of their GPUs...
I'll sell you one, 4790K chucked in, all on the same package. Hope you've got very, very good cooling. :D
 
I'm looking to go Intel soon ^^^^^

People are still point-scoring about DX12 feature level support. Wow. Because, as we all know, games are so quick at supporting all the features of a new API... oh wait.

The whole DX12.1 thing is a joke; there is actually no such thing as far as MS are concerned. Nvidia support Conservative Rasterization while AMD don't, so Nvidia call that DX12.1.

It's a bit like AMD calling their cards DX12.2 because they have Tiled Resources Tier 3 and Specified Stencil support while Nvidia don't. Ridiculous DX12 one-upmanship.
 
Wonder how long it will be before we even see any DX12 games. I'd hazard a guess that we will all be on newer cards than we have now by the time any sizeable number are out.
 
I'm looking to go Intel soon ^^^^^



The whole DX12.1 thing is a joke; there is actually no such thing as far as MS are concerned. Nvidia support Conservative Rasterization while AMD don't, so Nvidia call that DX12.1.

It's a bit like AMD calling their cards DX12.2 because they have Tiled Resources Tier 3 and Specified Stencil support while Nvidia don't. Ridiculous DX12 one-upmanship.

Actually, AMD support Resource Binding Tier 3 and Nvidia support Tiled Resources Tier 3; they are two different things and you are obviously getting confused.

And yes, MS do refer to feature level 12_1 within Windows (not 12.1).
 
Wonder how long it will be before we even see any DX12 games. I'd hazard a guess that we will all be on newer cards than we have now by the time any sizeable number are out.

Yup, but as usual people on here are making an issue out of it, like the first round of DX12 games will exploit every feature it has.
 
The whole DX12.1 thing is a joke; there is actually no such thing as far as MS are concerned. Nvidia support Conservative Rasterization while AMD don't, so Nvidia call that DX12.1.

You're talking out of your bottom again, Humbug.

Microsoft have published the hardware feature levels here:
https://msdn.microsoft.com/en-us/library/windows/desktop/mt186615(v=vs.85).aspx

It's a bit like AMD calling their cards DX12.2 because they have Tiled Resources Tier 3 and Specified Stencil support while Nvidia don't. Ridiculous DX12 one-upmanship.

AMD can call it whatever they want, it doesn't change the fact that the hardware feature levels are set by Microsoft and can be seen detailed in the link I posted above.

Tier 3 Resource Binding is not needed for either 12_0 or 12_1; Conservative Rasterization and Rasterizer Ordered Views, which AMD are lacking, are required for 12_1.
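
Worth noting that an application does not have to piece the feature level together from individual caps either; the runtime reports the highest level it supports directly. A minimal, illustrative sketch (assuming a device created as in the earlier snippet):

[code]
// Illustrative only: ask the runtime for the highest Microsoft-defined
// feature level (11_0 ... 12_1) the device supports.
#include <d3d12.h>

D3D_FEATURE_LEVEL QueryMaxFeatureLevel(ID3D12Device* device)
{
    // The runtime picks the highest level it recognises from this list.
    static const D3D_FEATURE_LEVEL requested[] = {
        D3D_FEATURE_LEVEL_12_1, D3D_FEATURE_LEVEL_12_0,
        D3D_FEATURE_LEVEL_11_1, D3D_FEATURE_LEVEL_11_0,
    };

    D3D12_FEATURE_DATA_FEATURE_LEVELS levels = {};
    levels.NumFeatureLevels        = sizeof(requested) / sizeof(requested[0]);
    levels.pFeatureLevelsRequested = requested;

    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS,
                                              &levels, sizeof(levels))))
        return levels.MaxSupportedFeatureLevel;

    // Device creation already required 11_0, so fall back to that.
    return D3D_FEATURE_LEVEL_11_0;
}
[/code]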
 
Nvidia's GeForce 900 series is no longer the only hardware supporting DirectX 12 feature level 12_1. The biggest surprise is that Intel has now joined Nvidia: Skylake supports DirectX 12 feature level 12_1, leaving AMD behind in the dust with only feature level 12_0.

The interesting thing with Skylake is that it fully supports Conservative Rasterization Tier 3 while Nvidia only supports Tier 1, and Skylake supports other DirectX 12 features that neither Nvidia nor AMD do: Cross Adapter Row Major Texture, VP And RT Array Index From Any Shader FRSWGSE, PS Specified Stencil Ref, UMA and Cache Coherent UMA.

http://www.pcgameshardware.de/Core-...ylake-Test-Core-i7-6700K-i5-6600K-1166741/#a3

Skylake also supports DirectX 12 Resource Binding Tier 3, the same as AMD, while the Nvidia GeForce 900 series supports Tier 2.

Good luck to ya gaming on your Skylake iGPU setup! All the best!
 
Sounds like an e-peen thing, this. What AMD supports that Nvidia does not, what Nvidia supports that AMD does not... YAWN...

However, Humbug has made some very, very valid points about GameWorks in this thread! Everything else about DX12 support is nonsense!
 
Wonder how long it will be before we even see any DX12 games. I'd hazard a guess that we will all be on newer cards than we have now by the time any sizeable number are out.

Probably not that long before we start seeing some games, but a "sizeable number"? Yeah, late 2016 probably.
 