Nvidia and AMD DX12 Feature Levels?

As has been said, it is all irrelevant at the moment.

Also, haven't we had several versions of that chart from different websites, all saying different things? I get the feeling that these sites are not exactly sure who actually supports what. Not that it matters anyway.
 
Paid Nvidia lobbying groups.
They are rampant on many forums, especially here.

Now that's just being silly. This forum is decent and just a tiny minority ruin it. I am sure by the time we have a plethora of games, AMD will have a new GPU out that can cope with all the settings needed.
 
Now that's just being silly. This forum is decent and just a tiny minority ruin it. I am sure by the time we have a plethora of games, BOTH SIDES will have new GPUs out that can cope with all the settings needed.

Fixed that for you, Greg. :)
 
It's not a moot point at all; Nvidia's GameWorks is a broad programme intended to speed up the adoption of Nvidia GPU software and technologies. GameWorks will be sure to expose any features that AMD are lacking, the way it has exposed AMD's weak tessellation.

AMD's tessellation isn't weak; it is a bit weaker when pushed to absolute extremes, but in doing that Nvidia are also strangling performance on their own GPUs.
 

AMD's tessellation is weak and has been as far back as the 5xxx series. It is something that they know but never seem to improve on.
 

At 64x, worst case, it amounts to about a 10% in-game performance difference. 64x is three or four times higher than any developer would normally use; normal is 8x to 16x, and anything over that makes no visual difference but strangles performance on all GPUs that have to run it, including Nvidia's.
If I was an Nvidia owner I would be far more concerned about this practice, as I'm suffering from it as much as AMD users.

It's far from weak, it's just weaker; under normal circumstances it's far more than strong enough.

AMD are more powerful in other areas.

Exploiting a comparative weakness is actually not a good thing, because your own users suffer.
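[To put rough numbers on the factor argument above, here is a back-of-envelope sketch only; the N² triangle count assumes a triangular patch domain with integer partitioning, which is an assumption for illustration, not vendor data:]

```python
# Back-of-envelope only: with integer partitioning, a triangular patch
# at tessellation factor N is split into roughly N^2 small triangles.
for factor in (8, 16, 64):
    tris_per_patch = factor ** 2
    print(f"{factor:>2}x -> ~{tris_per_patch} triangles per patch")

# 64x versus a typical 16x: ~16 times the geometry work for every GPU
# that has to run it, with no visible gain past ~1 polygon per pixel.
print(f"geometry multiplier: {64 ** 2 / 16 ** 2:.0f}x")
```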
 

Is 60 fps enough for gaming?
 
It's not a moot point at all; Nvidia's GameWorks is a broad programme intended to speed up the adoption of Nvidia GPU software and technologies. GameWorks will be sure to expose any features that AMD are lacking, the way it has exposed AMD's weak tessellation.

It is when the previous generation of consoles supported, at most, DX9-like features, and we saw hardly any adoption of DX10; it took years for DX11 to become important.

Yet we still had AMD and Nvidia trying to push their features during that time.

By the time DX12 becomes important we will probably be well into the lifespan of Pascal and Arctic Islands.
 
At 64x, worst case, it amounts to about a 10% in-game performance difference. 64x is three or four times higher than any developer would normally use; normal is 8x to 16x, and anything over that makes no visual difference but strangles performance on all GPUs that have to run it, including Nvidia's.
If I was an Nvidia owner I would be far more concerned about this practice, as I'm suffering from it as much as AMD users.

It's far from weak, it's just weaker; under normal circumstances it's far more than strong enough.

AMD are more powerful in other areas.

Exploiting a comparative weakness is actually not a good thing, because your own users suffer.

As shown by your own chart, AMD have poor tessellation performance, and it shows where they are weak. I'm not trying to get into an argument, and sure, AMD are better at some things than others, but when you look at game/benchmark performance it starts to show where AMD need to improve. There is a tessellation slider in the CCC for AMD users to lower quality, and it is there for a reason.

Edit:

Just to say, I don't feel the performance was that bad on a Fury X in TW3, which uses a lot of tessellation in HairWorks, so they have indeed improved. I could run with everything on at 1440p with a single TX, but sadly couldn't do the same with a Fury X and needed to lower settings to get playable performance.

DX12 will be the benchmark for both AMD and Nvidia, and we are still some time off from knowing what difference ROVs etc. will make.
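[For what it's worth, the override that CCC slider applies boils down to clamping the tessellation factor an application requests. A minimal sketch of the idea, with hypothetical function and parameter names; this is not AMD's actual driver code:]

```python
def clamp_tess_factor(app_factor: float, user_cap: float = 16.0) -> float:
    """Hypothetical illustration of what a driver-side tessellation
    override amounts to: honour the application's requested factor
    only up to a user-chosen cap."""
    return min(app_factor, user_cap)

print(clamp_tess_factor(64.0))  # a title asking for 64x runs at 16x instead
print(clamp_tess_factor(8.0))   # requests under the cap pass through unchanged
```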
 
HairWorks in TW3 is very over-tessellated, and still looks like straw; read on...

As shown by your own chart, AMD have poor tessellation performance, and it shows where they are weak. I'm not trying to get into an argument, and sure, AMD are better at some things than others, but when you look at game/benchmark performance it starts to show where AMD need to improve. There is a tessellation slider in the CCC for AMD users to lower quality, and it is there for a reason.


It looks very much like you are trying to get into an argument; you're ignoring what I'm saying, or simply fail to grasp it. Whatever it is, you keep repeating the same crap despite what I'm telling you.

AMD's tessellation is not "poor" and it is not "weak"; it has far more performance than is required by any game developed using normal tessellation parameters. Put another way, AMD's tessellation limits are never reached in any game that hasn't deliberately and artificially had polygons and vertices added beyond the pixel density. You cannot see more than one polygon per pixel, so the only reason to do that is to slow the GPU artificially, and it has the same effect on Nvidia GPUs, to a slightly lesser extent.

Probably one of many reasons why GameWorks games run like crap on both Nvidia and AMD GPUs.

No developer working independently would ever use tessellation to such extremes; it would be an indictment of his own work and the pride he takes in it, given how badly it would run (like most GameWorks titles).

Developers have far more room than they need to play with in AMD's tessellation performance; therefore it is not "poor" or "weak".
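[The "one polygon per pixel" ceiling is easy to sanity-check with simple arithmetic; the visible patch count below is purely an assumed figure for illustration:]

```python
# Illustrative only: once visible triangles outnumber screen pixels,
# extra tessellation cannot add detail you can actually see.
pixels = 2560 * 1440        # ~3.7M pixels at 1440p
visible_patches = 2_000     # assumed scene load, purely illustrative
for factor in (16, 64):
    tris = visible_patches * factor ** 2
    print(f"{factor}x: ~{tris / 1e6:.1f}M triangles vs {pixels / 1e6:.1f}M pixels")
```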
 
Humbug, before Fury X launched you were confidently predicting that tessellation performance would be better than Nvidia has with the Titan X; now that it's apparent Fury X is barely any better than an R9 285, we're back to it not being necessary.

The fact is, if AMD's tessellation performance wasn't still stuck in 2012, AMD wouldn't be complaining about "over-tessellation" and cheating in their drivers.
 
The extreme tessellation reminds me of Crysis 2 (or was it 3?), where they even had massive tessellation on concrete blocks and were rendering water even though it wasn't in the scene. Stupid crap to make one company look good when it really wasn't valid.
 
Humbug, before Fury X launched you were confidently predicting that tessellation performance would be better than Nvidia has with the Titan X; now that it's apparent Fury X is barely any better than an R9 285, we're back to it not being necessary.

The fact is, if AMD's tessellation performance wasn't still stuck in 2012, AMD wouldn't be complaining about "over-tessellation" and cheating in their drivers.

You just made that up.

The extreme tessellation reminds me of Crysis 2 (or was it 3?), where they even had massive tessellation on concrete blocks and were rendering water even though it wasn't in the scene. Stupid crap to make one company look good when it really wasn't valid.


To be fair, CryEngine has an ocean stitched into the map layer; it stretches from Y to Z to X... and runs under every landmass. It can be turned off, but it's often used for rivers and other water bodies.

Having said that, I'm not completely sure the option to turn it off was in the Crysis 2 engine; I didn't start using CryEngine until 3.5. Crysis 3 was done on 3.4 and Crysis 2 on 3.1.
We are now on 3.8, which is also capable of generating separate water volumes independent of the ocean.
 
The invisible water would be culled before it hit the GPU.

It's ironic that people bring up a game from 2011, as it seems AMD have not moved their tessellation performance on since that time. :p
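[For context, "culled before it hit the GPU" usually means a test like the sketch below; this is illustrative Python, not CryEngine or driver code. Note that frustum culling only removes out-of-view geometry; water that sits inside the view but hidden under terrain would need occlusion culling, a separate mechanism.]

```python
def outside_plane(box_min, box_max, plane):
    """True if the bounding box lies entirely on the negative side of
    the plane (plane given as (nx, ny, nz, d), with n.p + d >= 0
    meaning "inside")."""
    nx, ny, nz, d = plane
    # Test the box corner farthest along the plane normal (the p-vertex).
    px = box_max[0] if nx >= 0 else box_min[0]
    py = box_max[1] if ny >= 0 else box_min[1]
    pz = box_max[2] if nz >= 0 else box_min[2]
    return nx * px + ny * py + nz * pz + d < 0

def frustum_culled(box_min, box_max, planes):
    # Culled if the box lies entirely outside any one frustum plane.
    return any(outside_plane(box_min, box_max, p) for p in planes)
```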
 
The invisible water would be culled before it hit the GPU.

It's ironic that people bring up a game from 2011, as it seems AMD have not moved their tessellation performance on since that time. :p

I'm talking about dirty tactics that bring no visible benefit but heavily impact one vendor. I'm sure there are more recent examples.
 