Caporegime · Joined: 18 Oct 2002 · Posts: 33,188
Errm, that's exactly what AMD are doing with their tessellation override; the fact that users can't see the difference with their own eyes is irrelevant. I can't tell the difference between high and ultra textures in games, but do I want a driver setting that ignores my request for ultra textures and forces high instead? Hell no.
If a game calls for level 32 tessellation and people are using AMD drivers to force 8 or 16, they are not rendering the same scene as Nvidia and not seeing the game as the developer intended, simple as that. AMD can make as many excuses as they want, but the fact is that if their tessellation performance were good enough, they wouldn't have gone anywhere near such a thing.
Another very fundamental thing you're completely overlooking is that with a black-box API like GameWorks, where devs are offered it cheaply without source code or expensively with it... you can't claim that AMD and Nvidia are producing the same IQ. Nvidia control the code, and there is absolutely no IQ improvement going from 8x or 16x tessellation to 64x. Inside that black box, all they would have to do is include code that says: if an AMD card is present, use 64x tessellation; if an Nvidia card is present, use 8x. Something like the sketch below.
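Purely to illustrate how trivial that would be inside closed code — this is a hypothetical sketch, not actual GameWorks source — detecting the GPU vendor is a couple of lines via DXGI on Windows:

```cpp
#include <dxgi.h>
#include <cstdio>
#pragma comment(lib, "dxgi.lib") // MSVC: link against dxgi.lib

int main() {
    IDXGIFactory* factory = nullptr;
    if (FAILED(CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory)))
        return 1;

    float tessFactor = 16.0f; // fallback default (illustrative value)

    IDXGIAdapter* adapter = nullptr;
    if (SUCCEEDED(factory->EnumAdapters(0, &adapter))) {
        DXGI_ADAPTER_DESC desc;
        adapter->GetDesc(&desc);
        // PCI vendor IDs: 0x10DE = NVIDIA, 0x1002 = AMD.
        // Hypothetical branch of the kind described above:
        if (desc.VendorId == 0x1002)
            tessFactor = 64.0f; // cripple the competitor's hardware
        else if (desc.VendorId == 0x10DE)
            tessFactor = 8.0f;  // favour your own hardware
        adapter->Release();
    }

    printf("tessellation factor: %.0f\n", tessFactor);
    factory->Release();
    return 0;
}
```

The point isn't that this specific code exists anywhere; it's that inside a binary nobody outside Nvidia can inspect, nothing rules it out.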
Again, it's very simple: draw a triangle and keep subdividing it into two smaller triangles... can you do this infinitely, or does it effectively become a line? If you start with a very small triangle to begin with, do you think there is much benefit in doing it more than 4 or 8 times? The arithmetic below makes the point.
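A quick back-of-the-envelope sketch (the 64-pixel patch edge is an assumed, illustrative number): a tessellation factor of N splits a patch edge into roughly N segments, so triangle size shrinks fast as the factor climbs.

```cpp
#include <cstdio>

int main() {
    // Assume a patch edge spanning ~64 pixels on screen (illustrative).
    const double edgePixels = 64.0;

    // A tessellation factor of N splits the edge into roughly N segments.
    for (int factor = 1; factor <= 64; factor *= 2)
        printf("factor %2d -> triangle edges ~%5.2f px\n",
               factor, edgePixels / factor);
    return 0;
}
```

By factor 8 the triangles are already only a few pixels across; by 64 they're around a pixel or smaller, which is pure rasteriser overhead with nothing left for the eye to see — and smaller patches hit that wall even sooner.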
You claim Nvidia are running a higher level and AMD are overriding this. For one thing, the override is NOT enabled by default, meaning they aren't cheating at all: by default they run whatever absurd level Nvidia's code tells AMD hardware to run. You don't know what level Nvidia is running, because it's their code. Fundamentally though, 64x tessellation gives ZERO visual benefit, only a performance hit, so there are two possibilities: either Nvidia are hurting their own users' performance on purpose to get them to upgrade hardware, or, in their own code or in their drivers, they optimise for the best compromise between performance and IQ.
Can you say for certain that Nvidia are running a higher level of tessellation than AMD? Nvidia could have a different path in the code for their own hardware, or could be doing precisely what AMD do in their drivers, just on as standard rather than something the user has to enable. Conceptually that override is nothing more than a clamp, as sketched below.
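For what it's worth, the override being argued about boils down to capping the factor the application asks for — a minimal sketch with made-up function and parameter names, not any real driver's internals:

```cpp
#include <algorithm>
#include <cstdio>

// Hypothetical driver-side override: clamp the tessellation factor the
// application (or middleware) requests to a user-chosen limit.
float applyTessellationCap(float requestedFactor, float capFactor) {
    return std::min(requestedFactor, capFactor);
}

int main() {
    // A level-64 request rendered under a cap of 16 (the kind of limit
    // AMD's driver slider exposes) comes out as 16:
    printf("%.0f\n", applyTessellationCap(64.0f, 16.0f));
    return 0;
}
```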
If Nvidia aren't doing one of those two things, they are simply incompetent. There are no two ways about this, no "but" or "if" about it: over-tessellation beyond the point of visible difference hurts performance with not a single benefit to the end user. So Nvidia are either not doing it, doing it on purpose, or so incompetent that they're doing it by accident... and it's 100% not the latter.