
CDPR on TW3 HairWorks (Nvidia GameWorks): 'the code of this feature cannot be optimized for AMD'

Quick test running around Novigrad: no difference for me on the 1.03 patch and the TW3 game-ready driver on a 970.

Pop-in/shimmering seemed a little less prevalent, but that could just be down to the part of town I was in.

Edit: Maybe it's a Kepler-specific thing?
 
Someone here confirmed that GameWorks trashes performance even with its settings disabled.

Remove the files, get a 10 fps boost. Put them back, FPS takes a dive. Pretty much confirms my suspicions that GameWorks contains "stub" code with a logic bomb designed to sabotage AMD cards and n-1 GeForces.

It's not completely tinfoil:


Premise 1 - most games are developed for consoles first.
Premise 2 - console hardware/GPU performance is essentially fixed for the console's lifetime.
Premise 3 - a console's lifespan traditionally covers 2+ GPU generations.

Possible outcome
How do PC GPU manufacturers sell the value in their new products? If existing products are nerfed somewhat by software design and questionable 'ultra' presets are introduced... yes, that would help.
 
It didn't make any difference for me on an AMD card.....


Just ran through the same area I use to test this with, on an AMD 5850, with and without the mentioned files (all settings minimum @ 1080p), and it had no effect. So it seems to only be an issue on my 780. Gonna dig deeper into this. Got nothing better to do anyway :P
 
I'd just like to point out that after doing some reading, and being spurred on by a user on OCN, I asked a visual artist who has a considerable background in DirectX programming. The reason I asked him is that tessellation factors and control point counts are actually controlled in the DX pipeline by the developer, NOT by NVIDIA - a common misconception behind all the baseless tinfoil. The funny thing is this is actually pretty fundamental to how the API operates, and to me it just spells out a clear lack of understanding of what the GW libraries actually do and are for - they are compiled libraries of technologies supplied by NVIDIA. It is up to the developer to control the factors.

Tessellation is very much driven by the source data, the required tessellation method and the level of fidelity (tessellation factor) required. Rather than sending geometry to the card as primitives (e.g. triangles) it is sent as patches (either tri/quad patches or isolines). To manage the entire tessellation process in hardware it was necessary to introduce additional stages to the pipeline; namely the hull shader, tessellator stage and the domain shader stage.

The hull shader is programmable and takes the incoming patch (tri, quad or line) and produces a corresponding geometry patch along with patch constants. This is then passed to the tessellator stage which takes the 'context' (domain) of the geometry patch and samples it to break it down into a higher density object structure (triangles, lines or points), connecting all these samples. Each sample in that domain is then passed to the domain shader and this is used to calculate a vertex position for that sample (i.e. the newly generated vertex position for the higher-detailed resulting geometry). The vertex can then be passed to the pixel shader as it would for a non-tessellated scenario. (This is a bit simplified but hopefully gives the general idea...)
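
To make those stages concrete, here is a minimal sketch in C++ with the shader code embedded as an HLSL string. It is a bare-bones pass-through triangle-patch example I put together for illustration (the names CtrlPoint, ConstHS, HS, DS and gTessFactor are my own, not anything from TW3 or GameWorks): the hull shader emits control points and patch constants, the fixed-function tessellator is configured by the attributes and SV_TessFactor outputs, and the domain shader turns each generated sample into a vertex.

// Minimal sketch, assuming D3D11 and shader model 5.0 with triangle patches.
#include <d3dcompiler.h>   // link against d3dcompiler.lib
#include <wrl/client.h>
#include <cstring>
using Microsoft::WRL::ComPtr;

static const char* kTessHlsl = R"(
cbuffer TessParams : register(b0) { float gTessFactor; float3 pad; };

struct CtrlPoint { float3 pos : POSITION; };

struct PatchConst {
    float edges[3] : SV_TessFactor;        // how finely each patch edge is subdivided
    float inside   : SV_InsideTessFactor;  // interior subdivision
};

// Patch-constant function: this is where the developer-chosen factor is applied.
PatchConst ConstHS(InputPatch<CtrlPoint, 3> p) {
    PatchConst pc;
    pc.edges[0] = pc.edges[1] = pc.edges[2] = gTessFactor;
    pc.inside = gTessFactor;
    return pc;
}

[domain("tri")]
[partitioning("fractional_odd")]
[outputtopology("triangle_cw")]
[outputcontrolpoints(3)]
[patchconstantfunc("ConstHS")]
CtrlPoint HS(InputPatch<CtrlPoint, 3> p, uint i : SV_OutputControlPointID) {
    return p[i];   // pass the low-order control points straight through
}

// Domain shader: runs once per sample the tessellator generates and turns the
// barycentric SV_DomainLocation into an actual vertex position.
[domain("tri")]
float4 DS(PatchConst pc, float3 bary : SV_DomainLocation,
          const OutputPatch<CtrlPoint, 3> tri) : SV_POSITION {
    float3 p = bary.x * tri[0].pos + bary.y * tri[1].pos + bary.z * tri[2].pos;
    return float4(p, 1.0f);
}
)";

// Compile the two programmable tessellation stages (error handling omitted).
inline void CompileTessStages(ComPtr<ID3DBlob>& hsBlob, ComPtr<ID3DBlob>& dsBlob) {
    D3DCompile(kTessHlsl, std::strlen(kTessHlsl), nullptr, nullptr, nullptr,
               "HS", "hs_5_0", 0, 0, hsBlob.GetAddressOf(), nullptr);
    D3DCompile(kTessHlsl, std::strlen(kTessHlsl), nullptr, nullptr, nullptr,
               "DS", "ds_5_0", 0, 0, dsBlob.GetAddressOf(), nullptr);
}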

The amount of tessellation is determined by the developer (and may vary depending on target system performance, etc.). They effectively specify how much tessellation they want and then kick the process off. The low-order surface (patch data) of the source geometry is sent to the hardware along with values stating the level of tessellation desired and whether the hardware is to deal with quad, tri or line data. Once that has been passed to the hardware, the GPU wholly takes over and calculates the subdivided mesh along with additional control points, patch data, etc. The number of control points generated per patch is determined by the domain type and topology (again, quad, tri, etc.) and the number of patches that need to be processed.
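
And the app side of that hand-off, again as a hypothetical D3D11 sketch rather than anything CDPR or GameWorks actually ships (DrawTessellated, TessParams and the distance-based factor formula are made up for illustration): the application picks the tessellation factor, uploads it, switches the input assembler to control-point patch data, binds the hull/domain shaders, and from there the GPU performs the subdivision.

// Hypothetical D3D11 app-side setup: the developer, not the GPU vendor,
// chooses the tessellation factor and hands low-order patch data to the hardware.
#include <d3d11.h>

struct TessParams { float tessFactor; float pad[3]; };  // 16-byte-aligned cbuffer layout

void DrawTessellated(ID3D11DeviceContext* ctx,
                     ID3D11HullShader* hs, ID3D11DomainShader* ds,
                     ID3D11Buffer* tessCb, float distanceToCamera,
                     UINT controlPointCount)
{
    // Developer-chosen level of detail, e.g. scaled down with distance.
    // 64 is the D3D11 maximum tessellation factor, 1 means no subdivision.
    float f = 64.0f / (1.0f + distanceToCamera);
    if (f < 1.0f)  f = 1.0f;
    if (f > 64.0f) f = 64.0f;

    TessParams params = {};
    params.tessFactor = f;
    ctx->UpdateSubresource(tessCb, 0, nullptr, &params, 0, 0);
    ctx->HSSetConstantBuffers(0, 1, &tessCb);

    // Geometry goes in as low-order patches (3 control points per triangle patch),
    // not as pre-subdivided triangles; the tessellator expands it on the GPU.
    ctx->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_3_CONTROL_POINT_PATCHLIST);
    ctx->HSSetShader(hs, nullptr, 0);
    ctx->DSSetShader(ds, nullptr, 0);

    ctx->Draw(controlPointCount, 0);
}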
 
It's hard to fully compare, but by renaming that file I think I saw a slight drop in GPU usage, which in my case does help my GTX 470 avoid hitting 100% at my current settings and then dropping a few frames.
 