
Nvidia’s GameWorks program usurps power from developers, end-users, and AMD

I've asked Joel to sign up and he has done so, but his posts are awaiting moderation, it seems. Come on, dons/staff members, pull your fingers out. :p

The lengths you'll go to at times, Matt, to try and gain a "win" are slightly worrying.

Once a few people in this thread started questioning his credibility because he was saying things they didn't like, I thought it would be the sensible option to allow him a reply to the OcUK 'forum experts'. It was not my place to try and defend him or argue the points made, as I lack the knowledge, as do many here. Joel, on the other hand, does not: he is not biased, works with both companies and is a tech journalist. He is much more qualified and can clear up any questions far more easily.

The whole "locking down PhysX" approach that Nvidia pulled when an AMD GPU is detected, despite an Nvidia card being present as a dedicated PhysX card, is already enough cause for concern about how the "closed library" will be handled (or abused). Anyone who says they're not worried and that it's not going to be an issue (even with the PhysX lockdown approach considered)... I don't know if they're naive, or have blind faith in Nvidia...
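For what it's worth, that kind of vendor check is trivial to implement. Here's a minimal sketch of my own (not Nvidia's actual driver code) showing how any Windows program or driver could enumerate GPUs through DXGI and spot a mixed AMD + Nvidia system, which is all the information a lockout like that needs; the vendor IDs are the standard PCI ones:

```cpp
// Hypothetical illustration only -- NOT Nvidia's actual lockout code.
// Enumerates GPUs via DXGI and reports whether adapters from both
// vendors are present, which is all a driver needs to gate a feature.
#include <windows.h>
#include <dxgi.h>
#include <cstdio>
#include <set>
#pragma comment(lib, "dxgi.lib")

int main() {
    IDXGIFactory* factory = nullptr;
    if (FAILED(CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory)))
        return 1;

    std::set<UINT> vendors;
    IDXGIAdapter* adapter = nullptr;
    for (UINT i = 0; factory->EnumAdapters(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC desc;
        adapter->GetDesc(&desc);
        vendors.insert(desc.VendorId); // 0x10DE = Nvidia, 0x1002 = AMD/ATI, 0x8086 = Intel
        adapter->Release();
    }
    factory->Release();

    // A driver could refuse to expose GPU PhysX on exactly this condition:
    bool amdPresent = vendors.count(0x1002) != 0;
    bool nvPresent  = vendors.count(0x10DE) != 0;
    std::printf("Mixed AMD + Nvidia system: %s\n", (amdPresent && nvPresent) ? "yes" : "no");
    return 0;
}
```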

Looking at Tommy's and pgi's results, something is not right. As I said before, it was explained away as being 'perfectly normal', but you should never see a 660 beating a 7950 Boost or a 770 besting a 290X. I still think there's more to come from this.
 
8x MSAA, this time with a fairly easy OC to hit...

[screenshot: 8xmsaaOC.jpg]

1189/6600.
 
GameWorks is not worrying because it directly gives NV a performance advantage in AO. GameWorks is worrying because the long-term impact of giving one company control over another company's performance is intrinsically problematic.

GameWorks is risky because we've already seen what happens when one company can use a software position to reinforce a consumer perception of substandard hardware. By the time we had proof that Intel's compiler refused to optimize for AMD CPUs, even though AMD had paid Intel for the right to use those optimizations, the damage was done.

That needs to not happen again.

The thing I find funny is that you can basically replace the word GameWorks with Mantle in both those quotes and they would still be perfectly applicable; the only difference is they would get you flamed ^^

I'm not saying either is good, I think both cases suck. I just think it's funny that when Nvidia do something it's bad because they're evil, but when AMD do it then it's great because they're the underdogs and stuff :P
 
The thing I find funny is that you can basically replace the word GameWorks with Mantle in both those quotes and they would still be perfectly applicable; the only difference is they would get you flamed ^^

I'm not saying either is good, I think both cases suck. I just think it's funny that when Nvidia do something it's bad because they're evil, but when AMD do it then it's great because they're the underdogs and stuff :P

This is the difference.

Mantle: Optimized for AMD. Does not prevent Nvidia from optimizing its drivers for DX11 games.

GameWorks: Optimized for Nvidia. Prevents AMD from optimizing its drivers for DX11 games.

If you do not understand how these things are different when it's broken down in that fashion, I do not know how to explain it to you. GameWorks prevents AMD from ensuring games run well on AMD hardware. Mantle does NOT prevent NV from optimizing games for NV hardware.
 
For what it's worth, the game doesn't scale very well in SLI either: around 50% scaling over two cards at 1189/6600 (105 fps average on one card vs 155 on two, i.e. roughly a 48% gain).

I think this says more about the game/engine being a bit ropey than it does about drivers and lockouts.

Personal thoughts are: far more is being read into this than is probably required. The game engine isn't perfect; WBM took over development from Rocksteady for Origins, and perhaps that's showing. Prior to Origins, the only other game they had developed was another Batman game for the Wii U. Having only formed in 2010, perhaps they've just made a rookie mistake in developing an engine they didn't create...

Edit: Origins was actually their maiden game; the Wii U version was delayed alongside the PC version, releasing after the console versions. Reading further into it suggests even the console chaps had a hard time with performance on this title.
 
Looking at Tommy's and pgi's results, something is not right. As I said before, it was explained away as being 'perfectly normal', but you should never see a 660 beating a 7950 Boost or a 770 besting a 290X. I still think there's more to come from this.

But none of that proves that it is GameWorks that is responsible; the article correctly points out that GameWorks features such as HBAO+ or PCSS show the same performance hit when enabled on both sets of hardware.

The article states that these being "black box" libraries prevents AMD from optimising for them; however, the article also states that HBAO+ started off with a bigger hit in Ghosts on AMD, and that AMD released a driver update that fixed this.

AMD can run their own drivers in debug mode and see every command being sent to them and then on to the GPU itself. They can intercept these calls and change the way the drivers interact with the GPU; they did this in the case of HBAO+ in Ghosts, and they can do it for any Nvidia library that runs on AMD hardware.
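To picture the interception idea, here's a toy sketch of my own; real drivers hook D3D entry points rather than a little table like this, but the principle is the same: every call funnels through function pointers the vendor owns, so they can log what a closed library asks for and reroute a slow path to a tuned one without ever seeing the caller's source:

```cpp
#include <cstdio>
#include <functional>

// Toy "driver dispatch table": every command from the game/middleware
// funnels through these pointers, so the vendor controls both sides.
struct DriverDispatch {
    std::function<void(int)> drawAO; // e.g. an ambient-occlusion pass
};

void genericAO(int quality) { std::printf("generic AO path, quality %d\n", quality); }
void tunedAO(int quality)   { std::printf("hand-tuned AO path, quality %d\n", quality); }

int main() {
    DriverDispatch table{ genericAO };

    // The closed library calls into the driver; the driver can log the call...
    table.drawAO = [](int q) { std::printf("[driver log] drawAO(%d)\n", q); genericAO(q); };
    table.drawAO(3);

    // ...and once the pattern is understood, reroute to an optimized path.
    table.drawAO = tunedAO;
    table.drawAO(3);
    return 0;
}
```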

The article started off by saying that the devs couldn't see the source code; this has now been proven not to be true.

The only thing that remains is that, for some unknown reason unrelated to GameWorks, WB decided not to include some AMD-provided code that would improve the game's performance. But that performance is in no way related to the game's use of GW, because disabling GW doesn't improve the game's performance any more on AMD than it does on NV hardware.
 
So FXAA, which was created by a guy at nVidia, is not so good on AMD. I wonder what MLAA would look like on my Titan, and what performance a 680, for example, would get with this DirectCompute AA version? MSAA seems to be good with both to me.

Are AMD missing out on TXAA? "Yep." Are AMD missing out on PhysX? "Yep." Are AMD missing out on GameWorks? "Yep." Are nVidia missing out on Mantle? "Yep."

Sorry if I am missing something, but no matter which way it is buttered up, nVidia would need to work with AMD to get Mantle, and AMD would need to work with nVidia to get PhysX (which was offered in exactly the same way as Mantle was, back in 2008).

http://www.techpowerup.com/64787/ra...ffered-to-help-us-expected-more-from-amd.html

In a dramatic turnaround of events, NGOHQ.com, the creators of a special system software that allowed users of ATI Radeon graphics accelerators to use proprietary features of NVIDIA graphics accelerators, such as the GPU-accelerated version of the NVIDIA PhysX game physics API, claim that NVIDIA in fact wanted to help them with this effort. On June 26th we had covered reports of the said outfit improvising a driver after proving that NVIDIA's proprietary GPGPU architecture, CUDA, was flexible enough to work on an ATI RV670 graphics processor.

http://www.bit-tech.net/custompc/news/602205/nvidia-offers-physx-support-to-amd--ati.html

Nvidia’s director of product PR for EMEA and India, Luciano Alibrandi, told Custom PC that ‘We are committed to an open PhysX platform that encourages innovation and participation,’ and added that Nvidia would be ‘open to talking with any GPU vendor about support for their architecture.’

So yes, AMD could have had PhysX, and we could all be playing games with fantastic effects like water, fire, cloth etc., but ATI decided not to bother. Now it's an issue not getting PhysX and GameWorks? Please...
 
Having read this thread without commenting for a bit, it seems like there's a degree of ignorance taking place: no matter what is said, viewpoints don't alter, and the same things get regurgitated as tangents to the fresh points being made. This is disappointing and doesn't reflect well on this forum.

I know what my stance is on the matter (non-story, and I don't really care even if AMD can't optimise, as their performance is acceptable), but my own points got ignored and "countered" with things I didn't even originally talk about. The quality of debate is really poor at times, and I find it shocking that there are such ingrained views that don't factor in anything else being said.
 
This is the difference.

Mantle: Optimized for AMD. Does not prevent Nvidia from optimizing its drivers for DX11 games.

GameWorks: Optimized for Nvidia. Prevents AMD from optimizing its drivers for DX11 games.

If you do not understand how these things are different when it's broken down in that fashion, I do not know how to explain it to you. GameWorks prevents AMD from ensuring games run well on AMD hardware. Mantle does NOT prevent NV from optimizing games for NV hardware.

That's a difference in approach, not effect.

Mantle: The game will run better on AMD hardware than on equal-power Nvidia hardware.

GameWorks: The game will run better on Nvidia hardware than on equal-power AMD hardware.
 
Your article is saying that AMD have no ability to look at these calls and then rewrite their drivers to suit. Would this be easier with the source code? Yes. Essential? No.

AMD cannot effectively or simply patch the driver. I believe it is still possible to eke out some gains by careful output analysis combined with hand-tuning the driver in assembler. The fact that you're tuning in *assembler* rather than HLSL is part of the problem here.
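One concrete form of that hand-tuning is shader substitution: the driver fingerprints the bytecode blob a game submits and, if it recognizes it, swaps in a replacement tuned offline at the assembly level. Here is a minimal sketch of the bookkeeping involved (the names and the FNV-1a hash choice are my own illustration, not AMD's actual mechanism):

```cpp
#include <cstdint>
#include <cstdio>
#include <map>
#include <vector>

// FNV-1a over the shader bytecode: enough to fingerprint a known blob.
static uint64_t fnv1a(const std::vector<uint8_t>& bytes) {
    uint64_t h = 0xcbf29ce484222325ull;
    for (uint8_t b : bytes) { h ^= b; h *= 0x100000001b3ull; }
    return h;
}

// Hash of the game's shader -> hand-tuned replacement bytecode.
static std::map<uint64_t, std::vector<uint8_t>> g_replacements;

// What a driver's shader-creation path could do before compiling to the GPU ISA.
std::vector<uint8_t> selectShader(const std::vector<uint8_t>& gameBlob) {
    auto it = g_replacements.find(fnv1a(gameBlob));
    if (it != g_replacements.end()) {
        std::printf("substituting hand-tuned shader\n");
        return it->second;  // tuned offline, at the assembly level
    }
    return gameBlob;        // unknown shader: pass through untouched
}

int main() {
    std::vector<uint8_t> slowShader = {0xDE, 0xAD, 0xBE, 0xEF};
    g_replacements[fnv1a(slowShader)] = {0xCA, 0xFE}; // pretend-optimized blob
    selectShader(slowShader);
    return 0;
}
```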


Again, as you point out, if you turn off the GameWorks features and the performance increase/decrease is the same, how can you draw the conclusion that it is a GameWorks feature that is causing the problem?

Because under the old system, both companies get access to the HLSL code for fine-tuning. Let me ask you this: Why should the penalty for HBAO+, or god rays, or soft shadows all be assumed to be equal across an NV card vs. an AMD card? After all, the two use very different architectures. In fact, we know the penalties aren't always identical. BioShock Infinite takes a heavier hit on GCN chips when the "Alternate DOF" field is used.

A scenario in which Nvidia and AMD cannot optimize a driver in the same fashion for the same API is a tilted playing field, even if the impact of the feature is identical. Nvidia can optimize. AMD cannot.

I see this as Nvidia being damned if they do (PhysX, TXAA) and damned if they don't (HBAO+, PCSS).

I don't. I'm a fan of PhysX and TXAA both. I used TXAA on my own playthrough of BAO.

Other GameWorks titles don't show the same bias. You even mention that the HBAO+ implementation in Ghosts started off poorly for AMD users, but AMD have since released drivers that correct this, something you say is impossible in your own article.

Blacklist actually BSODs on the R9 290X and doesn't detect the GPU as a supported model until you patch the game. But yes -- with fully updated drivers and patched-in support, the game does run more evenly. This does not change the fundamental state of the uneven playing field.

My conclusion is that GameWorks gives Nvidia control over AMD's performance in a manner that's ripe for abuse. It leaves AMD reliant on either creating additional software frameworks and convincing game companies to support dual code paths (by spending an equal amount of money to pay for them), or attempting to patch game drivers by snooping the code it can read from close analysis of the GW library.

The fact that no *overt* abuse has occurred to date does not mean the system cannot be abused.

Your data and info don't seem to match up with your conclusion. GameWorks enabled = same performance hit on both sets of hardware; conclusion: Nvidia GameWorks is making AO run worse on AMD.
Ghosts HBAO+ driver patched by AMD; conclusion: HBAO+ prevents AMD from releasing driver patches.

No. Conclusion: GameWorks makes it far more difficult (nearly impossible in some cases) for AMD to improve their performance in a title. Exactly which combination of vendor vs. developer patches improves perf in Blacklist, I can't say, because the game is in such terrible shape on AMD hardware at version 1.0.
 
That's a difference in approach, not effect.

Mantle: The game will run better on AMD hardware than on equal-power Nvidia hardware.

GameWorks: The game will run better on Nvidia hardware than on equal-power AMD hardware.

He's saying Mantle does not gimp Nvidia in DirectX, while GameWorks does gimp AMD in DirectX.

Mantle does not interfere with Nvidia's performance in normal, non-Mantle rendering. GameWorks does interfere with AMD's performance outside of that.

Through GameWorks, Nvidia controls AMD's performance. It's the same as if AMD controlled Nvidia's performance; that would be great, right?
 
Such as the MSAA improvements at driver level factoring into the large performance jumps. You're saying that the performance increase is brute-force levelling, but driver optimisation for the 290X is still very much in its infancy.

A 35% performance improvement to multisampling alone with driver optimisation suggests there were a lot of optimisations missing outside of these alleged vendor-controlled libraries.

GameWorks is not a monolithic program. If you agree to use NV's HBAO+ library, you use that library. You don't use the library for soft shadows. If you use six GW libraries, then those functions and libraries are the ones AMD can't optimize.

MSAA is not included in any of the GW libraries, which means AMD can still optimize for it. Tessellation is not included in the GW libraries either, which means AMD can still optimize for *that*, provided the developer is willing to work with the company.

Blacklist, as far as I can tell, only uses the HBAO+ library. Arkham Origins uses multiple libraries. Assassin's Creed IV uses some GW libraries that AO doesn't.

So on this basis there would be no optimisations at all within GameWorks for AMD hardware. Do you not think that is a little far-fetched?

Nvidia is playing smart with this at introduction. They are not overtly penalizing AMD hardware beyond the general removal of AMD's ability to optimize code in the usual fashion. Furthermore, optimization != won't run. NV isn't going to create a GW library whose DX11 code won't run on AMD hardware, because that flings the door wide open for lawsuits.

Also brings into question what WB refused to implement from AMD... I think, given the situation regarding other GW titles, it makes more sense to conclude it is probably an internal squabble between WB and AMD. [snip] So basically you're suggesting WB simply said "no." Not sure if you've worked for a sizeable firm before, but that sort of manner isn't really advisable. Feeding people hearsay is bad practice also... Being frank, I think the way you've written parts of the article is deliberately misleading, which is why I find it difficult to take it seriously.

Before you conclude that I've been deliberately misleading, ask yourself this: Which would've been easier? A hatchet piece that relied on published benchmarks from other websites done using early versions of software that clearly showed massive disparities, or a story that wove together the danger of the GW program in principle, using a clearly identified game as an example of how vendor optimizations can be used to create unfair situations?

If I wanted to do a hatchet job on Nvidia, the answer is: "The first." And as for taking my word on certain things, here's what I can tell you. I've been writing professionally since 2001. This isn't my hobby, it's my job. I don't have my own site; I'm not Anand, but I take my responsibilities to convey accurate, factual information to readers just as seriously as any major site owner.

You are free to disagree with my interpretation of the data, or my weighting of the facts. You are free to conclude that you don't know me, and hey, this is all just hearsay.

So just watch the playing field. Watch and see what happens with future game releases. If I'm wrong, and everything is Business As Usual, then nothing will ever happen. But wrong or right, you have my word -- everything I've written reflects my absolute best understanding of the situation. Nothing has been cherry-picked to create a misleading interpretation of the data.
 
Matt, the article itself shows that AMD released a driver update for Ghosts that fixed HBAO+, so a GW feature does not in any way prevent AMD from optimising their drivers for GW features.

It simply isn't true.

The driver AMD released for Blacklist optimizes MSAA, not HBAO.

Blacklist's HBAO is an NVIDIA-proprietary algorithm that only runs on their hardware (locked code).
 
My conclusion is that GameWorks gives Nvidia control over AMD's performance in a manner that's ripe for abuse. It leaves AMD reliant on either creating additional software frameworks and convincing game companies to support dual code paths (by spending an equal amount of money to pay for them), or attempting to patch game drivers by snooping the code it can read from close analysis of the GW library.

The fact that no *overt* abuse has occurred to date does not mean the system cannot be abused.

So they could potentially do this and they wouldn't end up with another lawsuit if they did?

Do any of the GameWorks effects like Flex/God rays/FlameWorks work on AMD cards?
 
The driver AMD released for Blacklist optimizes MSAA, not HBAO.

Blacklist's HBAO is an NVIDIA-proprietary algorithm that only runs on their hardware (locked code).

Bzzzt, not according to the article:

Blacklist appears to only use GameWorks libraries for its HBAO+ implementation, and early benchmarks of this last game showed a distinct advantage for Nvidia hardware when running in that mode. Later driver updates and a massive set of game patches appear to have resolved these issues.

If HBAO+ doesn't run on AMD hardware, then how can it affect AMD performance? The simple answer is that it can't.

The article says that GameWorks means that devs and AMD CANNOT improve performance, yet clearly, between the two of them, they did.


I also don't believe for a single second that AMD only have ASSEMBLER tools for their own drivers; that would be ridiculous. If you write the drivers in C++ then you can log every single call that is made IN TO your own software from outside; if you did C++ at college then you should know this. Even if the tools didn't already exist within C++ (which they do), you would only have to add a line of code to every function to tell it to log every time it was called, what arguments were passed to it, and any other info you wanted to log.

If you log that a particular exe or lib calls into your exe and asks it to perform X then Y then Z, and you see that this is causing a performance issue, you can capture that and tell your own application to perform A > B > C instead.

Is it a bit more work than telling the developer to instigate an "IF AMD then ABC" instead of you doing it in your drivers? Yes. Is it impossible? No.

To say that AMD have no ability to modify their own drivers, or to see what it is that the game is asking their own drivers to do, is utter nonsense.
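To make that concrete, here's a minimal sketch of the "one line per function" logging described above (the driver entry-point names are stand-ins of my own, not real driver symbols):

```cpp
#include <cstdio>

// One line at the top of every exported function is enough to record
// who asked for what -- no source code from the caller required.
#define LOG_CALL(fmt, ...) \
    std::printf("[trace] %s(" fmt ")\n", __func__, __VA_ARGS__)

// A couple of stand-in "driver entry points" (hypothetical names):
void SetRenderState(int state, int value) {
    LOG_CALL("state=%d, value=%d", state, value);
    // ... real work here ...
}

void DrawIndexed(int indexCount) {
    LOG_CALL("indexCount=%d", indexCount);
    // ... real work here ...
}

int main() {
    // Whatever a closed library calls shows up in the trace:
    SetRenderState(7, 1);
    DrawIndexed(36000);
    return 0;
}
```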

In response to "Nvidia are playing smart with this at launch"... what happened with Tomb Raider again? Oh yeah: the developer/AMD made a load of changes to TressFX in TR *right* before launch, meaning that all the launch reviews showed a GTX 680 matching a 7870, and Nvidia then had to rush out a few patches to get back that performance.
 
So they could potentially do this and they wouldn't end up with another lawsuit if they did?

Do any of the GameWorks effects like Flex/God rays/FlameWorks work on AMD cards?

They should all work, but they will only have been QA'd against nVidia cards. Not sure at what level Flex ties into PhysX, so there might be some potential issues there, but IIRC it has its own specialised solver that's standalone from PhysX.
 