Nvidia’s GameWorks program usurps power from developers, end-users, and AMD

Done...

[Attached screenshot: 8xMSAA.jpg]
 
I find the min fps takes a dive on the black-screen transition when it changes from the outdoor to the indoor scene :| (it never drops below 150 during the actual scenes, according to Fraps). Not sure if it's a driver thing.

Not even gonna post my FXAA numbers on this GHz edition; it would make too many people cry, heh.
 
re:Andybird

I'm going to answer this slightly out of order.

You mention in the article that turning off each of the GameWorks features resulted in the same improvement to fps, so surely it isn't the GameWorks libraries that are directly responsible for the performance issues?

Correct. There are two separate issues in play here. Let me break them down for you like this:

GameWorks is not worrying because it directly gives NV a performance advantage in AO. GameWorks is worrying because the long-term impact of giving one company control over another company's performance is intrinsically problematic.

This is why I bring up the Intel/AMD compiler issues from nearly a decade ago. It's an example of one company (Intel) exploiting a software program to strategically block its competitor from performing at full potential. Back then, everyone shrugged and said: "Gosh, AMD's SSE2 implementation must suck."

My concern regarding GameWorks has nothing to do with Nvidia. I'd be just as concerned if this was an Intel or an AMD program. I believe strongly that each company needs to be in control of its own optimization process, including the ability to work with developers and improve underlying software.

In this context, Arkham Origins is an example of how a developer can refuse to work with a vendor and, in so doing, create a tilted playing field. End-users will look at the situation and say: "AMD's drivers STILL suck. Great hardware, but you can't trust it."

That statement doesn't really describe the situation -- but it *does* create a perceptual advantage that favors one company over another.

Also, lots of games use external libraries from lots of sources. Why is it that you say AMD are unable to do ANY optimisation on the Nvidia ones, yet they seem able to optimise for other 3rd-party libraries?

I am not enough of a programmer to speak to the entirety of all game development everywhere. But I'll tell you my best understanding of this process:

Games may use many outside libraries, but not all those libraries are tied to GPU functions. Of the libraries that are tied to GPU functions, AMD and Nvidia may, in some cases, work directly with third-party vendors to optimize the library for their respective hardware.

Nvidia has stated that it does work with developers to optimize libraries like GameWorks for specific games, and that developers can see the code under license. Exactly what the terms of those licenses are, I don't know. What I do know is that the library, in its end form, is a compiled DLL that is closed source as far as AMD is concerned. (My story will be updated to reflect this point as soon as I hit "Post" on this response.)

AMD can see when the library is called. AMD can see the library's output. But without code access, AMD can't see what the library is doing. This makes optimization somewhere between extremely difficult and impossible -- the library is a black-box function.
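To make the black-box point concrete, here is a rough sketch of what a closed library looks like from the outside. The DLL name and the function signature below are invented for illustration -- they are not actual GameWorks exports -- but the shape of the problem is the same: you can load the library, call its entry point, and inspect the result, and that's all.

Code:
#include <windows.h>
#include <iostream>

// Hypothetical signature for a closed library's AO entry point.
// Invented for illustration; the real GameWorks interfaces are under license.
typedef int (*RenderAOFn)(void* sceneData, float radius);

int main() {
    // You can observe that the library gets loaded...
    HMODULE lib = LoadLibraryA("GFSDK_Example_AO.dll");  // placeholder name
    if (!lib) { std::cerr << "could not load DLL\n"; return 1; }

    // ...you can observe which export gets called and with what arguments...
    auto renderAO = reinterpret_cast<RenderAOFn>(
        GetProcAddress(lib, "RenderAmbientOcclusion"));
    if (!renderAO) { std::cerr << "export not found\n"; FreeLibrary(lib); return 1; }

    // ...and you can observe whatever it hands back.
    int result = renderAO(nullptr, 1.0f);
    std::cout << "library returned " << result << "\n";

    // What you cannot observe, without the source, is anything that happened
    // between the call and the return.
    FreeLibrary(lib);
    return 0;
}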

You raise the question of how GameWorks pulls power from developers. Developers may be able to see the code within the library (if they are licensed to do so). They may be able to do some optimizations on the library for their own title. But they cannot optimize the library for an AMD GPU, because that's not their job or their expertise. And similarly, they almost certainly can't share any GW code with AMD or give hints on how to optimize.

Some have argued that AMD can get around this issue by creating an equivalent program in which AMD pays developers (either in cash or with expertise) and creates its own library program. That way, a developer building a game would have two sets of libraries: Call them GFSDK_SoftShadow and GCNSDK_SoftShadow. That's a fair point, as far as it goes -- yes, AMD could do this, and yes, the end result would be a "balanced" playing field of sorts, provided that AMD could afford to match Nvidia game-for-game and developer-for-developer.
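As a rough sketch of what that two-library world might look like from a developer's side -- the DLL names are the hypothetical ones above, and the only real details are the standard PCI vendor IDs -- the engine would simply pick a soft-shadow library per vendor at startup:

Code:
#include <windows.h>
#include <iostream>

// Purely illustrative: choose a vendor-specific soft-shadow DLL at startup.
// GFSDK_SoftShadow / GCNSDK_SoftShadow are the hypothetical names from the
// paragraph above, not real SDK binaries.
HMODULE LoadSoftShadowLibrary(unsigned vendorId) {
    const char* dll =
        (vendorId == 0x10DE) ? "GFSDK_SoftShadow.dll"   // Nvidia PCI vendor ID
      : (vendorId == 0x1002) ? "GCNSDK_SoftShadow.dll"  // AMD PCI vendor ID
      : nullptr;
    return dll ? LoadLibraryA(dll) : nullptr;
}

int main() {
    // Example: a machine reporting Nvidia's vendor ID.
    HMODULE shadows = LoadSoftShadowLibrary(0x10DE);
    std::cout << (shadows ? "vendor library loaded\n" : "no vendor library found\n");
    return 0;
}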

My point here is that I do not feel this benefits the game developer or the end-user as a whole.

With any library, each vendor can see what commands are being sent from DirectX to the drivers and then rewrite the drivers to reinterpret what happens on the GPU. This happens all the time with 3rd-party libraries -- my understanding is that TressFX is a library, for example. So why is it that AMD cannot do any optimisation at all on the GameWorks functions that work on AMD cards? Do you actually have any direct experience of coding, or are you relying on what AMD have told you on this?

The source code for TressFX 2.0 is available for download, including the shaders.

I had some basic and intermediate courses in C++ in college and have done some game modding in the past but would not call myself either a programmer or game developer. In researching this story, I talked to multiple sources from multiple companies, both to flesh out my understanding of the role of libraries and GPU optimizations for gaming in general, and to explore the issues at play here in particular.

I can't say more than that.

The last point I'd make is that this isn't about Nvidia vs. AMD to me. Nvidia makes a damn good graphics card. They've done a tremendous amount of good for gaming in the past 13 years. Nvidia pioneered GPGPU work. They built the first mainstream programmable GPU. PhysX and 3D Vision might be niche features, but I have enjoyed PhysX in every (good) game it shipped with, I've liked 3D Vision, and my significant other uses an NV card + 3D-capable monitor because she likes watching movies in 3D.

In the 12 years since I regretfully gave up 3dfx, I've been an NV customer in my own personal box more than an AMD customer, partly *because* they offered better driver profiles and even the third-party tools were stronger. If I had to buy a brand-new GPU tomorrow, I'm not sure which company I'd pick.

GameWorks is risky because we've already seen what happens when one company can use a software position to reinforce a consumer perception of substandard hardware. By the time we had proof that Intel's compiler refused to optimize for AMD CPUs, even though AMD had paid Intel for the right to use those optimizations, the damage was done.

That needs to not happen again.
 
@ DigiHound

The issue for me is that if I pay a developer for a product, I don't expect that product to penalise me because of what brand of hardware I have.

I expect the same level of support as other people paying the same money for the same product.
 
Glad to see it cleared up that the devs (if licensed) can see what the code is, as it was suggested otherwise here. Interesting point you make that either side could have its own library if it chose to invest and do so.
Not really sure it's up to the game devs to optimise for either side. Really, IMO, that's up to whoever makes the hardware you're running, and I appreciate that for my Nvidia system they are doing so, and I'm annoyed that for my AMD system AMD chose not to.
Thanks also for joining up and expressing your opinions first-hand.
 
Not really sure it's up to the game devs to optimise for either side. Really, IMO, that's up to whoever makes the hardware you're running, and I appreciate that for my Nvidia system they are doing so, and I'm annoyed that for my AMD system AMD chose not to.
Just to clarify on this point: it's not that AMD chose not to. It's been highlighted that AMD offered dev support and submitted code which was, for unknown reasons, rejected by the developer. It has also been covered that AMD has worked within its drivers to improve performance on the finished game build, such as the MSAA performance.

What is being highlighted is that the intricacies of the GameWorks library are obscured to AMD and not nVidia (naturally, as they are the creators, and presumably it is implemented in a performance-efficient way, as would traditionally be the case for either GPU company if their help/input was taken up by the devs).
This potentially makes it that much harder for AMD to extract performance, and it could also limit the achievable optimisation, regardless of whether there is any intent for it to do so. The problem then is that if enough games use GameWorks, and to a greater extent, small efficiency differences could add up to something more significant, and under such circumstances they would be unresolvable in full -- especially if the dev does not care to take the extra time and work to add alternate code from AMD support, because of course the game will still run.
 
I'm curious about the mention of encrypted DLLs. It's not abnormal in game development to have DLLs the dev can't see into, but it's a different story if nVidia has taken steps to even obfuscate debugging via symbols, stack profiling, etc.
 
Rroff, that was a typo on my part. I've just corrected the post. The DLLs are not encrypted. That doesn't mean AMD can just dump them and optimize accordingly, but they aren't encrypted.

(DLLs included as part of driver stacks and designed to be loaded JIT can be encrypted, but that's a different story).
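For what it's worth, "not encrypted" just means the module can at least be inspected: you can list what it exports, the same way dumpbin /exports would. Here's a quick sketch of doing that by hand -- the DLL name is a placeholder, substitute anything on your system -- and note that the export names tell you what exists, not how any of it is implemented:

Code:
#include <windows.h>
#include <iostream>

// Sketch: list the exported function names of an unencrypted DLL.
// "SomeMiddleware.dll" is a placeholder; substitute any DLL you have.
int main() {
    HMODULE mod = LoadLibraryA("SomeMiddleware.dll");
    if (!mod) { std::cerr << "could not load DLL\n"; return 1; }

    auto base = reinterpret_cast<BYTE*>(mod);
    auto dos  = reinterpret_cast<IMAGE_DOS_HEADER*>(base);
    auto nt   = reinterpret_cast<IMAGE_NT_HEADERS*>(base + dos->e_lfanew);
    auto& dir = nt->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_EXPORT];
    if (!dir.VirtualAddress) { std::cerr << "no export table\n"; return 1; }

    auto exports = reinterpret_cast<IMAGE_EXPORT_DIRECTORY*>(base + dir.VirtualAddress);
    auto names   = reinterpret_cast<DWORD*>(base + exports->AddressOfNames);

    // Enumerating the export table shows *what* the DLL exposes...
    for (DWORD i = 0; i < exports->NumberOfNames; ++i)
        std::cout << reinterpret_cast<const char*>(base + names[i]) << "\n";
    // ...but nothing here tells you *how* any of it is implemented.

    FreeLibrary(mod);
    return 0;
}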
 
Some have argued that AMD can get around this issue by creating an equivalent program in which AMD pays developers (either in cash or with expertise) and creates its own library program. That way, a developer building a game would have two sets of libraries: Call them GFSDK_SoftShadow and GCNSDK_SoftShadow. That's a fair point, as far as it goes -- yes, AMD could do this, and yes, the end result would be a "balanced" playing field of sorts, provided that AMD could afford to match Nvidia game-for-game and developer-for-developer.

From the blog's author ^^
Yes, the AMD code sent in was declined for one game, for whatever reason (which none of us know).
I haven't seen any other games using GameWorks where this is also said to be the case.
I assumed this debate is about GameWorks overall, as the thread indicates, not just one game.
 
Yes, the AMD code sent in was declined for one game, for whatever reason (which none of us know).
I haven't seen any other games using GameWorks where this is also said to be the case.
I assumed this debate is about GameWorks overall, as the thread indicates, not just one game.

This is the part I'm most interested in / possibly concerned about. One of the biggest sources of possible less-than-optimal performance on AMD cards in AO is tessellation, which I can't see being tied deeply into GameWorks, yet they still seem to have turned the optimisation down. Without knowing more about that aspect I'm a bit cautious of getting too involved with the rest of the subject.
 
I don't blame people for being skeptical on this one. It's inside baseball. It's he-said/she-said. And even if that wasn't the case, BAO is still one game. If it was so one-sided that it only ran on Nvidia graphics cards, it would still be just one game.

I understand that view. Obviously I disagreed with it ;), but I understand it.
 
As I mentioned, the whole point of drivers is that they intercept the commands from DX and then tell the GPU what to do. Your article is saying that AMD have no ability to look at these calls and then rewrite their drivers to suit. Would this be easier with the source code? Yes. Essential? No.
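(To be clear about what I mean by drivers reinterpreting calls: conceptually it's just a per-application profile -- the driver recognises the title and switches on whatever substitutions it has for it. The toy lookup below is not anyone's actual driver code, and the executable names and flags are made up; it's only meant to show the shape of the idea.)

Code:
#include <iostream>
#include <string>
#include <unordered_map>

// Toy model of a per-application driver profile. Real drivers do something far
// more involved, but the principle is the same: recognise the title, then apply
// title-specific substitutions without ever touching the game's own code.
struct Profile {
    bool swapAOShader;     // substitute a hand-tuned ambient occlusion shader
    bool capTessellation;  // clamp tessellation factors for this title
};

int main() {
    const std::unordered_map<std::string, Profile> profiles = {
        {"SomeGame.exe",      {true, true}},   // made-up entries for illustration
        {"SomeOtherGame.exe", {true, false}},
    };

    std::string exe = "SomeGame.exe";          // a real driver gets this from the OS
    auto it = profiles.find(exe);
    if (it != profiles.end())
        std::cout << exe << ": AO swap=" << it->second.swapAOShader
                  << ", tess cap=" << it->second.capTessellation << "\n";
    else
        std::cout << exe << ": no profile, default path\n";
    return 0;
}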

Again, as you point out, if you turn off the GameWorks features and the performance increase/decrease is the same, how can you draw the conclusion that it is a GameWorks feature that is causing the problem?

I see this as Nvidia being damned if they do (PhysX, TXAA) and damned if they don't (HBAO+, PCSS).

Other GameWorks titles don't show the same bias. You even mention that the HBAO implementation in Ghosts started off poorly for AMD users, but AMD have since released drivers that correct this -- something you say is impossible in your own article.

Your data and info don't seem to match up with your conclusion. GameWorks enabled = the same performance hit on both sets of hardware; conclusion: Nvidia GameWorks is making AO run worse on AMD. Ghosts' HBAO+ gets driver-patched by AMD; conclusion: HBAO+ prevents AMD from releasing driver patches.

I agree that devs deliberately making their game run badly on one vendor's hardware is a bad thing, and gamers should boycott games that do so. However, as for laying the blame entirely at GameWorks' feet, I don't think the data really supports that.
 
I don't blame people for being skeptical on this one. It's inside baseball. It's he-said/she-said. And even if that wasn't the case, BAO is still one game. If it was so one-sided that it only ran on Nvidia graphics cards, it would still be just one game.

I understand that view. Obviously I disagreed with it ;), but I understand it.

I can understand why you've posted it, but it isn't much to go on, and there are a lot of points that don't add up or that you don't take into account -- such as the MSAA improvements at driver level factoring into the large performance jumps. You're saying that the performance increase is brute-force levelling, but driver optimisation for the 290X is still very much in its infancy. A 35% performance improvement to multisampling alone via driver optimisation suggests there were a lot of optimisations missing outside of these alleged vendor-controlled libraries. So it's fair to assume other aspects are also not up to scratch.


You raise the question of how GameWorks pulls power from developers. Developers may be able to see the code within the library (if they are licensed to do so). They may be able to do some optimizations on the library for their own title. But they cannot optimize the library for an AMD GPU, because that's not their job or their expertise. And similarly, they almost certainly can't share any GW code with AMD or give hints on how to optimize.

So on this basis there would be no optimisations at all within GameWorks for AMD hardware. Do you not think that is a little far-fetched? It also brings into question what WB refused to implement from AMD.

I think given the situation regarding other GW titles it makes more sense to conclude it is probably an internal squabble between WB and AMD.

Also, regardless, I believe that AMD do in fact have access to these libraries; this one example seems to be restricted to WB Studios, and you've clearly not been told at any point (unless I'm missing something) why AMD's amendments were rejected. So basically you're suggesting WB simply said "no." Not sure if you've worked for a sizeable firm before, but that sort of manner isn't really advisable. Feeding people hearsay is bad practice too.

Being frank, I think the way you've written parts of the article is deliberately misleading, which is why I find it difficult to take seriously. If people are skeptical of something, there is normally a good reason for it. If by any chance you're in Vegas this Sunday, maybe ask Nvidia about the situation. I'm sure they'd love that.
 
Thanks again. :)

BAO High FXAA

290X@1000/5000MHz([email protected]) -17% min/-23% avg/-22% max, slower than a 780@1000/6000MHz([email protected])

BAO 8XMSAA

290X +9% min/+12% avg/-18% max compared to the 780.

:confused:

What the hell? How can there be a 35% difference between the averages of a 290X and a 780 depending on whether AA is used or not? Something is not right there.
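(Reading the numbers above at face value, that 35 comes straight out of the two averages: roughly 23% behind at FXAA versus roughly 12% ahead at 8xMSAA is a swing of about 12 - (-23) = 35 percentage points between the two AA modes.)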

EDIT

At least it seems AMD were telling the truth about the 35% performance increase when using AA via that driver.
 
The gap at 8xMSAA would traditionally be larger/smaller (depending on which side you look at it from), as the 780 is essentially underclocked -- you'd be extremely unlucky to get a 780 that wouldn't boost well north of 1000MHz.
 
The gap at 8xMSAA would traditionally be larger/smaller (depending on which side you look at it from), as the 780 is essentially underclocked -- you'd be extremely unlucky to get a 780 that wouldn't boost well north of 1000MHz.

The ACX EVGA SC GTX boosts to 900MHz. 1000MHz onward would require a fan-profile adjustment or a boost-disabled BIOS. Easily achieved, but on the reference design 1000MHz is far from underclocked.
 
I don't blame people for being skeptical on this one. It's inside baseball. It's he-said/she-said. And even if that wasn't the case, BAO is still one game. If it was so one-sided that it only ran on Nvidia graphics cards, it would still be just one game.

I understand that view. Obviously I disagreed with it ;), but I understand it.
The whole "locking down PhysX" approach that Nvidia pulled when an AMD GPU is detected, despite having an Nvidia card as a dedicated PhysX card, is already enough cause for concern about how the "closed library" will be handled (or abused). As for anyone who says they are not worried and it's not gonna be an issue (even with the PhysX lockdown approach considered), I don't know if they are naive or have blind faith in Nvidia...
 
The ACX EVGA SC GTX boosts to 900MHz. 1000MHz onward would require a fan-profile adjustment or a boost-disabled BIOS. Easily achieved, but on the reference design 1000MHz is far from underclocked.

According to the OcUK store page, the SC ACX runs a base clock of 947MHz with a minimum core boost of 1020MHz; you'd be unlucky even to have one of those only boosting to 1020, IMO. Both of my Phantoms are rated at 980/1033, and both boost to 1110MHz out of the box, though they're currently on a skynet BIOS to get rid of boost as I'm not a fan :)
 