Nvidia’s GameWorks program usurps power from developers, end-users, and AMD

LtMatt,

I don't think you're wrong about Blacklist's HBAO+ performance. But let's use this as a jumping off point.

In the original TechSpot coverage of Splinter Cell, we see the following performance characteristics at 1920x1080 with HBAO+ enabled vs. disabled:
http://static.techspot.com/articles-info/706/bench/Ultra_02.png
http://static.techspot.com/articles-info/706/bench/Ultra_HBAOoff_02.png

I'm only concerned with the GTX 770, because that's the card I tested here. Performance goes from 70 FPS (HBAO+ on) to 73 FPS (Field AO).

The Radeon 7970 (no 290X; it hadn't launched yet) goes from 64 FPS (HBAO+ on) to 78 FPS (HBAO+ off). Clearly the 7970 took a heavier hit at launch. With Field AO, it's the second-fastest card; with HBAO+ on, it's the fifth-fastest. What we care about, in this case, is the percentage hit: the GTX 770 takes a 4.2% hit from using HBAO+, while the 7970 takes an 18% hit.
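
For clarity, here's a quick, purely illustrative C++ sketch of how those percentages fall out of the TechSpot numbers. The chart doesn't state the exact baseline, so this assumes the hit is measured against the HBAO+-off (Field AO) result, which lands at roughly the 4% and 18% figures above.

```cpp
#include <cstdio>

// Share of performance given up when HBAO+ is enabled, measured against
// the HBAO+-off (Field AO) result.
static double hbao_hit_pct(double fps_on, double fps_off) {
    return (fps_off - fps_on) / fps_off * 100.0;
}

int main() {
    // TechSpot launch figures at 1920x1080, as quoted above.
    std::printf("GTX 770:     %.1f%% hit\n", hbao_hit_pct(70.0, 73.0)); // ~4%
    std::printf("Radeon 7970: %.1f%% hit\n", hbao_hit_pct(64.0, 78.0)); // ~18%
    return 0;
}
```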

Here's what I saw when I benchmarked a fully patched sequence of the game with the latest drivers for both AMD and NV (V-Sync off, all details maxed, FXAA). Figures are HBAO+ on vs. Field AO.

GTX 770: 80 FPS / 97.5 FPS.
R9 290X: 94 FPS / 113 FPS.

What this tells us is that the performance characteristics of the Nvidia card have changed. The GTX 770 now gives up 22% performance when HBAO+ is enabled, while TS had it logged at 4.2%. But despite that hit, the GTX 770 is now running 14.2% faster than it was previously.

I couldn't benchmark the R9 290X in the original 1.0 version of the game because 1.0 BSODs almost immediately when you try to play with that GPU installed. What we see in 1.03 is that AMD still takes virtually the same-sized hit on the R9 290X as it took in 1.00: about 17%.

The R9 290X's performance in my test is about 46% faster than the 7970 that TechSpot logged. That's significantly larger than the average gap between those two cards, which is typically 28-35%. We can therefore conclude that, yes, AMD optimized some driver functions that sped the GPU up in other ways. (I assume, therefore, that the Radeon 7970, if benchmarked in 1.03, would be faster than in 1.0.)
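
Same idea for the patched run; a quick, purely illustrative sanity check on the figures quoted above (rounding lands within a point of the percentages in the text).

```cpp
#include <cstdio>

int main() {
    // Patched (1.03) figures quoted above: HBAO+ on / Field AO.
    const double gtx770_on = 80.0, gtx770_off = 97.5;
    const double r290x_on  = 94.0, r290x_off  = 113.0;
    // Launch-era reference points from the TechSpot charts.
    const double gtx770_launch_on = 70.0, hd7970_launch_on = 64.0;

    // Field AO is ~22% faster than HBAO+ on for the GTX 770 (the 22% above).
    std::printf("GTX 770 HBAO+ cost:  %.1f%%\n",
                (gtx770_off / gtx770_on - 1.0) * 100.0);
    // The 290X still gives up roughly the same share with HBAO+ on (~17%).
    std::printf("290X HBAO+ hit:      %.1f%%\n",
                (r290x_off - r290x_on) / r290x_off * 100.0);
    // The GTX 770 with HBAO+ on is ~14% faster than it was at launch.
    std::printf("GTX 770 vs launch:   %.1f%% faster\n",
                (gtx770_on / gtx770_launch_on - 1.0) * 100.0);
    // The 290X-vs-7970 gap with HBAO+ on is ~47% (the 'about 46%' above),
    // versus the typical 28-35% gap between the two cards.
    std::printf("290X vs launch 7970: %.1f%% faster\n",
                (r290x_on / hd7970_launch_on - 1.0) * 100.0);
    return 0;
}
```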

But did AMD specifically optimize the HBAO+ function? I'm aware of no evidence that says they did.

The biggest change related to HBAO+ performance in Splinter Cell: Blacklist is that the GTX 770 appears to take a much larger hit when enabling that mode in 1.03 than it did in 1.0. Obviously this depends on TS's figures being representative of the final game, etc. But what it suggests is that other factors on the *Nvidia* side of the equation were limiting the GTX 770's performance artificially and therefore hiding the penalty of activating HBAO+ in the 1.0 version of the game.

Splinter Cell: Blacklist does not demonstrate that AMD optimized the HBAO+ function. Nor does it demonstrate that Nvidia has a performance advantage due to GW once the game is fully patched. The gap between the GTX 770 and the R9 290X is 18% -- smaller than the statistical average of about 24% that I derived from published results at AnandTech, Tech Report, and my own reviews for PCM and ET, but still within an acceptable delta given differences in game engines and optimization levels.

The fact that I had to write about eight paragraphs explaining the game results is why this wasn't dumped into the original article. :P
 

Interesting, that's a lot of data you have there. Thanks.
 
I think at this stage we'd just settle for Nvidia not harming our performance and/or blocking our performance/driver optimizations. :)

Who really knows what's going on? I, for one, don't have a clue, as no official comments on this story from AMD, Nvidia, or GameWorks have been seen.

The real question is: if it's true and GameWorks was, let's say, told by Nvidia to hamper AMD performance in this game, then that is bad news for any games that use GameWorks libraries for image enhancements and/or performance on AMD cards.

I don't think anyone agrees with it in principle (if true); it's just that no one knows for sure what is going on between the companies.
 

Yeah, can't argue with that viewpoint, Ian. :)
 

But it is showing that nVidia are not in any way hampering performance; the worry for EVERYONE is that they could. The problem with that, and Joel knows it as well, is that if they did, they would end up in court. Sure, tessellation is poo on AMD cards, but that doesn't mean nVidia users should miss out in a TWIMTBP title because AMD decided not to improve on something since the 6 series.
 
Okay, basically, here is the thing:

Assuming the game developer gives AMD the source code for the game (not the libraries), AMD can set up a profiler and run the game in debug.

They can also run their own drivers in debug with a profiler.

They can see when the game calls the NV library, they can see each and every time DX calls their driver, and they get reports on how long each line of the game's code and their own code takes to run. Presumably it would be trivial for them to have an Nvidia rig side by side for comparison (so they can see whether the library runs quicker on Nvidia hardware than it should on like-for-like hardware).

If they see that the library call takes longer than they think it should on AMD hardware, they can see each and every line of their own C++ code that that library is calling (via DX). From there they can work out whether a different action within the drivers would be more efficient and code that in instead.
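
To make that concrete, here's a minimal, purely illustrative C++ sketch of the caller-side timing a profiler automates: wrap the closed library call and the rest of the frame with timers and compare. The function names are hypothetical placeholders, not real GameWorks or driver entry points.

```cpp
#include <chrono>
#include <cstdio>

// Stand-ins so the sketch compiles; in a real engine these would be the
// closed middleware call and the rest of the renderer. They are NOT real
// GameWorks or driver entry points.
static void gw_render_ao()         { /* closed-source AO pass runs here */ }
static void render_rest_of_frame() { /* remaining draw calls */ }

int main() {
    using clock_type = std::chrono::steady_clock;
    auto ms = [](clock_type::time_point a, clock_type::time_point b) {
        return std::chrono::duration<double, std::milli>(b - a).count();
    };

    for (int frame = 0; frame < 5; ++frame) {
        auto t0 = clock_type::now();
        gw_render_ao();             // time spent inside the closed library
        auto t1 = clock_type::now();
        render_rest_of_frame();     // time spent everywhere else
        auto t2 = clock_type::now();

        // Comparing these per-frame numbers across drivers, patches, or a
        // like-for-like Nvidia rig shows whether the library call costs
        // more than it should on a given GPU.
        std::printf("frame %d: AO %.3f ms, rest %.3f ms\n",
                    frame, ms(t0, t1), ms(t1, t2));
    }
    return 0;
}
```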

The only thing they can't optimise for is Nvidia deliberately putting in "IF AMDGPU then deliberately do something pointless to waste time"... There is, however, no evidence that they have done that to date, and if they updated to a new version that DID do this, it would be really obvious; the developer would absolutely not have to deploy that new version, and AMD would have a "smoking gun" to take to a journalist.

DX itself is closed source and is a "black box" as far as source code goes, so being able to see what DX does to their drivers is one of the key ways that driver developers optimise their driver code.

I really have a problem with the headline (because there is no direct link between the GameWorks libraries and performance issues) and with the assertion that AMD cannot do ANY optimisation because of a "closed library".

Nvidia can't win: if they make something closed, they get criticised for making it closed, and if they allow it to run on any DX hardware, they get criticised for not handing out the source code, "because Nvidia are inherently evil and will try to kill AMD performance, you just know they will".

I'm done as far as this point goes. If anyone doesn't believe me, feel free to do a 3-day C++ course, in which they should teach you how to use, create, and debug libraries.
 
The only thing they can't optimise for is Nvidia deliberately putting in "IF AMDGPU then deliberately do something pointless to waste time"... There is, however, no evidence that they have done that to date, and if they updated to a new version that DID do this, it would be really obvious; the developer would absolutely not have to deploy that new version, and AMD would have a "smoking gun" to take to a journalist.

Interesting you bring this up. Can't comment on it, though.

I really have a problem with the headline (because there is no direct link between the GameWorks libraries and performance issues) and with the assertion that AMD cannot do ANY optimisation because of a "closed library".

I'm sorry if you felt this was unclear. AMD can't optimize GW libraries in the traditional manner (i.e., by sharing code with the developer). The potential "tilt" of the playing field is directly proportional to the number of libraries.

Nvidia can't win: if they make something closed, they get criticised for making it closed, and if they allow it to run on any DX hardware, they get criticised for not handing out the source code, "because Nvidia are inherently evil and will try to kill AMD performance, you just know they will".

I cannot speak for the other participants in this thread, but I believe I have made it extremely clear, at length, that Nvidia is not "evil." Nvidia is a company that is trying to secure its lock on the PC space. It has, in my opinion, gone too far in this specific instance. That does not change the many advantages Nvidia has brought to PC gaming, the general strength of its driver development program, its commitment to enthusiasts, and its excellent work in building many years of top-notch products.

I have no problem with PhysX, with CUDA, or with G-Sync. I have no problem with 3D Vision (or with Eyefinity when AMD was pushing it and NV wasn't). I have no problem with Optix, which I've actually written about for ET.

If anyone reads my article and comes away thinking it's a binary between AMD good / NV bad, then they haven't actually understood the point.
 
The headline is pretty clear, as is "which means AMD can’t optimize its own drivers".
What you mean to say is that they can't optimise the game's code; they totally can optimise their own drivers, without ever going near assembler.

My responses haven't just been aimed directly at you, mate; there are those in this thread who very much do see this as AMD good, Nvidia evil.

Hence all the arguments. On the whole I understand and get what you were trying to do with the article; however, it's been misinterpreted as an Nvidia hatchet piece.
The headline really doesn't help in that respect, and the problem is you've made some sweeping generalisations in the conclusions that are not clearly supported by the data.

I am interested to know, though: would you prefer it if Nvidia made all these new features Nvidia-locked, or do you see it as potentially good that Nvidia is at least giving AMD users the opportunity to enable these features if they choose to do so?
I see no problem with GW if the end user can choose to disable all of the extra GW features; then the GW libraries are not being used and CANNOT be responsible for any performance difference. If the end user is getting good frame rates and chooses to enable the features, then at least they have that option.
 
AMD put you onto this, Joel (nVidia did the same with the throttling 290Xs and TheTechReport), and whatever way you dress it up, it is a damning of WB Montreal and nVidia (you only need to go through all the comments on your ET article to know this). nVidia, in your own words, have done nothing wrong that you know of. WB Montreal for some reason didn't accept the code that was sent to them from AMD, and again, I can't see anything they have done wrong.

TheTechReport were led by nVidia and you were led by AMD....
 
@ andybired123

No one has said Nvidia are "evil"; it's people with Nvidia too close to their heart taking offence at anyone saying anything critical of Nvidia.
You're just far too emotional and are taking this thread as the whole world conspiring against Nvidia.
 
I am interested to know, though: would you prefer it if Nvidia made all these new features Nvidia-locked, or do you see it as potentially good that Nvidia is at least giving AMD users the opportunity to enable these features if they choose to do so?

Interesting question. I would prefer that all games give equal optimization footing to all companies, including Intel, where the decisions for who gets to see the library code are made solely by the developer, but all companies are free to create their own specific implementations of optimized functions when it makes sense to do so.

So, in my hypothetical example, there's a library called DX11_AO that anyone can optimize and use, and the developer can share that library with anyone -- Intel, AMD, NV. There's also a library, maintained and written by Nvidia, called NV_AO. Maybe there's an Intel_AO and a GCN_AO as well. Maybe not. Maybe NV is the only company to do its own custom implementation of an AO library. That's fine by me, as long as there's a code path out there that other companies can use and integrate without penalty or massive amounts of additional work, and provided the license terms on our hypothetical NV_AO library don't preclude the developer from including other libraries (GW does not prevent this).
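
Purely as an illustration of that arrangement, and using the hypothetical library names from the paragraph above (none of these are real products), the developer's dispatch could look something like this:

```cpp
#include <cstdio>

// Hypothetical vendor IDs and AO implementations mirroring the example
// above (DX11_AO, NV_AO, Intel_AO, GCN_AO); none of these are real libraries.
enum class Vendor { Nvidia, AMD, Intel, Other };

static void dx11_ao()  { std::puts("DX11_AO: common path anyone can optimize"); }
static void nv_ao()    { std::puts("NV_AO: Nvidia's own implementation"); }
static void intel_ao() { std::puts("Intel_AO: Intel's own implementation"); }
static void gcn_ao()   { std::puts("GCN_AO: AMD's own implementation"); }

// The developer decides which path ships: a vendor-specific AO library when
// one exists and makes sense, otherwise the shared, openly optimizable one.
static void render_ambient_occlusion(Vendor v) {
    switch (v) {
        case Vendor::Nvidia: nv_ao();    break;
        case Vendor::Intel:  intel_ao(); break;
        case Vendor::AMD:    gcn_ao();   break;
        default:             dx11_ao();  break;
    }
}

int main() {
    render_ambient_occlusion(Vendor::AMD);
    render_ambient_occlusion(Vendor::Other); // falls back to the common path
    return 0;
}
```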

Now, should NV make features available on non-NV cards? That's a really interesting question, and NV's own track record is a bit mixed. You can't use TXAA on a non-NV GPU, but you can enable PhysX, provided your CPU is fast enough to use it.

I think common features of a core API like DX11 should be common *to* the API and practically available for optimization in an even-handed manner. I don't like the idea of games becoming too chained to any single GPU vendor's implementation of software. But within that framework, I think GPU vendors absolutely should have the right to implement their own specialized adaptation of a given function.

I consider PhysX a specialized, vendor-specific implementation of a physics engine. There's an argument for letting that code run on any CPU (expose more people to the idea!). There's an argument for not letting it run (reserve the feature for loyal NV customers!). In general, I favor running it over not running it. I also think NV should let customers who own an NV GPU and want to use a hybrid configuration run PhysX on those cards. But NV has obviously disagreed with this last, and hey -- that's their call.
 
No one is being forced to use it; if a dev chooses to use it, that's their choice, which they are allowed to make.
This was an interesting read until so much changed: 'can't optimize at all' became 'can only optimize for some stuff', then 'well, it can't optimize in the traditional way, it takes more work', and the threat became a potential threat. Really, it seems to have turned from an informative (possibly) debate into a he-said-she-said, with some people just spewing the same stuff without reading what was said and thinking logically about it.
Bottom line: WB refused AMD code for one game, for what reason we don't know.
The rest is hearsay and potential future implications.
 
@ andybired123

No one has said Nvidia are "evil"; it's people with Nvidia too close to their heart taking offence at anyone saying anything critical of Nvidia.
You're just far too emotional and are taking this thread as the whole world conspiring against Nvidia.

I see them both as companies out to make money. I view both of them as having an agenda (to make money) and view actions by both companies as a means to that end. They both work with developers to "add value" for their own products, sometimes at the expense of their competitors' products. They both do this; to complain or criticise one for doing it while white-knighting for the other is hypocritical.

They have both done questionable things (including putting journalists up to writing stories that are critical of their competitors' products or tools).

The article is not factually correct on several points. Even the headline states that GameWorks usurps power from developers; it said this because the author was under the impression that the developers had no access to the source code of the GW libraries. This has been proven to be false, yet the headline remains unchanged.

It is also factually incorrect to state that AMD have no ability to optimise their own drivers, yet this assertion remains unchanged in the article.

Here is another thing:
If Nvidia decides to stop supporting older GPUs in a future release, game developers won’t be able to implement their own solutions without starting from scratch and building a new version of the library from the ground up. And while we acknowledge that current Gameworks titles implement no overt AMD penalties, developers who rely on that fact in the future may discover that their games run unexplainably poorly on AMD hardware with no insight into why.

If Nvidia did decide to stop supporting GPUs in a future release, this would not affect any pre-existing game; the version of the lib being used wouldn't be changed unless the DEVELOPER patched the game to include the new version.
Also, as mentioned before, all developers will be profiling their own software, so there would never be an "unexplainably poorly on AMD" situation; as they profiled, they would instantly be able to see that the Nvidia lib was running much slower than it used to. It would be an instant and obvious nail in Nvidia's chances-for-devs-to-ever-trust-them coffin.
 
I see them both as companies out to make money. I view both of them as having an agenda (to make money) and view actions by both companies as a means to that end. They both work with developers to "add value" for their own products, sometimes at the expense of their competitors' products. They both do this; to complain or criticise one for doing it while white-knighting for the other is hypocritical.

Sure. No objection.

They have both done questionable things (including putting journalists up to writing stories that are critical of their competitors' products or tools).

Again, sure. The stories that journalists feel are unfair or blatantly one-sided don't get written. The stories that we feel *do* point out fair points or issues do get written.

The article is not factually correct on several points. Even the headline states that GameWorks usurps power from developers; it said this because the author was under the impression that the developers had no access to the source code of the GW libraries. This has been proven to be false, yet the headline remains unchanged.

The developer point *has* been changed. The headline hasn't changed because the power being usurped is the freedom to share HLSL code with AMD; that's the "freedom" being explicitly referenced.

Developers can, under license, get access to code. I can't tell you the reason why my impression of that situation was otherwise. But the developers in question cannot share that code with AMD to optimize it.

It is also factually incorrect to state that AMD have no ability to optimise their own drivers, yet this assertion remains unchanged in the article.

AMD cannot optimize the source code of the library. They cannot replace that library with an AMD equivalent without working with a developer from Day 1. They can maybe perform some deep-level optimization using other methods. My investigation of this process indicates that it's different from the current method of working with a vendor to optimize a product, far more difficult, and a great deal more time-consuming.

Let's say you and I show up to run a foot race. Then I claim that you have to wear a 250 lb suit of full plate armor, while I get to wear jogging clothes. I claim this is fair, because I haven't cut off your feet. Technically, you could still run a race. Practically, you can't. And no one would call that competition fair, even if I'm fat and out of shape.

If Nvidia did decide to stop supporting GPUs in a future release, this would not affect any pre-existing game; the version of the lib being used wouldn't be changed unless the DEVELOPER patched the game to include the new version.

True. This point assumes a game actively in development, not a previously shipped product.

As mentioned before, all developers will be profiling their own software, so there would never be an "unexplainably poorly on AMD" situation; as they profiled, they would instantly be able to see that the Nvidia lib was running much slower than it used to. It would be an instant and obvious nail in Nvidia's chances-for-devs-to-ever-trust-them coffin.

Nvidia has said that developers can see source. That does not mean all developers do see source. The distinction is meaningful and significant in this context. And no, I can't cite you a source on it.
 
Nvidia has said that developers can see source. That does not mean all developers do see source. The distinction is meaningful and significant in this context. And no, I can't cite you a source on it.
Context is hard to work out without knowing the full story this quote was pulled from.
Yes, we realise you may have confidential sources, but you can't really go on about how meaningful and significant it is without the full thing to look at, as we can't see the implied context.
They cannot replace that library with an AMD equivalent without working with a developer from Day 1
Maybe they should invest in working with devs from day 1, like Nvidia did?
Let's say you and I show up to run a foot race. Then I claim that you have to wear a 250 lb suit of full plate armor, while I get to wear jogging clothes.
Really? It's more a case of Nvidia having run the course before, due to investing time and $$ with the organiser from day 1.
As you yourself have said, they can optimise, but in a more inefficient, time-consuming way. This doesn't stop them from doing so; it just means they take more time to do so, which they should for their consumers.
 