Nvidia’s GameWorks program usurps power from developers, end-users, and AMD

If this is "just one guy" who was "backhanded by AMD" to write the article, how come Nvidia boiz kicked up such a fuss over the same thing with that 290X review sample non-story? :D

That was just one guy, who Nvidia gave cards to and set him up for a strafing run on AMD's airstrips. ;)
 
If this is "just one guy" who was "backhanded by AMD" to write the article, how come Nvidia boiz kicked up such a fuss over the same thing with that 290X review sample non-story? :D

That was just one guy, who Nvidia gave cards to and set him up for a strafing run on AMD's airstrips. ;)

What story was that?
 
Firstly,

He said there that it's impossible (an absolute) for AMD to apply a quick after-launch fix.

Stay with me. AMD released a driver which fixed multi sampling performance.
13.11 beta


A 35% improvement when using 8x MSAA. 35% is a huge chunk of performance, and it was gained through multi sampling alone: changes made outside of GameWorks, as you've been so keen to point out up until now.

I refer back to my premature drivers comment regarding the above.



Clearly a blanket statement, as they have no evidence to back this up. It's very well written, so credit where it's due.



So again, they're working on the assumption of the WB scenario, leading you to believe that it's possibly going to recur.

You're being led on because, frankly, I think it's what you want to hear. It's been fun, but it's time for me to move on.

Even if there are no overt penalties, the fact that AMD cannot optimise for it (which has been my bone of contention since the start) with regard to tessellation, HBAO/ambient occlusion and multi-GPU performance and scaling, at either the driver or developer level, is a serious problem. That means even if Nvidia are not nerfing AMD performance, they are not allowing AMD to optimise for it at driver level. This is unprecedented. There may be no proof that they have nerfed performance, because we don't know what the locked libraries of GW contain (not all of them, anyway):

GFSDK_GSA
GFSDK_NVDOF_LIB (Depth of Field)
GFSDK_PSM
GFSDK_ShadowLib (Soft shadows)
GFSDK_SSAO (Ambient Occlusion)
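
For context on what a "locked library" looks like from the developer's side, here is a rough C++ sketch of how this kind of closed middleware is typically consumed: a pre-built binary plus a thin header, with the shader source staying inside the black box. Every name in it is made up for illustration; it is not the real GFSDK_ShadowLib interface.

```cpp
// Illustrative only: how a game typically consumes closed middleware such as
// a GameWorks-style soft-shadow library. The game links a pre-built binary
// and calls it through a thin header; the HLSL and render state the library
// sets up stay inside the black box. Every name below is made up for this
// sketch; it is NOT the real GFSDK_ShadowLib interface.
#include <d3d11.h>

struct ShadowLibContext;  // opaque type, defined inside the closed library

extern "C" ShadowLibContext* ShadowLib_Create(ID3D11Device* device);
extern "C" void ShadowLib_RenderShadows(ShadowLibContext* ctx,
                                        ID3D11DeviceContext* immediateCtx);
extern "C" void ShadowLib_Destroy(ShadowLibContext* ctx);

void RenderShadowPass(ID3D11Device* device, ID3D11DeviceContext* ctx)
{
    static ShadowLibContext* shadows = ShadowLib_Create(device);

    // The library binds its own pre-compiled shaders and state internally.
    // The developer (and, by extension, a third-party driver team) never sees
    // that shader source; only the draw calls and bytecode that come out of
    // the call are visible from the outside.
    ShadowLib_RenderShadows(shadows, ctx);
}
```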

It's likely that there is some foul play going on. A 660, which is a low to mid-range part, does not suddenly beat a mid to high-end part like a 7950 Boost unless something fishy is going on. Same with a 770 vs a 290X. Without AA, and without AMD leveraging the brute-force performance of GCN, there are clearly engine optimisations at play which tilt things towards Nvidia. Joel says as much.

The R9 290X wins the 8x MSAA tests precisely because once you hammer the GPU *enough*, the heaviest-hitting solution barely manages to eke out a win. That does not change the fact that the results are quite different when we *don't* crank up the MSAA enough to counteract the various engine optimizations that are tilting the game towards Nvidia.

Clearly the GW library loadout is customized and tailored depending on the title. These are the libraries and functions AMD cannot optimize. The fact that AMD can optimize the game and improve performance 35% due to other changes does not change the fact that GW-specific changes are locked out. And I believe the original story makes this distinction quite clear.

Firstly,

He said there that it's impossible (an absolute) for AMD to apply a quick after-launch fix.

Stay with me. AMD released a driver which fixed multi sampling performance.
13.11 beta


A 35% improvement when using 8x MSAA. 35% is a huge chunk of performance, and it was gained through multi sampling alone: changes made outside of GameWorks, as you've been so keen to point out up until now.

Also whilst on the subject of poor AA performance:

So not only were AMD able to update their CrossFire profile, they also improved AA performance in this title and in a separate engine (GameWorks is engine-specific).

If you'd actually read what I said earlier, you'd know the answer to these questions. Here we go again...

AMD said:
We improved AA performance with Batman. GCN is much, much stronger than Kepler in MSAA, so with that feature activated we can overpower the Gameworks advantage and pull ahead.

MSAA is something you do not need game code/dev cooperation to optimize for.
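
To illustrate why that is, here is a minimal D3D11 sketch (with a made-up helper name): the application only requests 8x MSAA through the public API, so how the samples are handled and resolved is entirely the driver's business.

```cpp
// A minimal D3D11 sketch of why MSAA is driver-visible: the game only asks
// for an 8x multisampled render target through the public API, so how the
// samples are stored, compressed and resolved is entirely up to the driver
// and hardware. Helper name is made up for the example.
#include <d3d11.h>

HRESULT CreateMsaaRenderTarget(ID3D11Device* device, UINT width, UINT height,
                               ID3D11Texture2D** outTexture)
{
    // Ask the driver how many quality levels it supports for 8x MSAA.
    UINT qualityLevels = 0;
    device->CheckMultisampleQualityLevels(DXGI_FORMAT_R8G8B8A8_UNORM, 8,
                                          &qualityLevels);
    if (qualityLevels == 0)
        return E_FAIL;  // 8x MSAA not supported on this format

    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width              = width;
    desc.Height             = height;
    desc.MipLevels          = 1;
    desc.ArraySize          = 1;
    desc.Format             = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc.Count   = 8;                  // the 8x MSAA request
    desc.SampleDesc.Quality = qualityLevels - 1;
    desc.Usage              = D3D11_USAGE_DEFAULT;
    desc.BindFlags          = D3D11_BIND_RENDER_TARGET;

    // Everything past this point (sample storage, colour compression, the
    // resolve) is the IHV's problem and can be optimised in the driver
    // without any access to the game's source code.
    return device->CreateTexture2D(&desc, nullptr, outTexture);
}
```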
 
Again, rubbish statement. A 35% improvement in MSAA alone, if anything, proves that MSAA performance was largely not optimised. To say that kind of improvement is more an indication of foul play than of poor driver optimisation is laughable.

Again, maybe you need reminding that FXAA is an NV technology. It is also not part of GameWorks.

If you look across the board at the release notes since the 290 launched, there are anti-aliasing improvements left, right and centre. More so than any other changes.

The whole thing's a joke, which is why it's been entertaining.
 
Again, rubbish statement. A 35% improvement in MSAA alone, if anything, proves that MSAA performance was largely not optimised. To say that kind of improvement is more an indication of foul play than of poor driver optimisation is laughable.

Again, maybe you need reminding that FXAA is an NV technology. It is also not part of GameWorks.

If you look across the board at the release notes since the 290 launched, there are anti-aliasing improvements left, right and centre. More so than any other changes.

The whole thing's a joke, which is why it's been entertaining.

I refer you to my previous reply to you earlier in the thread, as I expected this sort of answer from you.

It's not my fault you don't like it or don't believe it's true. I can promise you it is. If you don't believe me, fine. Let's just leave it there. :)

EDIT

Regarding your edit

FXAA costs 1-2 fps, no more, so I don't buy your theory that Nvidia is better at FXAA than AMD. Also, FXAA may not be one of the engine optimisations that tilt things towards Nvidia; I never said it was, you did, earlier in the thread. I don't know for sure what it is, and neither do you. All we do know is that something is up, because a 660 does not beat a 7950 Boost, and a 290X does not lose out to a 770, in normal situations.
 
Disclosure: I'm the author of the story being discussed here. I'm jumping in to clarify the question of whether AMD "handed" me this story.

When I attended APU13, AMD told me there were discrepancies in Arkham Origins related to GameWorks, yes. I checked game reviews and noted that the published results for AO were extremely odd.

Once I returned home, I set up test beds using AMD and Nvidia hardware that was previously provided for standard reviews. I tested a Sandy Bridge-E and a Haswell box, and later built a Windows 8.1 system on an Ivy Bridge-E to confirm that I wasn't seeing issues that were tied to Windows 7.

After AMD told me about the problem, I reached out to Nvidia with some questions on GameWorks and attempted to contact WBM. I also tested GW performance in multiple additional titles. AMD did provide some assistance in setting up and using their own GPUPerfStudio monitoring program on Radeon cards, but did *not* provide the results I published. I gathered the raw data myself, on systems I built and configured myself. The content and focus of the article were chosen by me. It was my idea to test AO directly against AC, and I decided to look for problems in ACIV and Splinter Cell.

One question that's been hotly debated here is the impact of FXAA. It should be noted that with FXAA off (no AA whatsoever), the R9 290X runs at 152 FPS vs. 149 FPS for the GTX 770. In other words, it's not that FXAA is running slowly on AMD cards, but that AMD cards run slowly, period. The only way to change this is to turn on MSAA, which hammers the card hard enough for the R9 290X to brute force its way past the GTX 770.

I received absolutely no compensation or consideration of any kind, implied or overt, from AMD. I was paid a standard fee by ExtremeTech for my work. The hardware I used for this comparison was hardware I already had on hand. It is not my property, but the property of my employer.
 
The problem, Matt, is that the reason some of us don't believe what the article says is that we know for a fact that parts of it are not factually correct.

Developers use libraries all the time; they don't get the source code for them, yet both sides are able to optimise drivers for them. You are claiming that without the source code this is impossible, but that simply isn't true.

You are wrong, and so is the article. That is the bottom line.

TressFX is a library; Nvidia have never been given the source code, yet you say that's fine and dandy, but GameWorks isn't. Also, given that with all of the GameWorks features disabled the game still shows the vendor bias, it is clearly not GameWorks that is responsible.
 
The problem, Matt, is that the reason some of us don't believe what the article says is that we know for a fact that parts of it are not factually correct.

Developers use libraries all the time; they don't get the source code for them, yet both sides are able to optimise drivers for them. You are claiming that without the source code this is impossible, but that simply isn't true.

You are wrong, and so is the article. That is the bottom line.

TressFX is a library; Nvidia have never been given the source code, yet you say that's fine and dandy, but GameWorks isn't. Also, given that with all of the GameWorks features disabled the game still shows the vendor bias, it is clearly not GameWorks that is responsible.

Thing is, Andy: a well-respected, non-biased tech journalist > your opinion. That's how most of us see it.

Also, the differences between TressFX/Mantle and GameWorks have been explained to you numerous times, but you choose to ignore them each time. Not going over it again and again. It's been explained to you what the difference is; if you're still struggling to see it, check previous posts.
 
No one has explained to me why TressFX is any different in an actually technically correct fashion. You told me that TressFX was open because you could download an SDK, which is utter nonsense, as an SDK doesn't stop TressFX from being a library that Nvidia have no access to other than being able to run it and see what it does (they don't have the source code, which is what you seem to think "open" means).

Non-biased is a bit of a stretch; he's admitted that AMD put him on to this story in the first place, and he's relying entirely on their claim that they can't optimise for GameWorks because they haven't been given the source code.

I see more people in this thread disagreeing with you than agreeing, so your definition of "most of us" seems to be a bit of a stretch as well.
 
TressFX isn't really much of a talking point, as it does very little.
The more sane of us can just sit back now, I think.

A 35% multi sampling improvement on GCN, which is far more capable of handling it than Kepler. Nothing to do with poor optimisation though, guyz.

Too late to go back on this now, Matty, in the event some more info is given from reliable sources. :D
 
No one has explained to me why TressFX is any different in an actually technically correct fashion. You told me that TressFX was open because you could download an SDK, which is utter nonsense, as an SDK doesn't stop TressFX from being a library

The difference is that, with TressFX, Nvidia retain full control over their performance and optimisation through their drivers. As I posted earlier, unlike with GameWorks, the dev is able to work with Nvidia to optimise game code and driver performance, something that is not possible with the closed libraries of GameWorks. At the end of the day, TressFX is hair physics using DirectCompute. DirectCompute is an API that supports general-purpose computing on GPUs on Microsoft Windows; TressFX is just a software library that uses it.

Nvidia’s GameWorks contains libraries that tell the GPU how to render shadows, implement ambient occlusion, or illuminate objects.
In Nvidia’s GameWorks program, though, all the libraries are closed. You can see the files in games like Arkham City or Assassin’s Creed IV — the file names start with the GFSDK prefix. However, developers can’t see into those libraries to analyze or optimize the shader code. Since developers can’t see into the libraries, AMD can’t see into them either — and that makes it nearly impossible to optimize driver code.

I used ExtremeTech's description of GameWorks, as he spent a month researching the article (his words).
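
For what it's worth, here is a rough sketch of the DirectCompute point: a TressFX-style simulation step is just resources and a compute-shader dispatch going through the ordinary D3D11 API, all visible to the driver. The function name, thread-group size and resource layout are assumptions for the example, not the real TressFX code.

```cpp
// A rough sketch of the DirectCompute point: a TressFX-style hair simulation
// step ultimately comes down to binding resources and dispatching a compute
// shader through the ordinary D3D11 API, all of which the driver sees.
// Function name, group size and resource layout are assumptions for the
// example, not the real TressFX code.
#include <d3d11.h>

void DispatchHairSimulation(ID3D11DeviceContext* ctx,
                            ID3D11ComputeShader* hairSimCS,          // compiled HLSL compute shader
                            ID3D11UnorderedAccessView* hairVertsUAV, // strand vertex buffer
                            UINT numStrands)
{
    ctx->CSSetShader(hairSimCS, nullptr, 0);
    ctx->CSSetUnorderedAccessViews(0, 1, &hairVertsUAV, nullptr);

    // One thread group per 64 strands (the group size is an assumption here).
    ctx->Dispatch((numStrands + 63) / 64, 1, 1);

    // Unbind the UAV so the rendering passes can read the results.
    ID3D11UnorderedAccessView* nullUAV = nullptr;
    ctx->CSSetUnorderedAccessViews(0, 1, &nullUAV, nullptr);
}
```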

TressFX isn't really much of a talking point, as it does very little.
The more sane of us can just sit back now, I think.

A 35% multi sampling improvement on GCN, which is far more capable of handling it than Kepler. Nothing to do with poor optimisation though, guyz.

Too late to go back on this now, Matty, in the event some more info is given from reliable sources. :D

I stated the 35% performance improvement early in the thread, when Greg mentioned it. You need to keep up. :)

Like it or lump it, optimising for AA does not require game code/dev cooperation, so AMD are able to leverage the superior AA performance of GCN to overpower Kepler and the GameWorks advantage. It was stated previously, numerous times, what the optimisation was actually referring to; AA was not part of it.
 
AMD can see exactly what the GW libraries are sending to DirectX and their driver. And if you think they cannot then put something in place to optimise that, or even rewrite the shader code in real time, then you obviously have a very low opinion of their driver team.

How the heck has this got to 21 pages? Do people not know how video drivers work?
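
For anyone wondering what "rewrite the shader code" would mean in practice, the usual description is shader replacement at create time: the driver fingerprints incoming shader bytecode and swaps in a hand-tuned version when it recognises one. The following is a conceptual C++ sketch of that idea, not actual driver code.

```cpp
// Conceptual sketch only (not real driver code) of the "rewrite the shader"
// idea: the driver fingerprints incoming shader bytecode at create time and,
// if it recognises a shader from a known title, substitutes a hand-tuned
// replacement before compiling it for the GPU.
#include <cstdint>
#include <unordered_map>
#include <vector>

using Blob = std::vector<uint8_t>;

// FNV-1a; good enough to identify known shader blobs for this sketch.
static uint64_t HashBytecode(const Blob& bytecode)
{
    uint64_t h = 1469598103934665603ull;
    for (uint8_t b : bytecode) { h ^= b; h *= 1099511628211ull; }
    return h;
}

struct ShaderSubstituter
{
    std::unordered_map<uint64_t, Blob> tunedShaders;  // hash -> optimised blob

    // Called on the driver's equivalent of CreatePixelShader(): return the
    // tuned replacement if we know this shader, otherwise the app's own.
    const Blob& Select(const Blob& appShader) const
    {
        auto it = tunedShaders.find(HashBytecode(appShader));
        return it != tunedShaders.end() ? it->second : appShader;
    }
};
```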
 
You haven't actually told me of a difference; they are both libraries, and developers and Nvidia can't see inside TressFX either.

The GameWorks libraries must have been written using DirectCompute for them to work on AMD cards; if they used CUDA (like PhysX), they wouldn't run at all.
 
AMD can see exactly what the GW libraries are sending to DirectX and their driver. And if you think they cannot then put something in place to optimise that, or even rewrite the shader code in real time, then you obviously have a very low opinion of their driver team.

How the heck has this got to 21 pages? Do people not know how video drivers work?

exactly +1
 
Seems Joel is a member of Beyond3D. They seem to have a pretty good take on GameWorks. Take a read of the thread. I pulled out a few quotes from Joel, as several are what I've been saying from the start, only to be told "I'm wrong".

As The Author
Just to clear up a few points:

1). I looked hard for smoking guns. I checked multiple driver versions on both AMD and NV hardware to see if I could find evidence that one vendor took a harder hit than the other when performing a given DX11 task. There aren't any, other than tessellation in AO.

My best understanding, however, is that AMD and NV both typically optimize a title by working with the developer to create best-case HLSL code. With GameWorks, NV controls the HLSL, and the developer either cannot access that code directly or cannot share it with AMD.

Therefore: Even if AMD and NV both take a 10% hit when enabling a given function, NV has been able to optimize the code. AMD cannot.

2). Implementing an AMD-specific code path or library is something that can only be done when a title is in development. Developers cannot finish a game, launch it, and then just turn around and patch in an equivalent AMD library. Or rather, perhaps they technically *could*, but not without a non-trivial amount of time and effort.

If I'm wrong on either of these points, I'd welcome additional information. But even if no smoking gun exists today, this seems to represent a genuine shift in the balance of power between the two vendors. I believe this is different than Mantle because GameWorks is a closed system that prevents AMD from optimizing, whereas Mantle does not prevent NV from optimizing its own DX11 code paths.

We've seen what happens when one vendor controls another vendor's performance. Sabotage. Obfuscation. It's too easy for the company that controls the performance levers to start twisting them in the face of strong competition.

I don't know if Nvidia is banning developers from doing things (they have stated to me that developers are free to implement other solutions if they choose.) I think the larger problem is the difficulty of implementing an entirely separate code path for AMD.

With game costs skyrocketing and multiple game studio closures last year, sure, there are studios like Activision-Blizzard or Bethesda that can write their own tickets and use any tech they want. But smaller devs and studios don't have that kind of negotiating power, and business decisions can still tilt the market. NV holds something like 70% of the total discrete space -- given the other pressures on premium game development, it's not hard to see why suits might see the situation differently than the actual programmers.

But the inability to optimize is what bugs me about this. We need a general market in which AMD, NV, and Intel can all optimize against a title without slamming into game functions they can't touch. AMD presented the problem as significant, and while I acknowledge that they're most definitely a biased party, it still seems a potential problem.


Point of the story.
The point of the article is about more than overtessellation in one title. The point of the article is that closed libraries have the potential to create even more of a walled garden GPU effect.

Right now, a conventional "Gaming Evolved" or "TWIMTBP" title ships out optimized for one vendor but can still be optimized for the other post-launch. GameWorks changes that.

I consider this problematic because we've seen how companies can abuse this kind of power. 12 years ago, Intel began shipping versions of its compiler that refused to optimize for AMD hardware, even though AMD had paid Intel for the right to implement certain SIMD instruction sets. This fact wasn't widely known for years. Instead, people concluded that AMD's implementation of the various SIMD sets must have been sub-optimal, because it didn't benefit from using SSE or SSE2 the way Intel did. Since K8's implementation of SSE2 was only 64-bits wide, the conclusion was that AMD had fumbled the ball in that regard. In reality, Intel's compilers would refuse to create the most advantageous code paths for AMD hardware.
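
As a simplified, hypothetical illustration of that dispatch behaviour, the difference comes down to keying the fast path off the CPUID vendor string rather than off the feature bits the CPU actually reports; the names below are made up for the example.

```cpp
// A simplified, hypothetical illustration of the dispatch behaviour described
// above: keying the fast code path off the CPUID vendor string instead of the
// actual feature bits, so an SSE2-capable non-Intel CPU still gets the slow
// path. Names are made up for the example.
#include <string>

enum class CodePath { GenericX87, FastSSE2 };

CodePath SelectCodePath(const std::string& cpuidVendor, bool cpuSupportsSSE2)
{
    // A capability-based dispatcher would only check the feature flag:
    //     return cpuSupportsSSE2 ? CodePath::FastSSE2 : CodePath::GenericX87;
    //
    // The criticised behaviour keyed the choice to the vendor string as well:
    if (cpuidVendor == "GenuineIntel" && cpuSupportsSSE2)
        return CodePath::FastSSE2;
    return CodePath::GenericX87;
}
```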

Ordinary consumers don't care about closed libraries any more than they cared about compilers. They care about seeing games run well on the hardware they purchase. And the problem I have with GameWorks, in a nutshell, is that it gives NV control over AMD (and Intel) GPU performance in specific areas. If NV's closed-source libraries are used in all cases, then the temptation to sabotage the competition's performance is huge.

Even if AMD can fight back by creating its own library program, it still exacerbates a walled-garden approach that doesn't ultimately benefit the end *user.*

That's the point of the article.

The over-tessellation and generally slow performance are just examples of how easy it is to create odd results. Even after analyzing the R9 290X's draw calls, it's not clear why the R9 290X is evenly matched against the GTX 770. It just is.

This article combines the tessellation and GameWorks discussion because that's how the story came together. When I began the investigation, I didn't know what I'd find. And I trust AMD's word on much of this partly because, in the course of working with the company, they had the opportunity to lie about smoking guns -- and didn't.

Instead of rushing to judgment with a batch of questionable data relying on old drivers and some of the initial comparisons between AMD and NV in games like Splinter Cell or Assassin's Creed IV, I took the time to chase down performance errata (Splinter Cell's patching process is basically made with wasps and sandpaper). A big exposé on how NV had already crippled AMD's performance would have driven a lot more short-term traffic. But that's not what's happening here.

The takeaway isn't "AMD Good, Nvidia Bad." The takeaway is that giving Company A control over Company B's performance is never, ever a good bet for the end-user or any kind of fair competition.

@Andy, seeing as you keep banging on about TressFX, from the very same thread...

(screenshot of a post from the Beyond3D thread)

Source
http://beyond3d.com/showthread.php?t=64757
 
The takeaway isn't "AMD Good, Nvidia Bad." The takeaway is that giving Company A control over Company B's performance is never, ever a good bet for the end-user or any kind of fair competition.

I'm saving that one for the Mantle thread.
 