
Nvidia’s GameWorks program usurps power from developers, end-users, and AMD

SLI support was poor before the first patch also.

Also, whilst on the subject, AC4 has stuttering issues under SLI. Not sure who to complain to about that one though. :rolleyes:

Edit: eyetrip, lol.

God damn it, need someone to point fingers at!

Have you tried playing it on Crossfire since? Also, SLI support seemed fine according to Guru3D.

Unless I'm mistaken AC4 was created by Ubisoft, wasn't it? Stuttering is common in their games. See Far Cry 3 for reference. Nothing to do with this though.

I've not played this game (AC4) on Crossfire so cannot comment, just like you've not played Ghosts on Crossfire, or Batman, or any GameWorks titles for that matter.

Using the same settings that show a 770 to be faster than a 290X with FXAA.

It's perfectly normal. Just like a 660 being faster than a 7950. Nothing to see here. It's perfectly normal for Nvidia to be able to optimize but at the same time block AMD and dev cooperation (if it exists, not in the case of WB) from providing key optimisations for AMD. Perfectly normal. The GameWorks effect.

http://www.dsogaming.com/pc-performance-analyses/batman-arkham-origins-pc-performance-analysis/


Batman AO SLI performance was not up to scratch either. Not to imply that makes Crossfire performance acceptable, but both vendors had technical difficulties when the game launched.

AO is just a very poor example, it's buggy. As has been said already multiple times, where exactly is the line drawn?

So Nvidia were able to optimise performance after launch and bring it up to scratch with the developer? Strange that AMD could not do that in full. Mind you, even if the dev was willing, nothing could have been done to optimise the parts in question.

AMD is no longer in control of its own performance. While GameWorks doesn’t technically lock vendors into Nvidia solutions, a developer that wanted to support both companies equally would have to work with AMD and Nvidia from the beginning of the development cycle to create a vendor-specific code path. It’s impossible for AMD to provide a quick after-launch fix.

This kind of maneuver ultimately hurts developers in the guise of helping them. Even if the developers at Ubisoft or WB Montreal wanted to help AMD improve its performance, they can’t.
 
What does it matter who the publisher/dev was?

Yes, FC3 was also a mess, but like you say, that has nothing to do with this, so why bring it up?

We are talking about GameWorks titles (all 2.5 of them), and one is rather broken in multi-card setups for the developer of GameWorks, Nvidia.

I asked in the gaming thread the other day, and while I didn't get many responses, there is at least one user who is happy as Larry with performance on his 7990 (which he wouldn't be if it was the stuttering mess it is on Nvidia SLI).

I need to go back and check if anyone else responded.

I'll ask the same for Batman, as I'd hate to make assumptions and suck numbers out of my thumb (either in support of, or against, my stance on the matter).

Because Ubisoft made a hash of multi-GPU smoothness, and they also made Far Cry 3 and AC4. I brought it up because Frosty said AC4 was a mess on SLI.

There are four GW titles so far that I know of. Maybe 3.5, as Splinter Cell only uses the HBAO/ambient occlusion part of GameWorks.

I'd be interested to see benchmarks with HBAO on and off and its effect on AMD cards vs Nvidia cards. AMD are locked out from optimisations for HBAO at dev and driver level.

So you've not played any of the affected games yet you feel you're able to comment from a user's perspective on AMD hardware? Furthermore, disregarding performance results from 290X users within this thread?

Matt, honestly. You're taking this one man's word as gospel when you've not even played the games yourself. Can you not see the problem with that?

I have played Ghosts so I can report on that accurately. I've played all the COD games, sadly. I've not played Batman or AC4, but I'm not specifically referring to the Crossfire performance of those titles, as I've not played them.
 
I'll wait now and see if there are any further updates on this before posting more. This is basically the situation we find ourselves in at the moment.

This is what it boils down to when it comes to the level of understanding of GameWorks and its implications. First, look to see what GPU the poster has. Then apply the relevant picture. :p


Nvidia GPU
[image]

AMD GPU
[image]


May as well end on a joke.
 
so if you turn each of the gameworks features (the ones that actually call these libraries) on or off, the performance hit or improvement is the same on each set of hardware
if you turn off all of the gameworks features, so the libraries are not being called at all, and the game still favours Nvidia, then clearly the fault is with the game, not with gameworks
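to make that concrete, here is a rough sketch of the comparison being described; a minimal C++ example where the feature names and fps numbers are made up purely for illustration, not real benchmark results:

```cpp
// Sketch of the per-feature comparison: measure fps with each GameWorks
// feature on and off, on each vendor, and compare the percentage cost.
// All numbers below are hypothetical placeholders.
#include <cstdio>

struct FeatureResult {
    const char* feature;
    double fpsOff;   // average fps with the feature disabled
    double fpsOn;    // average fps with the feature enabled
};

// Percentage cost of enabling a feature relative to having it off.
static double costPercent(const FeatureResult& r) {
    return 100.0 * (r.fpsOff - r.fpsOn) / r.fpsOff;
}

int main() {
    // Hypothetical per-feature results for each vendor.
    FeatureResult nvidia[] = { {"HBAO+", 60.0, 54.0}, {"Soft shadows", 60.0, 51.0} };
    FeatureResult amd[]    = { {"HBAO+", 58.0, 52.0}, {"Soft shadows", 58.0, 49.5} };

    // If the cost of each feature is roughly the same on both vendors, then any
    // remaining gap with every feature disabled points at the game (or drivers)
    // rather than at the GameWorks libraries themselves.
    for (int i = 0; i < 2; ++i) {
        std::printf("%-12s  NV hit: %5.1f%%   AMD hit: %5.1f%%\n",
                    nvidia[i].feature, costPercent(nvidia[i]), costPercent(amd[i]));
    }
    return 0;
}
```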

How do you turn off the GameWorks 'feature' that sees a 660 beating a 7950 Boost? Or a 290X losing to a 770? I'm pretty sure you can't, as only Nvidia can optimise for that part, unlike AMD or the dev.

Do you have any source for that? It's just that Thracks stated Mantle required the GCN architecture, and he usually knows his stuff.

Yes, I watched the Mantle keynote delivered by Johan Andersson at APU13.

[image]

During AMD’s Developer Summit, Johan Andersson confirmed that Mantle is not tied to AMD’s GCN architecture, meaning that there isn’t such requirement.

This also means that other vendors will be able to support Mantle by not altering the architecture of their GPUs.

Source
http://www.slideshare.net/DevCentralAMD/keynote-johan-andersson
http://www.dsogaming.com/news/amds-mantle-does-not-require-gpus-with-gcn-architecture/
 
You're as bad as me - thought you were quitting here for now :D

Anyway, on to your point. Maybe Nvidia's performance in comparison to AMD on Batman AO has nothing to do with Gameworks?

Maybe it's just AMD's bad drivers? :D

No, seriously though, you can't say without a doubt that it has anything to do with Gameworks. You can make assumptions around it, of course...

But maybe it's more to do with Warner messing AMD around.

How old are those benches anyway? Too lazy to go back and find the post/source...



PhysX is designed with Nvidia GPUs in mind, but not tied to them :)

Lol, I had good intentions but you know how people like to drag you back in. :D To be fair I wanted to reply to Uber as he asked a genuine question. It's a waste of time replying to most though.

The only change is removing AA, which does go hand in hand with what AMD said: that their driver added 35% more performance with AA, which allowed them to overpower the GameWorks advantage via brute force and take the lead.

It may well be the drivers; after all, GameWorks has limited the optimisations that AMD can provide for it. Because of GameWorks, neither AMD nor the dev can apply any quick fixes to correct things; only Nvidia can. This has all been explained in the article and discussed previously though.

AMD is no longer in control of its own performance. While GameWorks doesn’t technically lock vendors into Nvidia solutions, a developer that wanted to support both companies equally would have to work with AMD and Nvidia from the beginning of the development cycle to create a vendor-specific code path. It’s impossible for AMD to provide a quick after-launch fix.

This kind of maneuver ultimately hurts developers in the guise of helping them. Even if the developers at Ubisoft or WB Montreal wanted to help AMD improve its performance, they can’t.

PhysX is tied to Nvidia GPUs so much that if it so much as smells an AMD GPU in the same room it goes into full retreat lol. :p

Currently it is still proprietary, until such a time as other vendors choose to use it. I can't see it being free though.

Also, if AMD's PR on Twitter have got that wrong, it casts doubt on other things being posted, more to the point the comments made regarding the WB situation. From my close ties and experience with ASUS PR teams, they tend not to have much technical insight at all beyond what they're being told.

AMD changed tack regarding that. Initially they said it would be GCN only, which excluded older AMD cards as well. Then they said all cards would be able to support it, but GCN may benefit the most. This is a good thing and should be applauded. Supporting older AMD GPUs like Cayman, and even Nvidia GPUs, is a good step towards making Mantle a success.
 
as has been mentioned many, many times, it ISN'T a gameworks feature that does that; all of the gameworks features can be turned off, and what remains is FXAA and the game itself. FXAA can be turned off and you can even use SMAA in its place

the article and your blind belief in everything it says only prove that there is a problem on ONE particular set of settings

the article itself even says that it cannot be proven that it is a gameworks feature that causes this

you keep quoting me and responding to small snippets of what I've said whilst completely ignoring the actual main point of my posts

In Arkham Origins, the following GameWorks libraries are used:

GFSDK_GSA
GFSDK_NVDOF_LIB (Depth of Field)
GFSDK_PSM
GFSDK_ShadowLib (Soft shadows)
GFSDK_SSAO (Ambient Occlusion)

Clearly the GW library loadout is customized and tailored depending on the title. These are the libraries and functions AMD cannot optimize. The fact that AMD can optimize the game and improve performance 35% due to other changes does not change the fact that GW-specific changes are locked out.
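If anyone wants to check for themselves which GFSDK modules a given title actually loads, here's a rough Windows-only sketch of how you could do it (everything in it is my own illustration, not something from the article; pass it the game's process id):

```cpp
// List the GFSDK_* modules loaded by a running game process, which is one
// way to see which GameWorks libraries a title pulls in.
// Build with MSVC; links against psapi.lib via the pragma below.
#include <windows.h>
#include <psapi.h>
#include <cstdio>
#include <cstring>
#include <cstdlib>
#pragma comment(lib, "psapi.lib")

int main(int argc, char** argv) {
    if (argc < 2) {
        std::printf("usage: gwmods <pid>\n");
        return 1;
    }
    DWORD pid = static_cast<DWORD>(std::strtoul(argv[1], nullptr, 10));
    HANDLE proc = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ, FALSE, pid);
    if (!proc) {
        std::printf("could not open process %lu\n", pid);
        return 1;
    }

    HMODULE mods[1024];
    DWORD needed = 0;
    if (EnumProcessModules(proc, mods, sizeof(mods), &needed)) {
        for (DWORD i = 0; i < needed / sizeof(HMODULE); ++i) {
            char name[MAX_PATH];
            // GameWorks DLLs shipped with these titles use the GFSDK prefix.
            if (GetModuleBaseNameA(proc, mods[i], name, sizeof(name)) &&
                _strnicmp(name, "GFSDK", 5) == 0) {
                std::printf("%s\n", name);
            }
        }
    }
    CloseHandle(proc);
    return 0;
}
```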

you also keep saying that a 660 beating a 7950 is bad, but a 7870 beating a 680 is perfectly ok, standards, double, much?

As quoted previously, that was due to Nvidia drivers, and Nvidia had the chance to work with the dev after the game's launch to fix the issues. Something which neither dev nor GPU vendor is able to do with GameWorks unless the vendor in question is Nvidia. If you can't see the problem there, there's no hope for you at all.

We’ve been working closely with NVIDIA to address the issues experienced by some Tomb Raider players. In conjunction with this patch, NVIDIA will be releasing updated drivers that help to improve stability and performance of Tomb Raider on NVIDIA GeForce GPUs. We are continuing to work together to resolve any remaining outstanding issues. We recommend that GeForce users update to the latest GeForce 314.21 drivers (posting today) for the best experience in Tomb Raider.
 
Stop saying that, because you don't know that. You're having to repeat yourself because you're not backing up what you're saying other than with what WCCF are saying.

ExtremeTech say that. Wccftech say that. Seeking Alpha say that. It's not my fault you don't like it or don't believe it's true. I can promise you it is. If you don't believe me, fine. Let's just leave it there. :)
 

I'm done repeating myself over and over.

Read the ET article, but most importantly read the comments section. The author explains why that is the case.

As quoted previously, that was due to Nvidia drivers, and Nvidia had the chance to work with the dev after the game's launch to fix the issues. Something which neither dev nor GPU vendor is able to do with GameWorks unless the vendor in question is Nvidia. If you can't see the problem there, there's no hope for you at all.
 
Firstly,

He's said there that it's impossible, which is an absolute, for AMD to apply a quick after-launch fix.

Stay with me. AMD released a driver which fixed multi-sampling performance.
13.11 beta


A 35% improvement when using 8x MSAA. 35% is a huge chunk of performance. Performance which is gained entirely with multi-sampling alone. Changes made outside of GameWorks, as you've been so keen to point out up until now.

I refer back to my premature drivers comment regarding the above.



Clearly a blanket statement, as they have no evidence to back it up. It's very well written, so credit where it's due.



So again, they're working on the assumption of the WB scenario, leading you on to believe that it's possibly going to recur.

You're being led on because frankly I think it's what you want to hear. It's been fun, but time to move on for me.

Even if there are no overt penalties, the fact that AMD cannot optimise for it (which has been my bone of contention since the start) regarding tessellation, HBAO/ambient occlusion, and multi-GPU performance and scaling, at driver or dev level, is a serious problem. That means even if Nvidia are not nerfing AMD performance, they are not allowing AMD to optimise for it at driver level. This is unprecedented. There may be no proof that they have nerfed performance, as we don't know what the locked libraries of GW do, well, not all of them anyway.

GFSDK_GSA
GFSDK_NVDOF_LIB (Depth of Field)
GFSDK_PSM
GFSDK_ShadowLib (Soft shadows)
GFSDK_SSAO (Ambient Occlusion)

It's likely that there is some foul play going on, as a 660, which is a low-to-mid-range part, does not suddenly beat a mid-to-high-end part like a 7950 Boost unless something fishy is going on. Same with a 770 vs a 290X. Without AA, where AMD cannot leverage the brute-force performance of GCN, there are clearly engine optimizations at play which tilt things towards Nvidia. Joel says as much.

The R9 290X wins the 8x MSAA tests precisely because once you hammer the GPU *enough*, the heaviest-hitting solution barely manages to eke out a win. That does not change the fact that the results are quite different when we *don't* crank up the MSAA enough to counteract the various engine optimizations that are tilting the game towards Nvidia.

Clearly the GW library loadout is customized and tailored depending on the title. These are the libraries and functions AMD cannot optimize. The fact that AMD can optimize the game and improve performance 35% due to other changes does not change the fact that GW-specific changes are locked out. And I believe the original story makes this distinction quite clear.

Firstly,

He's said there that it's impossible, which is an absolute, for AMD to apply a quick after-launch fix.

Stay with me. AMD released a driver which fixed multi-sampling performance.
13.11 beta


A 35% improvement when using 8x MSAA. 35% is a huge chunk of performance. Performance which is gained entirely with multi-sampling alone. Changes made outside of GameWorks, as you've been so keen to point out up until now.

Also whilst on the subject of poor AA performance:

So not only were AMD able to update their Crossfire profile, but they were also able to improve AA performance in this title too, in a separate engine (GameWorks is engine-specific).

If you'd actually read what I had said earlier you'd know the answer to these questions. Here we go again...

AMD said:
We improved AA performance with Batman. GCN is much, much stronger than Kepler in MSAA, so with that feature activated we can overpower the Gameworks advantage and pull ahead.

MSAA is something you do not need game code/dev cooperation to optimize for.
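To put that in context, here is a minimal sketch (assuming a Windows/D3D11 setup; none of this is taken from the game or from AMD's driver) of the point being made: MSAA is requested through the standard D3D11 API, so how well it runs is down to the hardware and the driver, not to any game-side or GameWorks code.

```cpp
// Minimal D3D11 sketch: querying and requesting 8x MSAA through the standard
// API. The driver owns how this is implemented, which is why a vendor can
// improve MSAA performance in a driver update without needing the game's
// (or GameWorks') source.
#include <d3d11.h>
#include <cstdio>
#pragma comment(lib, "d3d11.lib")

int main() {
    ID3D11Device* device = nullptr;
    ID3D11DeviceContext* ctx = nullptr;
    if (FAILED(D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                                 nullptr, 0, D3D11_SDK_VERSION, &device, nullptr, &ctx)))
        return 1;

    // Ask the driver what quality levels it supports for 8x MSAA.
    UINT quality = 0;
    device->CheckMultisampleQualityLevels(DXGI_FORMAT_R8G8B8A8_UNORM, 8, &quality);
    std::printf("8x MSAA quality levels: %u\n", quality);

    if (quality > 0) {
        // Create an 8x multisampled render target; the sampling and resolve
        // behaviour behind this lives entirely in driver/hardware territory.
        D3D11_TEXTURE2D_DESC desc = {};
        desc.Width = 1920;
        desc.Height = 1080;
        desc.MipLevels = 1;
        desc.ArraySize = 1;
        desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
        desc.SampleDesc.Count = 8;
        desc.SampleDesc.Quality = 0;
        desc.Usage = D3D11_USAGE_DEFAULT;
        desc.BindFlags = D3D11_BIND_RENDER_TARGET;

        ID3D11Texture2D* rt = nullptr;
        if (SUCCEEDED(device->CreateTexture2D(&desc, nullptr, &rt)) && rt)
            rt->Release();
    }

    ctx->Release();
    device->Release();
    return 0;
}
```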
 
Again, rubbish statement. A 35% improvement in MSAA alone, if anything, proves that MSAA performance was largely not optimised. To say that kind of improvement is more an indication of foul play than of poor driver optimisation is laughable.

Again, maybe you need reminding that FXAA is an NV technology. It is also not part of GameWorks.

If you look across the board in release notes since the 290 launched, there are anti-aliasing improvements left, right and centre, more so than any other changes.

The whole thing's a joke, which is why it's been entertaining.

I refer you to my previous reply to you earlier in the thread, as I expected this sort of answer from you.

It's not my fault you don't like it or don't believe it's true. I can promise you it is. If you don't believe me, fine. Let's just leave it there. :)

EDIT

Regarding your edit

FXAA costs 1-2 fps, no more, so I don't buy your theory that Nvidia is better at FXAA than AMD. Also, FXAA may not be one of the engine optimizations that tilt things towards Nvidia. I never said it was, you did earlier in the thread. I don't know for sure what it is, and neither do you. All we do know is that something is up, because a 660 does not beat a 7950 Boost and a 290X does not lose out to a 770 in normal situations.
 
the problem, matt, is that the reason some of us don't believe what the article says is that we know for a fact that parts of it are not factually correct

developers use libraries all the time; they don't get the source code for them, yet both sides are able to optimise drivers for them. you are claiming that without the source code this is impossible, but this simply isn't true

you are wrong, and so is the article, that is the bottom line

tressfx is a library, Nvidia have never been given the source code, yet you say this is fine and dandy but gameworks isn't. also, given that with all of the gameworks features disabled the game still shows the vendor bias, it is clearly not gameworks that is responsible

Thing is, Andy, a well-respected, non-biased tech journalist > your opinion. That's how most of us see it.

Also, the differences between TressFX/Mantle and GameWorks have been explained to you numerous times, but you choose to ignore them each time. Not going over it again and again. It's been explained to you what the difference is. If you're still struggling to see the difference, check previous posts.
 
no one has explained to me why tressfx is any different in an actual technically correct fashion - you told me that tressfx was open because you could download an SDK, which is utter nonsense as an SDK doesn't stop tressfx from being a library

The difference is Nvidia retain full control over their performance and optimisation using drivers with TressFX. As I posted earlier, unlike GameWorks, the dev is able to work with Nvidia to optimise game code and driver performance, something that is not possible with the closed libraries of GameWorks. At the end of the day TressFX is hair physics using DirectCompute. DirectCompute is an API that supports general-purpose computing on graphics processing units on Microsoft Windows. TressFX is just a software library that uses DirectCompute.

Nvidia’s GameWorks contains libraries that tell the GPU how to render shadows, implement ambient occlusion, or illuminate objects.
In Nvidia’s GameWorks program, though, all the libraries are closed. You can see the files in games like Arkham City or Assassin’s Creed IV — the file names start with the GFSDK prefix. However, developers can’t see into those libraries to analyze or optimize the shader code. Since developers can’t see into the libraries, AMD can’t see into them either — and that makes it nearly impossible to optimize driver code.

I used ExtremeTech's description of GameWorks as he spent a month researching the article, his words.
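To illustrate the distinction being made (this is only a sketch of the idea, assuming a Windows/D3D11 toolchain; it is not code from TressFX or GameWorks themselves): a TressFX-style effect is compute shaders whose HLSL ships with the game, so either vendor can read and tune it, whereas a GameWorks effect is a call into a prebuilt GFSDK binary whose HLSL the developer never sees and therefore cannot share with AMD.

```cpp
// Open path (TressFX-style): the HLSL source is visible, so the developer,
// and by extension either GPU vendor, can read, profile and rewrite it.
// The closed path is described only in comments below, since the real
// GFSDK headers are not public.
#include <d3d11.h>
#include <d3dcompiler.h>
#include <cstdio>
#include <cstring>
#pragma comment(lib, "d3d11.lib")
#pragma comment(lib, "d3dcompiler.lib")

// Stand-in for shipped HLSL: in a real title this would be the simulation
// shaders the developer can hand to AMD or Nvidia for tuning.
static const char* kOpenHlsl =
    "RWBuffer<float> gOut : register(u0);\n"
    "[numthreads(64,1,1)]\n"
    "void CSMain(uint3 id : SV_DispatchThreadID) { gOut[id.x] = id.x * 0.5f; }\n";

int main() {
    ID3D11Device* device = nullptr;
    ID3D11DeviceContext* ctx = nullptr;
    if (FAILED(D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                                 nullptr, 0, D3D11_SDK_VERSION, &device, nullptr, &ctx)))
        return 1;

    // Open path: compile visible HLSL at the developer's discretion.
    ID3DBlob* cs = nullptr;
    ID3DBlob* err = nullptr;
    if (FAILED(D3DCompile(kOpenHlsl, strlen(kOpenHlsl), "open.hlsl", nullptr, nullptr,
                          "CSMain", "cs_5_0", 0, 0, &cs, &err)))
        return 1;
    ID3D11ComputeShader* shader = nullptr;
    device->CreateComputeShader(cs->GetBufferPointer(), cs->GetBufferSize(), nullptr, &shader);
    ctx->CSSetShader(shader, nullptr, 0);
    ctx->Dispatch(1, 1, 1);

    // Closed path (conceptual only): a GameWorks title would instead call a
    // function exported by e.g. GFSDK_SSAO.dll here. The developer links the
    // binary and sets parameters, but never sees the HLSL it dispatches.

    std::printf("dispatched open compute shader\n");
    // (COM Release calls omitted to keep the sketch short.)
    return 0;
}
```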

TressFX isn't really much of a talking point as it does very little.
The more sane of us can just sit back now, I think.

A 35% multi-sampling improvement on GCN, which is largely more capable of handling it than Kepler. Nothing to do with poor optimisation though, guyz.

Too late to go back on this now, Matty, in the event some more info is given from reliable sources. :D

I stated the 35% performance improvement early in the thread when Greg mentioned it. You need to keep up. :)

Like it or lump it, optimizing for AA does not require game code/dev cooperation, so AMD are able to leverage the superior AA performance of GCN to overpower Kepler and the GameWorks advantage. It was stated previously, numerous times, what the optimisation was actually referring to; AA was not part of it.
 
Seems Joel is a member of Beyond3D. They seem to have a pretty good take on GameWorks. Take a read of the thread. I pulled out a few quotes from Joel, as several are what I've been saying from the start, only to be told 'I'm wrong'.

As The Author
Just to clear up a few points:

1). I looked hard for smoking guns. I checked multiple driver versions on both AMD and NV hardware to see if I could find evidence that one vendor took a harder hit than the other when performing a given DX11 task. There aren't any, other than tessellation in AO.

My best understanding, however, is that AMD and NV both typically optimize a title by working with the developer to create best-case HLSL code. With GameWorks, NV controls the HLSL, and the developer either cannot access that code directly or cannot share it with AMD.

Therefore: Even if AMD and NV both take a 10% hit when enabling a given function, NV has been able to optimize the code. AMD cannot.

2). Implementing an AMD-specific code path or library is something that can only be done when a title is in development. Developers cannot finish a game, launch it, and then just turn around and patch in an equivalent AMD library. Or rather, perhaps they technically *could*, but not without a non-trivial amount of time and effort.

If I'm wrong on either of these points, I'd welcome additional information. But even if no smoking gun exists today, this seems to represent a genuine shift in the balance of power between the two vendors. I believe this is different than Mantle because GameWorks is a closed system that prevents AMD from optimizing, whereas Mantle does not prevent NV from optimizing its own DX11 code paths.

We've seen what happens when one vendor controls another vendor's performance. Sabotage. Obfuscation. It's too easy for the company that controls the performance levers to start twisting them in the face of strong competition.

I don't know if Nvidia is banning developers from doing things (they have stated to me that developers are free to implement other solutions if they choose.) I think the larger problem is the difficulty of implementing an entirely separate code path for AMD.

With game costs skyrocketing and multiple game studio closures last year, sure, there are studios like Activision-Blizzard or Bethesda that can write their own tickets and use any tech they want. But smaller devs and studios don't have that kind of negotiating power, and business decisions can still tilt the market. NV holds something like 70% of the total discrete space -- given the other pressures on premium game development, it's not hard to see why suits might see the situation differently than the actual programmers.

But the inability to optimize is what bugs me about this. We need a general market in which AMD, NV, and Intel can all optimize against a title without slamming into game functions they can't touch. AMD presented the problem as significant, and while I acknowledge that they're most definitely a biased party, it still seems a potential problem.


Point of the story.
The point of the article is about more than overtessellation in one title. The point of the article is that closed libraries have the potential to create even more of a walled garden GPU effect.

Right now, a conventional "Gaming Evolved" or "TWIMTBP" title ships out optimized for one vendor but can still be optimized for the other post-launch. GameWorks changes that.

I consider this problematic because we've seen how companies can abuse this kind of power. 12 years ago, Intel began shipping versions of its compiler that refused to optimize for AMD hardware, even though AMD had paid Intel for the right to implement certain SIMD instruction sets. This fact wasn't widely known for years. Instead, people concluded that AMD's implementation of the various SIMD sets must have been sub-optimal, because it didn't benefit from using SSE or SSE2 the way Intel did. Since K8's implementation of SSE2 was only 64-bits wide, the conclusion was that AMD had fumbled the ball in that regard. In reality, Intel's compilers would refuse to create the most advantageous code paths for AMD hardware.

Ordinary consumers don't care about closed libraries any more than they cared about compilers. They care about seeing games run well on the hardware they purchase. And the problem I have with GameWorks, in a nutshell, is that it gives NV control over AMD (and Intel) GPU performance in specific areas. If NV's closed-source libraries are used in all cases, then the temptation to sabotage the competition's performance is huge.

Even if AMD can fight back by creating its own library program, it's still exacerbates a walled-garden approach that doesn't ultimately benefit the end *user.*

That's the point of the article.

The over tessellation and generally slow performance are just an example of how easy it is to create odd results. Even after analyzing the R9 290X's draw calls, it's not clear why the R9 290X is evenly matched against the GTX 770. It just is.

This article combines the tessellation and GameWorks discussion because that's how the story came together. When I began the investigation, I didn't know what I'd find. And I trust AMD's word on much of this partly because, in the course of working with the company, they had the opportunity to lie about smoking guns -- and didn't.

Instead of rushing to judgment with a batch of questionable data relying on old drivers and some of the initial comparisons between AMD and NV in games like Splinter Cell or Assassin's Creed IV, I took the time to chase down performance errata (Splinter Cell's patching process is basically made with wasps and sandpaper). A big expose on how NV had already crippled AMD's performance would have driven a lot more short-term traffic. But that's not what's happening here.

The takeaway isn't "AMD Good, Nvidia Bad." The takeaway is that giving Company A control over Company B's performance is never, ever a good bet for the end-user or any kind of fair competition.

@Andy, seeing as you keep banging on about TressFX, from the very same thread...

[image]

Source
http://beyond3d.com/showthread.php?t=64757
 
I find this thread very interesting. I can see Matt has been chatting to some people, as he would never know half this stuff. Maybe Thracks is feeding him ;)

I knew those constant tweets would pay off :p

Greg, you would be proud...

Just played a round of BF4 with mates, and who should get in my LAV? You guessed it, OcUK forum legend and all-round nice guy, AMD Roy!!


[image]
 
LMAO, if I was in the LAV and he jumped in, I would have jumped straight out and C4'ed the ******

The biggest problem with this thread, and I include myself, is we are all dug in so deep that no one is prepared to budge an inch.

Nvidia have GameWorks, AMD have Mantle, Nvidia have tessellation, AMD have TressFX... This is bad, that is bad, and no one is listening to the other's argument. Rightly or wrongly, we believe what we know and dismiss things that don't suit. (We are all doing it.)

Some very valid points from both parties but I feel we should call it a day and talk about how well Manchester Utd are doing :p

Ohhh and welcome back Suarez. :)

No arguments here. :D
 