Nvidia’s GameWorks program usurps power from developers, end-users, and AMD

Thanks for taking the time to sign up, Joel.

For everyone else, I decided to contact Joel via email and direct him to the thread. His post got missed as it was awaiting moderator approval on page 21. :)

http://forums.overclockers.co.uk/member.php?u=156899

Disclosure: I'm the author of the story being discussed here. I'm jumping in to clarify the question of whether AMD "handed" me this story.

When I attended APU13, AMD told me there were discrepancies in Arkham Origins related to GameWorks, yes. I checked game reviews and noted that the published results for AO were extremely odd.

Once I returned home, I set up test beds using AMD and Nvidia hardware that was previously provided for standard reviews. I tested a Sandy Bridge-E and a Haswell box, and later built a Windows 8.1 system on an Ivy Bridge-E to confirm that I wasn't seeing issues that were tied to Windows 7.

After AMD told me about the problem, I reached out to Nvidia with some questions on GameWorks and attempted to contact WBM. I also tested GW performance in multiple additional titles. AMD did provide some assistance in setting up and using their own GPUPerfStudio monitoring program on Radeon cards, but did *not* provide the results I published. I gathered the raw data myself, on systems I built and configured myself. The content and focus of the article were chosen by me. It was my idea to test AO directly against AC, and I decided to look for problems in ACIV and Splinter Cell.

One question that's been hotly debated here is the impact of FXAA. It should be noted that with FXAA off (no AA whatsoever), the R9 290X runs at 152 FPS vs. 149 FPS for the GTX 770. In other words, it's not that FXAA is running slowly on AMD cards, but that AMD cards run slowly, period. The only way to change this is to turn on MSAA, which hammers the card hard enough for the R9 290X to brute force its way past the GTX 770.

I received absolutely no compensation or consideration of any kind, implied or overt, from AMD. I was paid a standard fee by ExtremeTech for my work. The hardware I used for this comparison was hardware I already had on hand. It is not my property, but the property of my employer.
 
Thanks again. :)

BAO High FXAA

290X@1000/5000MHz: -17% min / -23% avg / -22% max, slower than a 780@1000/6000MHz

BAO 8XMSAA

290X +9% min/+12% avg/-18% max compared to the 780.

:confused:

What the hell? How can there be a 35% difference between the averages of a 290X and a 780 depending on whether AA is used or not? Something is not right there.
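Doing the maths on those averages (a quick sketch; I'm reading the "35% difference" as the swing in percentage points between the two settings):

[code]
# Average-FPS deltas for the 290X vs the 780, taken from the figures above.
fxaa_avg_delta = -23.0  # % vs the 780 with High FXAA
msaa_avg_delta = 12.0   # % vs the 780 with 8x MSAA

# Swing between the two settings, in percentage points.
swing = msaa_avg_delta - fxaa_avg_delta
print(f"Swing: {swing:.0f} percentage points")  # prints 35
[/code]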

EDIT

At least it seems AMD were telling the truth about the 35% performance increase when using AA via that driver.
 
I've asked Joel to sign up and he has done so, but his posts are awaiting moderation, it seems. Come on dons/staff members, pull your fingers out. :p

The lengths you'll go to at times, Matt, to try and gain a "win" are slightly worrying.

Once a few people in this thread started questioning his credibility because he was saying things they didn't like, I thought it would be sensible to allow him a reply to the OcUK 'forum experts'. It was not my place to try and defend him or argue the points made, as I lack the knowledge, as do many here. Joel, on the other hand, does not; he is not biased, works with both companies and is a tech journalist. He is much more qualified and can clear up any questions much more easily.

The whole "locking down PhysX" approach that Nvidia pulled when an AMD GPU is detected, even with an Nvidia card installed as a dedicated PhysX card, is already enough cause for concern over how the "closed library" will be handled (or abused). Anyone who says they are not worried and that it's not going to be an issue (even with the PhysX lockdown approach considered), I don't know if they are naive or have blind faith in Nvidia...

Looking at Tommy's and pgi's results, something is not right. As I said before (and it was explained away as being 'perfectly normal'), you should never see a 660 beating a 7950 Boost or a 770 besting a 290X. I still think there's more to come from this.
 
The thing I find funny is that you can basically replace the word GameWorks with Mantle in both those quotes and they would still be perfectly applicable; the only difference is they would get you flamed ^^

I'm not saying either is good, I think both cases suck. I just think it's funny that when Nvidia do something it's bad because they're evil, but when AMD do it it's great because they're the underdogs and stuff :P

This is the difference.

Mantle: Optimized for AMD. Does not prevent Nvidia from optimizing its drivers for DX11 games.

GameWorks: Optimized for Nvidia. Prevents AMD from optimizing its drivers for DX11 games.

If you do not understand how these things are different when it's broken down in that fashion, I do not know how to explain it to you. GameWorks prevents AMD from ensuring games run well on AMD hardware. Mantle does NOT prevent NV from optimizing games for NV hardware.
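If it helps, here's the same difference as rough pseudocode (just a sketch; every function name below is made up for illustration, none of this is a real API):

[code]
# Illustrative sketch only: all names are hypothetical.

def render_with_mantle():
    pass  # AMD-only low-level path

def render_with_d3d11():
    pass  # standard DX path; both vendors can profile it and tune their drivers

def gameworks_effect():
    pass  # e.g. HBAO+; closed library, source not visible to AMD

def mantle_title_frame(vendor: str) -> None:
    # A Mantle title ships two complete, separate back ends.
    if vendor == "AMD":
        render_with_mantle()  # Nvidia never touches this path...
    else:
        render_with_d3d11()   # ...and the DX path stays fully open to NV driver optimization

def gameworks_title_frame(vendor: str) -> None:
    # A GameWorks title has ONE path for everyone, with the closed
    # GameWorks libraries sitting inside the DX path itself.
    render_with_d3d11()
    gameworks_effect()  # AMD hardware runs this, but AMD can't see the source,
                        # which limits driver-level optimization
[/code]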
 
Matt, the article itself shows that AMD released a driver update for Ghosts that fixed HBAO+.
So a GW feature does not in any way prevent AMD from optimising their drivers for GW features.

It simply isn't true.

The driver AMD released for Blacklist optimizes MSAA, not HBAO.

Blacklist's HBAO+ is an Nvidia-proprietary algorithm that only runs on their hardware (locked code).
 
So turn off HBAO. Or are you suggesting nVidia should spend time and money on a project for AMD?

I'm suggesting Nvidia allow AMD to optimise. This is how it's always worked... until now.

bzzzt, not according to the article

The article is wrong. AMD cannot optimise for GW HBAO. They can still use it; they just can't optimise for it, unlike Nvidia.

If I'm wrong on this, Joel, please correct me.
 
No, it's not like Mantle, because AMD cards still go through some of GameWorks (Nvidia tech), while with Mantle (AMD tech), Nvidia cards don't go through any of it, so they are not hampered or restricted by it at any level.

It doesn't matter how many times you say it or explain it. They choose not to listen. You're just wasting your time. If they can't understand the very basic difference by now, you'll never get them to understand it. It comes down to their GPU preference, and they won't hear anything said against it.
 
Had to laugh :D :p

I hear and see plenty said against AMD on these forums and have no problem with it. It doesn't always work the same way, though. At this stage Nvidia could hold a press conference saying the libraries are locked and they will not allow AMD to optimize its own performance/drivers, and you'd still have a few people in this thread saying "but what about Mantle?" :D
 
LtMatt,

I don't think you're wrong about Blacklist's HBAO+ performance. But let's use this as a jumping off point.

In the original TechSpot coverage of Splinter Cell, we see the following performance characteristics at 1920x1080 with HBAO+ enabled vs. disabled:
http://static.techspot.com/articles-info/706/bench/Ultra_02.png
http://static.techspot.com/articles-info/706/bench/Ultra_HBAOoff_02.png

I'm only concerned with the GTX 770, because that's the card I tested here. Performance goes from 70 FPS (HBAO+ on) to 73 FPS (Field AO).

The Radeon 7970 (no 290X, wasn't launched yet) goes from 64 FPS (HBAO+ on) to 78 FPS (HBAO+ Off). Clearly the 7970 is taking a heavier hit at launch. With Field AO, it's the second-fastest card. With HBAO+ On, it's the fifth fastest card. What we care about, in this case, is the % hit. The GTX 770 takes a 4.2% hit from using HBAO+. The 7970 is taking an 18% hit.

Here's what I saw when I benchmarked a fully patched sequence of the game with the latest drivers for both AMD and NV (V-Sync off, all details maxed, FXAA). Figures are HBAO+ on vs. Field AO.

GTX 770: 80 FPS / 97.5 FPS.
R9 290X: 94 FPS / 113 FPS.

What this tells us is that the performance characteristics of both video cards have changed. Both cards are far faster than they were in the TechSpot test. The GTX 770 now gives up 22% performance when AO is enabled, while TS had it logged at 4.2%. But despite that hit, the GTX 770 is now running 14.2% faster than it was previously.

I couldn't benchmark the R9 290X in the original 1.0 version of the game, because the 1.0 version BSODs almost immediately upon attempting to play with that GPU installed. What we see in the 1.03 version is that AMD still takes virtually the same size hit on the R9 290X as it took on the 7970 in 1.00: about 17%.

The R9 290X's performance in my test is about 46% faster than the 7970 that TechSpot logged. That's significantly larger than the average gap between those two cards, which is typically 28% - 35%. We can therefore conclude that yes, AMD optimized some driver functions that sped the GPU up in other ways. But did AMD specifically optimize the HBAO+ function? I'm aware of no evidence that says they did.
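If anyone wants to check my arithmetic, here it is as a quick Python sketch (I'm computing each hit relative to the AO-off figure; the quoted percentages were rounded):

[code]
# Recomputing the HBAO+ penalties from the FPS figures quoted above.
def hit(fps_on: float, fps_off: float) -> float:
    """Performance hit from enabling HBAO+, relative to the AO-off figure."""
    return (fps_off - fps_on) / fps_off * 100

# TechSpot, v1.0
print(f"GTX 770 (1.0):  {hit(70, 73):.1f}%")    # ~4.1%
print(f"HD 7970 (1.0):  {hit(64, 78):.1f}%")    # ~17.9%

# My v1.03 retest
print(f"GTX 770 (1.03): {hit(80, 97.5):.1f}%")  # ~17.9% (~22% if measured against the AO-on figure instead)
print(f"R9 290X (1.03): {hit(94, 113):.1f}%")   # ~16.8%, i.e. about 17%

# 290X over the 7970, both with HBAO+ on (94 vs 64 FPS)
print(f"290X vs 7970: +{(94 / 64 - 1) * 100:.1f}%")  # ~46.9%, the 'about 46%' above
[/code]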

The biggest change related to HBAO+ performance in Splinter Cell: Blacklist is that the GTX 770 appears to take a much larger hit when enabling that mode in 1.03 than it did in 1.0. Obviously this depends on TS's figures being representative of the final game, etc. But what it suggests is that other factors on the *Nvidia* side of the equation were limiting the GTX 770's performance artificially and therefore hiding the penalty of activating HBAO+ in the 1.0 version of the game.

Splinter Cell: Blacklist does not demonstrate that AMD optimized the HBAO+ function.

Interesting, that's a lot of data you have there. Thanks.
 
Who really knows what's going on. I for one don't have a clue, as no official comments on this story have been seen from AMD, Nvidia, or the GameWorks team.

The real question is: if it's true, and the GameWorks developers were, let's say, told by Nvidia to hamper AMD performance in this game, then that is bad news for AMD cards in any game that uses GameWorks libraries for image enhancements and/or performance.

I do not think anyone agrees with it in principle (if true), it's just that no one knows for sure what is going on between the companies.

Yeah, can't argue with that viewpoint, Ian. :)
 
http://www.extremetech.com/extreme/...surps-power-from-developers-end-users-and-amd

Following on from this thread, Joel has apologised for the misinformation he previously posted.

Just to copy over what I said in the other thread. Good to see Nvidia confessing though, and it was nice to finally be proved right in what I said all along from the start: AMD are unable to provide their own performance optimizations, using their own drivers, in any games that use GameWorks code.

Looks like Joel has updated the article (having heard back from Nvidia) and Nvidia have finally fessed up. Lol this is gold!!



Source
http://www.extremetech.com/extreme/...surps-power-from-developers-end-users-and-amd

So, after all that crap Joel and I took, Nvidia finally own up. AMD are unable to provide their own performance optimizations in THEIR OWN DRIVERS when the game uses GameWorks code.

No matter how you dress it up, it stinks. One side able to optimize, the other not. Thank god not many titles use GameWorks. If ever there was a reason for any respected developer not to use GameWorks code, this is it. If AMD/Mantle ever stopped Nvidia from providing performance optimizations for DX11, I'm sure you would be equally concerned. That's my final say on the matter. Thanks to Frosty for making me check the article again, though; without his prompting I would've missed the Nvidia confession. It's funny, because in the other GW thread he kept saying he needed proof that AMD were not able to optimize their own drivers when using GameWorks code. Well, now we have it from the horse's mouth that it's not possible.

That's my final say on the matter as well. Now Nvidia have confessed, there's no need for pages of pointless debates going round in circles. I'll leave that to Frosty.
 
Confessed to what, though? They are tight with this new tech, as AMD are with theirs, so I don't see what the problem is.

Mantle/DX/GameWorks are all closed libraries, so why is this an issue?

Because Mantle does not block Nvidia from optimizing their own drivers for Mantle-capable games that also use DX11.1, a la Battlefield 4/Thief etc. You can't compare GameWorks to Mantle, or even TressFX. TressFX has its own SDK available for download, btw.
 
But GameWorks is nVidia tech, so you are getting the GameWorks effects and the optimizations are done via nVidia. You can see from the Batman: Arkham Origins thread how that works for AMD...

Yes, if FXAA is used then a 770 is faster than a 290X. It's cause for concern.

Mantle and DX are separate code paths, and NV can optimize the DX path of a game; Mantle does not affect the DX path in any way.

GameWorks is not a separate code path to DX, and AMD cannot fully optimize for it.

100% spot on.

I need to just stop posting here, because even with Nvidia now saying it, some just won't accept it, and I need to accept that, so this will be my last post. Greg, don't you try and drag me back in, you swine lol. :D
 
I accept it and I don't see the issue. Why would you run a game with only FXAA on a 290X? The argument has always been that when you turn the dials up, that's when "X" GPU really starts to shine... not when you turn the dials down lol

So a GTX 660 is faster than a 7970 with all the settings lowered... Yeah, OK :rolleyes:

How many people use 290Xs or high-end GPUs, though? A small fraction. How many use low-end to mid-range GPUs? The majority. So now we have a situation where the majority will be using low to mid-range cards and will be using FXAA.

The situation is that a 660 is able to beat a 7950 Boost, a substantially faster and more expensive card, with FXAA on: the GameWorks effect. Only when 8x AA is forced (AMD can optimize for AA in the drivers, as it's not tied to the GW library; confirmed by Joel and AMD) can they overpower the GW effect. It's a cause for concern if this starts happening in more demanding titles.


660 faster than 7950 Boost, 770 faster than 290X (how is that possible???)
[image: NUssdvo.png]

8x AA
[image: NsPm7D0.png]
 
The fact that they are both closed is not the point; the point is about optimization in games. Mantle and DX are separate code paths and, as things stand, games that use the Mantle path also use a DX path, so optimization is not an issue for AMD or NV.

But for GameWorks that is not the case: optimization is an issue for AMD, because there is no separate path.

You explain it better than I can. Now await the incoming deflection. :D
 
So in Thief we should expect to see a 770 faster than a 290X and a 660 beating out a 7950 Boost? I bet you won't. Your example is poor and does not work, because GameWorks libraries are not part of Thief. The engine may be the same, and the engine may favour Nvidia, but it does not explain the results in question at all.
 