Nvidia’s GameWorks program usurps power from developers, end-users, and AMD

35% on what card, Matt? The 290? That new card? So piling on stupid amounts of MSAA, which is exactly where the 290's wider memory bus would be faster anyway, proves there is something amiss?

7950 performance, or any Tahiti card in that list, kind of defeats your argument in a way, doesn't it?

4x MSAA would make much, much more sense.

35% on a 660 vs a 7950, yet somehow a 660 is faster than a 7950 Boost without AA and without the AMD performance optimizations. Crazy, eh? A 580 faster than a 7950, yet 19% slower with AA applied. How either of those cards is faster than a 7950 at FXAA-only settings comes down to GameWorks.

4x MSAA would not change things much, if at all. It's not a very demanding game, as has already been shown.

Yes.

They hold up Mantle as being "open", while getting on their high horse about TressFX, which was released to drive adoption of DirectCompute at a time when the leading NV card - the 680 - was well known to have a much harder time with intensive DirectCompute tasks. It pushed the 680 down to the level of a 7870 in Tomb Raider (it is level with a 7970 without TressFX running).

Shoe, foot, other one, tears before bedtime.

Mantle will be open once it's finished. It's the only way it will become successful. It's up to Nvidia to support it. I don't think they will.

Regarding the 680, wasn't it Nvidia's idea to strip the DirectCompute capabilities back on mid-range Kepler cards? That was an interesting move considering DirectCompute is part of the DirectX 11 API, don't you think? I do like your theory that AMD only used part of DX11 because certain Nvidia cards are weak at it, though. It's an interesting angle on which to base part of your argument.
 
AMD say Mantle is 'open'. We believe you! Praise AMD!

Nvidia say their games or something are 'open'. Lies, lies, lies. They will cripple AMD! Die, Nvidia!

I'd get upset if this wasn't all so laughable.

What you fail to realise is that, regardless of whether Mantle becomes open or not, it will not block Nvidia performance optimizations in any way. It will not harm Nvidia performance in any way, shape or form, regardless of which API is used. It really is that simple.
 
To be honest with you, it's not that unthinkable. Yes, a 660 is a worse card than a 7950 in every way, but stock 7950s aren't really that fast, and when you factor in the following:

  • The FXAA bonus the 660 will have
  • The fact that the 192-bit bus won't be taxed at all at 1080p/FXAA settings
Then to me the results look about right. Of course, when you apply 4x MSAA, the performance of the 192-bit card drops off completely due to the increased demand on memory bandwidth (rough numbers below). I think you're being too quick to blame GW without thinking about the results and what they actually show.
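As a very rough back-of-the-envelope comparison (just a sketch assuming reference memory clocks; partner cards vary), the bandwidth gap works out like this:

    // bandwidth (GB/s) ~= bus width in bytes * effective memory clock in GT/s
    #include <cstdio>

    int main() {
        const double gtx660 = (192.0 / 8.0) * 6.0; // 192-bit bus, ~6.0 Gbps GDDR5 -> ~144 GB/s
        const double hd7950 = (384.0 / 8.0) * 5.0; // 384-bit bus, ~5.0 Gbps GDDR5 -> ~240 GB/s
        std::printf("GTX 660: ~%.0f GB/s, HD 7950: ~%.0f GB/s\n", gtx660, hd7950);
        return 0;
    }

At 1080p with FXAA neither card is anywhere near those limits, but 4x MSAA pushes bandwidth usage right up, which is exactly where you'd expect Tahiti's extra ~100 GB/s to tell.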

I interpret the results as showing what you would expect in this situation. To be fair, you might expect the 7950 to nudge a bit ahead of the 660 at 1080p/FXAA, but we're talking frame rates well above 90 fps on stock cards, so it becomes almost irrelevant.

When a card costing £130 is faster than a card costing £230-£250, you know GameWorks is taking effect. The 660 is not in the same league as a 7950. You know something is up when it's faster. There is no debate to be had on this matter. It should not be faster in any situation.

EDIT

It would be like asking why a 7790 is faster than a 770. You will never find a situation where a 7790 is faster than a 770 unless there is foul play going on.

Oh come on, Matt, you seriously think that AMD didn't run this in-house, realise that it made a 680 look like a 7870, and realise what a marketing coup that would be?

And yes, Nvidia dropped part of Fermi because no one wanted it and it added cost and heat to Fermi cards; people were very critical of Fermi for this reason, so with Kepler they deliberately made the chip leaner for running games. AMD, late to the party, added DirectCompute performance to the 7 series and then suddenly started trying to convince developers to use more of it, knowing that Kepler was weaker at it by comparison.

Both firms do it; they are in the game to market their own products and make money. AMD manage to do it with an air of "we have the gamers' interests at heart", but it's bull: they have money at heart.

Regardless of whether AMD did it to add a cool feature to a game or only did it to hurt Nvidia performance, they did not block Nvidia from optimizing for it, so it's irrelevant. The only thing we should be asking is why Kepler cards lack capable support for a standard DX11 API.
 
I've already told you why - devs were not using DirectCompute, and "the market" was very critical of Fermi for having strong CUDA/DC support when games were not using it.

Now that more games are using it, they release the 780, which adds it back in.

Pretty simple market economics; not sure what you are struggling with.

Well then, that's their fault, and not some master plan by AMD to exploit a standard DX11 API that they could magically start using the second Nvidia launched DirectCompute-crippled Kepler cards. Tomb Raider launched a few weeks before the Kepler cards. Are you telling me they managed to tack TressFX onto a finished game in a few weeks, specifically to hurt Nvidia performance?
 
Excuse me? A game released in 2013 was released before a graphics card from 2012?
They had a whole year to come up with TressFX and find a game to add it to, specifically to hurt Nvidia performance.

It is well known that AMD had been going around trying to get devs to support DirectCompute instead of CUDA.

My mistake, I'm getting confused with the Titan launch, just prior to the Tomb Raider launch. Regardless, Nvidia were able to optimize for TressFX, the way it should be - something AMD are unable to do for GameWorks - so it changes nothing. If AMD wanted to hurt Nvidia performance they would not have used a DX11 standard like DirectCompute, which is open and can be optimized for.

Way to ignore what I said.
SNIP

I will say it again. There is no way a 660 should be faster than a 7950 at any setting. See the post below.

A 660 is miles behind a 7950; a 660 is miles behind my GPU, which is itself miles behind a 7950.

The GTX 660 sits between a 7850 and a 7870.

660 vs 7950

660 vs 7870

660 vs 7850

How far does one need to go down the AMD GPU ladder to find something just about even with the 660? An underclocked, salvage-binned Pitcairn.
 
Sorry, forgot the ';)'

It's just that you're taking what this guy says and making it mean what you want it to mean in order to support your point of view, while the general Nvidia crowd throw out the usual conspiracy-theory and agenda accusations whenever something negative is said about Nvidia's practices, and demand absolute proof.
Take the Origin PC thing, for example. How blatant was that? Yet the green defenders wanted nothing short of an official statement from Nvidia saying 'Yeah, we did it', or it must all be conspiracy-theory BS from AMD fanboys...

All I'm saying is that there is bias on both sides of the argument here, and people are ignoring anything that doesn't support their pre-defined POV while trying their hardest to put down anyone who thinks otherwise.

Can you explain how a 770 is faster than a 290X with FXAA in BAO? Whatever the NV advantage is with FXAA, we're still talking about something that only costs a couple of fps with AMD cards.

Even if they did put him onto it, why is that such a bad thing? He reckons the concerns are valid, and if they are...?
Have Nvidia never put anyone onto anything? How about the FCAT frame-times thing, for example?

With BAO, the only way that AMD are running level or even slightly ahead is that they're better at MSAA (according to them).
Sure, these are just benchmark scores and most decent cards can run the game well enough because it really isn't that hugely demanding tbh.
What happens when this is being applied to a game that does require more GPU beef?
Again, a 770 shouldn't be beating a 290X.

As it stands, yes, I agree that it's a little overblown right now. However, that doesn't mean it definitely isn't an ominous sign of things to come.

I know you'd want to avoid a situation like that just as much as me or anyone else who doesn't sport a GPU brand on their forehead :)

Very well said, Petey, but the brand defenders are here in force and won't listen to reason or common sense. They're far too dug in, so it's a case of ignore, deflect, ignore, deflect, etc.



So it is totally OK for AMD to develop extra IQ features, knowing full well that they would hurt Nvidia users, because "it's open", even when it makes Nvidia's top card look like an AMD mid-range card,
but it isn't OK for Nvidia to develop extra IQ features, even when multiple benches across a range of games and settings show that the 290X still beats a Titan.

And now that Nvidia's top cards can run TressFX without a massive performance hit, it hasn't been seen in any other game. Funny, that.

If you really can't see the bias in your own statements then there really is no point in having any kind of discussion.

The hypocrisy is off the scale. I don't mind people having a brand preference - I prefer Nvidia - but to dress it up as "concern for gamers" is laughable.

Both sides work with developers, and across a range of games you end up with a slight bias towards one range of cards or the other; it is not surprising. The claim that GW totally prevents AMD from any optimisation, and that as a result all AMD cards are at a disadvantage, really isn't backed up by benches. You end up having to run very specific settings and only compare specific cards for the data to support the accusation; with a wider data set, no pattern emerges.

But no, you are totally right: Nvidia should be banned from doing any work that benefits their customers, whereas AMD should be given awards for developing features and tools that block out or hurt performance for Nvidia customers. That sounds completely fair.

Andy, Andy, Andy. The extra IQ features, as you call them, are part of DirectX. I don't see how you can compare that to GameWorks, which is a closed library that AMD are unable to optimise for. It's pretty clear you don't understand. I've tried to explain it to you several times, but you just ignore what I'm saying and then go off on a tangent about AMD only using DirectCompute because mid-range Kepler cards had that feature gimped at Nvidia's request. It's madness.

Even funnier that you now speculate that, because Nvidia have optimised their drivers for TressFX, AMD have decided to stop using it. Actually, AMD are working on a new and improved version of TressFX which is less demanding; it will be appearing in Star Citizen. If AMD had used a closed library like GameWorks, and not an open API like DirectCompute, Nvidia would never have been able to optimise their drivers for it.

As for the claims about GameWorks, they are backed up perfectly by the article, by TechSpot and by user benches. It's only when AA is applied that AMD are able to overpower the GameWorks advantage in this particular title and pull ahead through brute force. It's not normal for a 660 to beat a 7950 Boost.
 
TressFX is not part of DX11; it is a specific feature that was added to a game by AMD... The article you linked to even goes into detail about how AMD hurt their own users' performance, and the upshot of that was that it hurt Nvidia users more (sound familiar?).

Nvidia's performance on the 780 vs the 680 has nothing to do with driver optimisation; the 680 lacked hardware (at developers' and gamers' request) that could run that code efficiently. The TressFX release was specifically targeted at making Nvidia's cards look poor value and AMD's cards look better value to potential customers - a marketing tool.

Being able to optimise your drivers for a feature is irrelevant if your hardware lacks the capability to run it - the same as with TressFX at the time, and the same as with Mantle for God knows how long.

Mantle being "open" is only a valid point once AMD add support for Intel and Nvidia GPUs.

As others have pointed out, there are lots of "closed libraries" in use by developers other than Nvidia's, and yet both sides have no problem optimising drivers for those.

This entire thread is purely a case of your rabid support for anything AMD and your inherent need to have a pop at Nvidia, but you can't have it both ways.

I bought Nvidia because I see the value in their extra features; you bought AMD because you didn't. You can't then complain when game devs leverage Nvidia's tools to make their games and, at specific settings in specific circumstances, the cards you own don't perform quite as well as they do in some other games.
AMD do the exact same thing, and it is only your rabid support of all things rose-tinted that prevents you from accepting that.

TressFX uses DirectCompute, which is part of DirectX 11. So yes, AMD used part of DirectX 11. That's completely different from what you're saying. GameWorks is not part of the DirectX 11 API; it's a closed library which neither the dev nor AMD can optimise for. It couldn't be more different. Even if a developer wanted to help AMD with optimisations, it couldn't. That is vendor lock-in right there. It's not even in the same ballpark as AMD using part of DX11 (the most OPEN API going, accessible to everyone - devs and Nvidia's drivers) to create hair physics.
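For anyone unsure what "part of DX11" actually means here: DirectCompute is just the standard compute path of the D3D11 API, so the calls a TressFX-style effect makes are ordinary DirectX 11 calls that any vendor's DX11 driver implements and can profile. A minimal sketch (hypothetical helper, compiled cs_5_0 bytecode and error handling assumed; not TressFX's actual code):

    #include <d3d11.h>

    // Dispatch an already-compiled compute shader - every call here is plain D3D11,
    // visible to AMD, Nvidia and Intel drivers alike.
    void RunComputePass(ID3D11Device* device, ID3D11DeviceContext* ctx,
                        const void* csBytecode, SIZE_T csSize)
    {
        ID3D11ComputeShader* cs = nullptr;
        device->CreateComputeShader(csBytecode, csSize, nullptr, &cs); // standard D3D11 entry point

        ctx->CSSetShader(cs, nullptr, 0);  // bind the compute shader
        ctx->Dispatch(64, 1, 1);           // launch 64 thread groups (e.g. hair simulation work)

        ctx->CSSetShader(nullptr, nullptr, 0); // unbind
        if (cs) cs->Release();
    }

That's the whole point: the work goes through public API entry points, not through a vendor-only library.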

The problem with TressFX was actually Nvidia's drivers. The creators of the game worked closely with Nvidia to optimise drivers shortly after the game was released. The fault lies with Nvidia for lacking proper DX11 DirectCompute support, not with AMD for using the most open standard API going.

http://www.ausgamers.com/news/read/...ressfx-memory-costing-plus-new-nvidia-drivers

We’ve been working closely with NVIDIA to address the issues experienced by some Tomb Raider players. In conjunction with this patch, NVIDIA will be releasing updated drivers that help to improve stability and performance of Tomb Raider on NVIDIA GeForce GPUs. We are continuing to work together to resolve any remaining outstanding issues. We recommend that GeForce users update to the latest GeForce 314.21 drivers (posting today) for the best experience in Tomb Raider.

So there we have it: the dev able to work with Nvidia to optimise drivers after launch, like it always is. Only this never happens with GameWorks, as most of it is locked down so that only Nvidia can optimise for it. See the problem now? Of course you don't. You still think AMD used DX11 to harm Nvidia.

AMD do not do the same thing, and only your inability to understand what's going on prevents you from seeing that. You're so entrenched in your beliefs that you cannot listen to reason, preferring instead to ignore and deflect everything I say, only to then try and turn it around by saying 'look, AMD do the same thing' when it's clearly different. If you think using DX11 is similar to using GameWorks, which blocks dev and AMD driver optimisation, then you're beyond debating with; if you don't get it by now, you never will.
 
Take the Splinter Cell: Blacklist performance, for example, with the 7970 GHz being slower than the GTX 760; most wouldn't want to talk about it. Imagine if it was BF4, or another EA title, where the GTX 780 was slower than the 7950... people would be bashing AMD into the ground for nerfing Nvidia's performance.

I can explain this one. Splinter Cell is not actually a 'full' GameWorks title; however, it does use GameWorks HBAO, something that AMD cannot optimise for and can do nothing about at driver level or with dev support. I don't know for sure, but I'd be willing to bet that if you disable HBAO things would even up a bit in that game, or maybe AMD would even take the lead. It is a TWIMTBP title though, so that might be a little optimistic.
 
It IS the same thing. Mantle locks Nvidia out; it is a benefit for AMD users, the same as PhysX or whatever is a benefit for Nvidia users. Nvidia are currently locked out of Mantle; until that changes, they are the exact same thing.

I have no problem with AMD users getting the benefit of Mantle, the same as I have no problem with Nvidia users getting the benefit of G-Sync, PhysX, TXAA or whatever else Nvidia come up with. If a Mantle game comes out that I really want but can't play on my Nvidia hardware, then I'll buy AMD hardware to play it, the same as I have in the past bought new Nvidia cards to play Nvidia-supported titles.

The big difference is that Mantle will not harm Nvidia performance on ANY API. It will not prevent Nvidia from optimising for specific features at DEV or DRIVER level. Mantle will be completely independent of DX11. Once Mantle is finalised, it will be released and Nvidia will be able to support it if they wish. They won't though, because they're only interested in proprietary tech and vendor lock-in. They will even lock their own users out of PhysX if they detect an AMD card in the PC. It doesn't matter that you own an Nvidia GPU; because you have an AMD GPU present, you must be locked out. That's how they do things, and Mantle will not make them change this approach, imo.
 
They won't use it because it is designed around AMD's GCN, unless Nvidia redesign their new cards to use it, which won't happen.

Johan Andersson, the DICE dev who has close ties to both AMD and Nvidia, says Mantle can work fine with other vendors, i.e. Nvidia. Considering he helped AMD create Mantle, I'd say his opinion holds considerable weight.
 
I have no doubt it will work, but will a 7950 beat a 780 Ti using Mantle? That is the question, and it's the same one you're pointing at with Batman.

I doubt it, but I can't answer for sure as I don't have a crystal ball. Regardless, an Nvidia GPU would still see big gains compared to the performance it gets on DX11. It will be up to Nvidia to optimise their drivers to perform well on Mantle. It might be hard to equal AMD's performance, but who knows for sure. The important thing is that they will be able to do it, and AMD will not block them from doing so. Vendor lock-in benefits no one. We need Mantle to be a success to benefit the gaming industry and move it forward. Taking a GameWorks approach and blocking Nvidia would benefit no one apart from fanboys who think it would be nice to get one over on Nvidia.
 
When?
Because as of today, and for the foreseeable future, Mantle is locked to AMD hardware... WHEN (IF) they release it for Nvidia cards, then the argument has weight, but as of today everything you are claiming about GameWorks is equally, if not more, true of Mantle.

Do you really think that AMD are going to give Nvidia the entire source code for Mantle? Or will AMD be the gatekeepers of adding Nvidia support to Mantle? Because that is what you are asking Nvidia to do with GameWorks: give AMD complete access to the source code.

Of course they will give them the code. For the API to succeed, it's going to have to be available to everyone; it will then be up to Nvidia to support it. For Mantle to be a success it needs to reach as wide an audience as possible, and AMD know that and have said as much. Johan Andersson also stated the same thing several times during his presentation. Nvidia hold an important share of the PC market, so locking them out would make no sense.

One last big advantage that Mantle has over Glide is that it’s open. Any graphics silicon manufacturer can add Mantle support to their chips, and that doesn’t just mean nVidia! It means that Intel, ARM, Qualcomm, Samsung, Texas Instruments, and everyone else all have the equal opportunity to add Mantle support to their GPUs (likely in addition to Direct3D 11 and OpenGL 4.0).

Full Article
http://wccftech.com/mantle-destined-follow-glide-grave/
http://wccftech.com/amd-mantle-demo-game-gpubound-cpu-cut-2ghz/
 
IF Mantle ran on Nvidia (and seriously, we don't even know what it's going to do for AMD performance yet), does anyone actually believe it will ever run as well/better on Nvidia than it will on AMD? I certainly don't.

Where is the guarantee of that? It's like saying PhysX will run on a CPU. It does, but not as well.

Mantle will almost certainly be to Nvidia users what any Nvidia-optimised feature is to AMD users.

But such is life. If it were THAT important to me, I'd own AMD.

It may not perform as well as it does on AMD cards, but it should still be a big improvement over DX11. As AMD have designed Mantle, I think that's only natural; I rarely expect AMD cards to perform better in TWIMTBP titles either. However, once a game is released, AMD can at least optimise performance to run well. With GameWorks that ability to optimise is limited, which is what the article reported on. It makes no sense to block AMD from optimising.

Regarding the guarantee? Well, AMD and Johan have both said it. I did have an in-depth interview with AMD in which they said it will become open and will be available for Nvidia to support, but I can't find it offhand. I posted it in the Mantle thread; I'll post it again if I can find it.
 
I know. The thrust of Matt's argument is that he believes AMD should have access to the full source code for all of Nvidia's GameWorks features so that they can optimise for them, not realising that no one gives out source code for anything, yet vendors can still optimise for it. It's a complete non-point.

No, you misunderstand, or just choose to ignore what I'm saying. It is not normal practice to block AMD, at dev or driver level, from being able to provide optimisations for a game, especially when Nvidia are still able to do so while using GameWorks. It's pretty basic stuff. Up until now, this is not how it has worked. The normal practice with sponsored titles is that once the game launches, the other side is able to optimise drivers and/or work with the dev to improve performance. See the Tomb Raider example we talked about before. If AMD had put GameWorks-style libraries in Tomb Raider, Nvidia would not have been able to improve TressFX performance in cooperation with the developer. With GameWorks this is no longer possible, even if the DEV WANTS TO HELP AMD. This is the problem.
 
If Joel had started his article by making it clear that these weren't his own findings, but that AMD had made him aware of them, a lot of users' views might have been swayed. It makes him look like a puppet, surely. The whole thing is a whirlwind of wind-up.

Some proper clarification of the GameWorks standpoint needs to come from NV themselves, or possibly from a developer other than WB Studios.

So it's not the findings, it's who made him aware of them that's the issue?

Lol, trying to discredit him because he's reported something that reflects poorly on Nvidia. :D


 
Matt, on this very page you've stated that you believe Nvidia should provide the source code for GameWorks to AMD and that AMD will provide Nvidia with the full source code for Mantle - AMD have stated that this is not the case. MS don't hand out the source code for DX; it is totally not required.

AMD can run a GameWorks title on their own hardware, inspect what it is doing and alter their drivers to suit - the article talks about AMD asking WB to change the game's code to add in AMD's alternative, and WB declined. That is a matter between WB and AMD; it has nothing to do with NV.

Looking at a range of GameWorks titles on a range of cards and settings, there are situations where certain settings favour one set of hardware or the other; looking at GE titles, the same occurs. The only way this becomes "an issue" is if you stick your blinkers on and only look at a tiny subset of data instead of the whole picture.

There are plenty of libraries that devs use that AMD or Nvidia don't get the source code for; it IS normal practice.

Well, you can download the SDK code for TressFX, Andy, yet a few posts back you were arguing it was the same as GameWorks. Can you direct me to the SDK for GameWorks?

When Mantle is released, Nvidia will be able to examine it in full so they can optimise for it. The same goes for Intel and all the other players.

Andy, you need to re-read the article, as you're incorrect. I'm not going over it again; read it yourself, and check the comments section as well. Joel clearly explains which parts of the GameWorks library are locked away from devs and AMD, even at driver level. So AMD cannot even optimise at driver level for several parts of GameWorks, though what can be done seems to vary from game to game: tessellation, HBAO, multi-GPU scaling and probably other things I'm not aware of. That explains why Crossfire just does not work in some GW titles. Call of Duty ran flawlessly on AMD hardware and Crossfire (scaling and performance were always outstanding) until it became a GameWorks title. Now look at the 7990. Aside from that, the game just runs horribly on AMD. So GameWorks got SLI working but didn't optimise for Crossfire. That's OK, AMD can optimise at driver level, right? Wrong, it's GameWorks.


[attached: benchmark screenshots]
 
Ghosts didn't work in SLI for the entire time it took me to complete the single-player campaign. I can't say I've reinstalled it to check what the situation is now, but at launch it was the same situation for both vendors (performance scaling might have been good, but SLI led to scope glitches that made the game unplayable with SLI on).

Same with BF4: I still get artefacting with SLI. Is that AMD's fault, or DICE's?

Ghosts launched on the 5th. When Guru3D did their review on the 7th, SLI was working, as I posted. Perhaps you were not using the correct driver version; Guru3D noted that 'GeForce cards use the latest 331.70 Beta driver.' Regardless, Ghosts is a mess for AMD. The GameWorks effect.

Everyone still gets flickering textures from time to time in BF4; that's a game/multi-GPU issue, I believe. I get the same. If I remember correctly, SLI worked at launch with BF4, didn't it? Funny, that.

In fairness, it appears to be working on the 690 in Guru3D's results. But I still can't see how multi-GPU scaling issues can be blamed on any optimisations within the GW libraries.

It's explained in the article. It's also why Crossfire never works in one of the Assassin's Creed games yet SLI works fine. If SLI can work, Crossfire can work. The only obstacle is GameWorks and AMD being unable to provide complete optimisation. It appears they can provide some, especially for things which don't require game code or dev cooperation, like AA, etc., but there are key things, which I've mentioned and which were mentioned in the article, that cannot be optimised for, and herein lies the problem.
 
Jesus Christ. How is it a mess just because there isn't any Crossfire support? 'The GameWorks effect.' Care to explain to me how that is related to the GW libraries?

You're making the connection to BF4 just because Nvidia had an SLI profile available? How does that make any sense???

Incredible.

You need to read it to understand.

People look at DX11 or the poor performance of Crossfire in Arkham Origins, and they blame AMD's drivers without realizing that AMD *cannot* optimize the drivers for those functions without access to libraries and support from the developer.

I present two statements I can personally verify and a third I have no reason to distrust:

1). WBM did not return my emails.
2). WBM couldn't optimize the GW libraries, even if it wanted to. (Meaning the greater issue exists and is problematic regardless of developer friendliness to AMD).
3). AMDs ability to improve Crossfire or tessellation without WBM's assistance is limited.

How else can we explain Crossfire not working at all, or in some cases not very well, in GameWorks titles? Crossfire scaling was always excellent in COD, and superior to SLI, until the series started using GameWorks.
 
Improve? I'm at work, but I think you should probably look at when AMD released a Crossfire profile for Ghosts, as those frames make it look as though there wasn't a profile at all, let alone any optimisations. Thanks for linking back to the article, though.

Have you tried playing it on a Crossfire setup?

Because when I tried, it was horrible. The point is, it was never like this before. Something has changed.
 