
AMD announces GPUOpen - Open Sourced Gaming Development

Well, if that is what you really want then you basically already have it, because GW features are already optional; if you don't like them you can turn them off. Most of the review sites already run some tests with GW and without.

Thank god, I was just imagining that most GW titles run like crap on AMD, so much so that a pattern was staring me in the face and pointing at GameWorks.
Weird, eh!
 
Wonder if we will see a rinse and repeat of the AMD and Maxwell users raging like the AMD and Kepler users raged about GW when high-end Pascal drops, but I suppose the high-end Pascal crowd could drown that out too...
 
Wonder if we will see a rinse and repeat of the AMD and Maxwell users raging like the AMD and Kepler users raged about GW when high-end Pascal drops, but I suppose the high-end Pascal crowd could drown that out too...

Funny that you brought that up, I was just thinking about it. Seeing how Nvidia dropped the ball on the 700 series, and the fact that Pascal will have asynchronous compute like GCN, I would expect the 900 series to suffer after the next Nvidia line-up shows up.
AMD of course will grace us with another round of rebrands along with Polaris, but the good news is that AMD users will probably see consistent performance from the 300 series with the new line-up.
Care to take a bet, tommy? :D
 
Funny that you brought that up, I was just thinking about it. Seeing how Nvidia dropped the ball on the 700 series, and the fact that Pascal will have asynchronous compute like GCN, I would expect the 900 series to suffer after the next Nvidia line-up shows up.
AMD of course will grace us with another round of rebrands along with Polaris, but the good news is that AMD users will probably see consistent performance from the 300 series with the new line-up.
Care to take a bet, tommy? :D

Nope, I expect AMD to give us a ground-up line-up with no refreshes. I think they have already confirmed this: two new GPUs to cover the whole line-up.

If they slip in a few rebrands it would not surprise me, but they are on record, I think, saying that won't be the case.

It's usually the low and mid end that get rebrands, but they have shown the low-end part, so I can't see many, if any, rebrands at release.
 
Nope, I expect AMD to give us a ground-up line-up with no refreshes. I think they have already confirmed this: two new GPUs to cover the whole line-up.

If they slip in a few rebrands it would not surprise me, but they are on record, I think, saying that won't be the case.

It's usually the low and mid end that get rebrands, but they have shown the low-end part, so I can't see many, if any, rebrands at release.
Edited: OK, never mind. I thought they had one specifically for desktop (Polaris 11) and Polaris 10 for laptops. OK, so they get 590/X and 480/X with Polaris; I'm guessing Hawaii goes to 470/X and Tonga to 460/X.
 
Funny that you brought that up, I was just thinking about it. Seeing how Nvidia dropped the ball on the 700 series, and the fact that Pascal will have asynchronous compute like GCN, I would expect the 900 series to suffer after the next Nvidia line-up shows up.
AMD of course will grace us with another round of rebrands along with Polaris, but the good news is that AMD users will probably see consistent performance from the 300 series with the new line-up.
Care to take a bet, tommy? :D

Well, in the only DX12 benchmark we have that gives actual performance measures for async compute, the 980Ti wins handily:
http://www.anandtech.com/show/9659/fable-legends-directx-12-benchmark-analysis

[AnandTech Fable Legends benchmark chart]


Dynamic global illumination uses an async compute shader and is much faster on the 980Ti.

The GBuffer pass is geometry-based, so it is no surprise that Maxwell pulls a substantial lead, and the other effects are pixel-bound, which the Fiji architecture is better at (which is why it scales better with resolution, narrowing the performance gap).



This is all based on a beta benchmark and is subject to change of course, but I'm fed up with hearing all these baseless myths and propaganda about Nvidia not handling async compute very well. Nvidia has supported async going all the way back to Fermi!
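For anyone unclear what "async compute" actually means at the API level, here is a rough D3D12 sketch in C++ (my own illustration, nothing from the AnandTech article; names and structure are just assumptions for the example). The point is simply that the engine creates a compute queue alongside the normal graphics queue, so a compute shader pass (GI, for example) can be scheduled in parallel with graphics work, with a fence to hand the results back. How much the two actually overlap is entirely down to the hardware and driver.

    // Minimal, hypothetical D3D12 async compute sketch (illustration only).
    // Build on Windows, link d3d12.lib; error handling omitted for brevity.
    #include <windows.h>
    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    int main() {
        ComPtr<ID3D12Device> device;
        D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

        // Normal graphics (direct) queue: geometry / G-buffer passes live here.
        D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
        gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
        ComPtr<ID3D12CommandQueue> gfxQueue;
        device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

        // Second, compute-only queue: "async" compute shaders (e.g. a GI pass)
        // are submitted here so the GPU is free to overlap them with graphics.
        D3D12_COMMAND_QUEUE_DESC compDesc = {};
        compDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
        ComPtr<ID3D12CommandQueue> compQueue;
        device->CreateCommandQueue(&compDesc, IID_PPV_ARGS(&compQueue));

        // Fence so the graphics queue only consumes the compute results once ready.
        ComPtr<ID3D12Fence> fence;
        device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

        // In a real frame you would record and ExecuteCommandLists() on each queue,
        // then synchronise roughly like this:
        //   compQueue->Signal(fence.Get(), 1);  // compute pass done
        //   gfxQueue->Wait(fence.Get(), 1);     // graphics waits before reading it
        return 0;
    }

Whether a given GPU actually runs those two queues concurrently or serialises them under the hood is exactly what these benchmark arguments are about.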
 
Nope, I expect AMD to give us a ground-up line-up with no refreshes. I think they have already confirmed this: two new GPUs to cover the whole line-up.

If they slip in a few rebrands it would not surprise me, but they are on record, I think, saying that won't be the case.

It's usually the low and mid end that get rebrands, but they have shown the low-end part, so I can't see many, if any, rebrands at release.

There is a big cost saving if they can shift all their chips to 14/16nm, and since they have lost a lot of market share to Nvidia on the back of the power-hungry old architecture, I'm sure they are going to push everything to the Polaris architecture on 16nm. It's not trivial making a direct optical shrink from one node to the next; there is already going to have to be a lot of work, so if they are going to do that then they might as well switch to Polaris.
 
So what you are saying then, D.P, is that NV can do async compute fine and has no problems with context switching of any sort at the hardware level?
 
Well, in the only DX12 benchmark we have that gives actual performance measures for async compute, the 980Ti wins handily:
http://www.anandtech.com/show/9659/fable-legends-directx-12-benchmark-analysis

[AnandTech Fable Legends benchmark chart]


Dynamic global illumination uses an async compute shader and is much faster on the 980Ti.

The GBuffer pass is geometry-based, so it is no surprise that Maxwell pulls a substantial lead, and the other effects are pixel-bound, which the Fiji architecture is better at (which is why it scales better with resolution, narrowing the performance gap).



This is all based on a beta benchmark and is subject to change of course, but I'm fed up with hearing all these baseless myths and propaganda about Nvidia not handling async compute very well. Nvidia has supported async going all the way back to Fermi!

Isn't the question more around the ability of everything below the 980Ti to handle async compute? The AotS (Ashes of the Singularity) benchmark showed the 980Ti doing fine, but the cards below it in the line-up seemed to do quite a bit worse, as I recall.
 
Well, in the only DX12 benchmark we have that gives actual performance measures for async compute, the 980Ti wins handily:
http://www.anandtech.com/show/9659/fable-legends-directx-12-benchmark-analysis

[AnandTech Fable Legends benchmark chart]


Dynamic global illumination uses an async compute shader and is much faster on the 980Ti.

The GBuffer pass is geometry-based, so it is no surprise that Maxwell pulls a substantial lead, and the other effects are pixel-bound, which the Fiji architecture is better at (which is why it scales better with resolution, narrowing the performance gap).



This is all based on a beta benchmark and is subject to change of course, but I'm fed up with hearing all these baseless myths and propaganda about Nvidia not handling async compute very well. Nvidia has supported async going all the way back to Fermi!

From what I've read, Maxwell is faster on small workloads but GCN is better at larger workloads. Again, my opinion is based on what I've read, like yours; when the games are out we will see how it goes anyway.
 
Care to take a bet, tommy? :D

Naw mate, I gave up gambling when I chucked AMD. :p

Isn't the question more around the ability of everything below the 980Ti to handle async compute? The AotS (Ashes of the Singularity) benchmark showed the 980Ti doing fine, but the cards below it in the line-up seemed to do quite a bit worse, as I recall.

The same AnandTech Fable article he used showed anything Nvidia outwith GM200 performing worse than AMD.
 
Well, in the only DX12 benchmark we have that gives actual performance measures for async compute, the 980Ti wins handily:
http://www.anandtech.com/show/9659/fable-legends-directx-12-benchmark-analysis

[AnandTech Fable Legends benchmark chart]


Dynamic global illumination uses an async compute shader and is much faster on the 980Ti.

The GBuffer pass is geometry-based, so it is no surprise that Maxwell pulls a substantial lead, and the other effects are pixel-bound, which the Fiji architecture is better at (which is why it scales better with resolution, narrowing the performance gap).



This is all based on a beta benchmark and is subject to change of course, but I'm fed up with hearing all these baseless myths and propaganda about Nvidia not handling async compute very well. Nvidia has supported async going all the way back to Fermi!

So with all that in mind, can anyone give us the answer to why async compute was available on the Xbone in ROTTR and left out of the PC version? Would it be that far of a stretch to say that Nvidia possibly asked CD to leave out anything that would give AMD cards a performance advantage at this point in time, as they did with Ashes of the Singularity?
 
The thing is, Nvidia can easily just stipulate that GameWorks cannot run on non-Nvidia (i.e. AMD) hardware, and Nvidia will simply concentrate on optimizing GW for their own architecture.

That straight away nullifies your concern. Nvidia is then not guilty of doing anything that hampers AMD's performance and is free to do what it likes with GW.

This way Nvidia is providing an incentive to buy Nvidia hardware without any question of purposeful performance sabotage. Would the AMD fanboys be happy with that solution? It seems like they might. The thing is, to date all the GW features have been optional, so AMD users already have this, but they also have the option of running GW if they want to.


And frankly, if AMD and their loyal fanbase really kick up enough fuss over this, that is the exact route Nvidia will take, like they have done with GPU PhysX. Nvidia will work with developers to incorporate GW features that only run on Nvidia hardware. That is completely fair game: you pay for an Nvidia GPU and Nvidia helps its customers with added fancy effects.

AMD is then free to work with developers on adding their own version.

And the developers are free to pick and choose what they want.

With their market share that would probably be seen as an anti-competitive practice, so they would probably face significant EU fines if they went this route. Honestly, I dislike Nvidia's business practices, but it seems odd that something can be totally legit for a smaller company but not for a bigger one. I guess things have to be that way to give smaller companies a chance, but it's still an odd world.
 
Well, if that is what you really want then you basically already have it, because GW features are already optional; if you don't like them you can turn them off. Most of the review sites already run some tests with GW and without.

He's not talking about not having an alternative. The majority of GW effects can be done in other ways. How about having devs build their games without using GW, putting in all the alternative visual quality stuff, and then letting Nvidia GameWorks do its own bundle of joy for its users? Nvidia wouldn't want that, would they?
 
If we got games that ran smoothly 99% of the time and driver tweaks were just that... tweaks to make them run just that little bit better... then I would love Nvidia to keep GW to itself. Not having PhysX has never bothered me one iota.

As we know, a lot of games are being sponsored by Nvidia and are running GW. Every game that runs GW has had problems. AMD only sponsor a handful of games, but those games have run very well on all cards with few issues. Battlefront springs to mind (it ran well apart from a few graphical issues which have now been sorted). The Tomb Raider reboot in 2013 had a few initial problems but runs well overall. Nvidia hijacks ROTTR and what... we get stuttering in cutscenes and in gameplay... etc., etc.

I have had enough of the disease that is GameWorks because it infects everything it touches. Nvidia, please keep your tech to yourselves. Please. If I wanted GW I would have bought an Nvidia card. All I want are games that run properly, so that I can enjoy them.
:)
 
He's not talking about not having an alternative. The majority of GW effects can be done in other ways. How about having devs build their games without using GW, putting in all the alternative visual quality stuff, and then letting Nvidia GameWorks do its own bundle of joy for its users? Nvidia wouldn't want that, would they?

The thing is, not every developer can manage it; problems arise and features, even entire engines, get scrapped. Then add in severe deadlines, and suddenly an ad hoc solution that can be "dropped in" and customised to fit seems rather nice.

Many know The Witcher 3 already had its own hair physics that they showcased years ago, but in the end they went with HairWorks. They obviously had their reasons, and considering they already had to scrap their entire rendering engine, we can clearly see they had severe development and timeframe issues.

It doesn't seem weird, then, that they might have had issues elsewhere as well that made them rely on the second-best option available. They already used a third-party asset streaming product called Umbra 3, so using HairWorks isn't out of place, considering it could also do fur on animals and monsters easily enough for them.

So yes, there are other ways; CDPR tried it and it just didn't work for them. Then again, unlike Ubisoft or Square Enix they're minuscule and had a tiny budget. Is it then alright for these massive AAA development houses to also take these easy-to-use but not so performance-friendly solutions?
Possibly not, as I think they really can do most things on their own. There are situations, though, that we as consumers might never get the full story about, that can lead to these implementations. Although I suspect that in cases like Ubisoft, upper management simply want the most cost-effective and time-saving implementation, which until now was Nvidia GameWorks.

I hope that with GPUOpen, more and more developers at least try to use AMD's open libraries to craft their own unique solutions, just like CD did with PureHair. It's simply a better solution than the ad hoc GameWorks features, although it might not always be as great as an in-house custom solution.
 
The thing is, Nvidia can easily just stipulate that GameWorks cannot run on non-Nvidia (i.e. AMD) hardware, and Nvidia will simply concentrate on optimizing GW for their own architecture.

That straight away nullifies your concern. Nvidia is then not guilty of doing anything that hampers AMD's performance and is free to do what it likes with GW.

This way Nvidia is providing an incentive to buy Nvidia hardware without any question of purposeful performance sabotage. Would the AMD fanboys be happy with that solution? It seems like they might. The thing is, to date all the GW features have been optional, so AMD users already have this, but they also have the option of running GW if they want to.


And frankly, if AMD and their loyal fanbase really kick up enough fuss over this, that is the exact route Nvidia will take, like they have done with GPU PhysX. Nvidia will work with developers to incorporate GW features that only run on Nvidia hardware. That is completely fair game: you pay for an Nvidia GPU and Nvidia helps its customers with added fancy effects.

AMD is then free to work with developers on adding their own version.

And the developers are free to pick and choose what they want.
Considering Nvidia do not do that and still do what I outlined, it doesn't really alleviate anything. When they do actually do this (as in the case of PhysX) I don't complain, as long as it's not damaging to the other side. Running yourself into a fantasy of what Nvidia could do and claiming that it nullifies any issues, when that isn't how they operate, is a bit of a funny argument, but oh well. And to comment on your other point, this approach could still leave some code (as it still needs to interact and work with the rest of the game) that could be harmful to AMD.

It's not about fanboys or not, but I can see what childish angle you are posing your argument from if you try to label everyone a fanboy for making fair points while promoting fantasies as reality. To reply to it, though: I'd rather they didn't close it off as Nvidia-only. We can still use it and play the game regardless of the performance impact, but I feel moving to open source is the better solution, as I clearly highlighted, because it prevents AMD from being shut out of optimising for games. GameWorks is a good thing; it's just being delivered by a competitor who is keeping it closed source, and that creates harmful and anti-competitive blockades for AMD.

Nvidia will decide what they do, and if they choose to restrict the use of their software then I'd be disappointed indeed; I'd rather they just took the mature and honest path of going open source so they can benefit everyone. If Nvidia aren't that kind of company then I'll choose to support AMD more, because they are. The idea you proposed of AMD having to counter Nvidia also leads to the obvious arms race of titles being gimped for competitors over and over again, and it dilutes the ability to properly optimise for titles. It'll be a very hit-and-miss thing for each side and a poor experience for consumers. It might benefit Nvidia, but any informed consumer should be smart enough to recognise which trends would be harmful for themselves and cause a poor future for the industry. If Nvidia are that kind of company then it's up to individuals to decide whether they want the extra effects at the cost of potentially seeing the industry go downhill over time. I prefer to buy with the future in mind, but not everyone does. No one's crying or fanboying here, just making factual observations. It's up to you guys to decide which cards you want, but don't be dumb enough to suck up PR and assume it's all to your benefit without having a thought in your head other than what they tell you to think.
 
Thank god, I was just imagining that most GW titles run like crap on AMD, so much so that a pattern was staring me in the face and pointing at GameWorks.
Weird, eh!

The pattern is that many AAA games are shoddily coded right now, and Nvidia is working with more developers than ever before, so it is bound to be the case that many games with GW are just junk; that's life. It has nothing to do with AMD, the games are an incomplete bug-fest on all platforms.
 