AMD announces GPUOpen - Open Sourced Gaming Development

Isn't the question more around the ability of everything below the 980Ti to handle async compute? The AotS (Ashes of the Singularity) benchmark showed the 980Ti doing fine, but the cards below it in the line-up seemed to do quite a bit worse, as I recall.

The AotS benchmarks had DX12 slower than DX11 and lots of other ridiculous performance issues, and it's a heavily AMD-biased engine used for AMD marketing, so I really wouldn't read anything into AotS.

Cards below the 980Ti may fall behind in Fable because they have less fragment shader performance.
 
He's not talking about not having an alternative. The majority of GW effects can be done in other ways. How about having devs build their games without using GW, with all the alternative visual-quality stuff included, and then letting Nvidia GameWorks do its own bundle of joy for its users? Nvidia wouldn't want that, would they.

The whole friggin point of GameWorks is that developers don't have the resources to build their own in-house solution!

Is it really that hard a concept to grasp?

You have a choice of a bug-ridden, rushed-out game with vanilla visuals.
Or you take the same game and optionally get some additional eye candy at no extra cost. Whether you are on AMD or Nvidia, you can disable the additional visuals.
 
So with all that in mind, can anyone give us the answer to why async was available on the Xbone version of ROTTR and left out of the PC version?
Ask the developer; only they know. Anything else is just ridiculous conspiracy-theory idiocy.

Would it be that much of a stretch to say that Nvidia possibly asked CD to leave out anything that would give AMD cards a performance advantage at this point in time, as they did with Ashes of the Singularity?

Yes, it would be a far stretch, with absolutely no evidence whatsoever.

There could be any number of possible reasons:
*) The Xbox needed async in order to hit the minimum frame times; PCs don't.
*) The developer wants to release a DX11 game that can run on Windows 7 machines.
*) The developers aren't confident and experienced with DX12 on Windows and prefer the mature DX11 platform.
*) They tried DX12 and found it generally slow and harder to develop for.
*) They had a lot of technology that was built around DX11. For example, TressFX is DX11 only; there is no DX12 version.
*) They found DX12 drivers to be buggy and immature.
 
If we get games that run smoothly 99% of the time and the driver tweaks were just that... tweaks to make them run that little bit better, then I would love Nvidia to keep GW to itself. Not having PhysX has never bothered me one iota.

As we know, a lot of games are being sponsored by Nvidia and are running GW. Every game that runs GW has had problems. AMD only sponsor a handful of games, but those games have run on all cards very well with few issues. Battlefront springs to mind (it ran well apart from a few graphical issues, which have now been sorted). The Tomb Raider reboot in 2013 had a few initial problems but runs well overall. Nvidia hijacks ROTTR and what... we get stuttering in cutscenes and in gameplay... etc., etc.

I have had enough of the disease that is GameWorks, because it infects everything it touches. Nvidia, please keep your tech to yourselves. Please. If I wanted GW I would have bought an Nvidia card. All I want are games that run properly, so that I can enjoy them.
:)



Simple solution: turn off the GameWorks option in the game menu. Job done, no need to complain any more, and you get the same experience as if Nvidia had never helped the developer.
 
The AotS benchmarks had DX12 slower than DX11 and lots of other ridiculous performance issues, and it's a heavily AMD-biased engine used for AMD marketing, so I really wouldn't read anything into AotS.

Cards below the 980Ti may fall behind in Fable because they have less fragment shader performance.


Correction: Nvidia cards below the 980Ti may fall behind in Fable because they have less fragment shader performance, or whatever. All the AMD cards below the Fury X are beating their Nvidia counterparts quite comfortably in Fable DX12. Not sure if it is shader performance, since Nvidia shaders are supposedly faster.

http://www.anandtech.com/show/9659/fable-legends-directx-12-benchmark-analysis/2


 
Is it really that hard a concept to grasp?

Is it really that hard a concept to grasp that Nvidia's own customers wouldn't touch them with a bargepole when push comes to shove?

Non-apologies, class-action lawsuits, disabling overclocking...
 
Correction: Nvidia cards below the 980Ti may fall behind in Fable because they have less fragment shader performance, or whatever. All the AMD cards below the Fury X are beating their Nvidia counterparts quite comfortably in Fable DX12. Not sure if it is shader performance, since Nvidia shaders are supposedly faster.

I wonder why they didn't also test the GTX 980. I assume it would slot in between the 290X and Fury X though, which means it lines up rather well performance-wise in general.
 
A 390X in there would be giving the Fury X a run for its money, as it's usually around 10% or more faster than the 290X.

Results from PCPer include the 980:

http://www.pcper.com/reviews/Graphics-Cards/Fable-Legends-Benchmark-DX12-Performance-Testing-Continues/Results-1080p-Ultr

The 980 is slower than the 390X, and the 67 fps of the 980 makes it about the same as a 290X/390.

Ah, right about the 390X then; seems the 980 does sit around the 290X level if it's a stock card. All in all, not too bad for the lower Nvidia cards without async compute.

It's a bit concerning how quickly performance plummets on mid-range cards from both companies, though.

Hopefully with Polaris and Pascal they'll taper down more gracefully in DX12. Here's hoping anything in that range isn't a rebrand from the current line-up.
 

I've read that before. There is some very good information, some bad information, and a lot of conjecture, which, as I said, does not amount to proof.

It is well known that async compute on Maxwell is a hybrid software-hardware approach, but that does not make it slower or a limitation. It gives Nvidia a lot more flexibility in the scheduler, and it was done this way to support CUDA and industrial compute applications.

As for the context switching, it is much more complex than that article describes. It is not always a problem; it is just something the developer has to be aware of. Context switching is done by all multi-threaded applications on CPUs, for instance, and in general there isn't a problem. Developers just have to be much more careful about what they are doing under DX12 and take architectural differences into account. An async shader might run well on one GPU but not on another, just as developers can make heavy use of tessellation on Nvidia but not on AMD.
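
To make the "async shader" point a bit more concrete, here is a minimal D3D12 sketch in C++ (assuming a valid ID3D12Device* obtained elsewhere; the CreateQueues helper name is just for illustration). At the API level, "async compute" is nothing more than submitting work on a separate compute queue; whether the GPU actually overlaps it with graphics work, or serialises it with context switches, is entirely down to the hardware and driver, which is why the same code can behave very differently on Maxwell and GCN.

[code]
// Minimal illustration: create a graphics (direct) queue and a separate
// compute queue. Assumes `device` is a valid ID3D12Device* created elsewhere.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

HRESULT CreateQueues(ID3D12Device* device,
                     ComPtr<ID3D12CommandQueue>& graphicsQueue,
                     ComPtr<ID3D12CommandQueue>& computeQueue)
{
    // Direct queue: accepts graphics, compute and copy command lists.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    HRESULT hr = device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue));
    if (FAILED(hr)) return hr;

    // Separate compute queue: work submitted here *may* run concurrently with
    // the direct queue, but the API does not require the GPU to overlap it.
    D3D12_COMMAND_QUEUE_DESC compDesc = {};
    compDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    return device->CreateCommandQueue(&compDesc, IID_PPV_ARGS(&computeQueue));
}
[/code]

Synchronisation between the two queues is then done explicitly with fences, and that cross-queue scheduling is exactly where the per-architecture differences show up.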
 
Results from PCPer include the 980:

http://www.pcper.com/reviews/Graphics-Cards/Fable-Legends-Benchmark-DX12-Performance-Testing-Continues/Results-1080p-Ultr




The 980 is slower than the 390X, and the 67 fps of the 980 makes it about the same as a 290X/390.

The 380 is getting 60% of the 390X's performance.
The 960 is getting 60% of the 980's performance.

I really don't see any evidence of architectural differences in DX12 capability here at all. Fable Legends has a big drop in performance when going down to mid-range cards. Maybe that is a bug in the game, or maybe there are features of higher-end cards, like absolute fragment shader bandwidth, that really come into play.
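
For what it's worth, the comparison being made here is just two ratios. A trivial sketch of the arithmetic follows; the fps values are placeholders for illustration, not the charts' actual numbers, apart from the ~67 fps quoted for the 980.

[code]
// Illustrative only: placeholder fps values (read the real ones off the
// linked charts). The point is comparing mid/high ratios across vendors,
// not the absolute numbers.
#include <cstdio>

// Returns the mid-range card's performance as a fraction of the high-end card's.
double scaling(double midFps, double highFps) { return midFps / highFps; }

int main()
{
    std::printf("AMD    380/390X: %.0f%%\n", 100.0 * scaling(42.0, 70.0)); // placeholders
    std::printf("Nvidia 960/980:  %.0f%%\n", 100.0 * scaling(40.0, 67.0)); // 67 fps quoted above
    // If both vendors drop to roughly the same fraction going from high-end
    // to mid-range, the fall-off looks like a property of the game's workload
    // rather than a vendor-specific DX12 deficiency.
    return 0;
}
[/code]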
 
I've read that before. There is some very good information, some bad information, and a lot of conjecture, which, as I said, does not amount to proof.

It is well known that async compute on Maxwell is a hybrid software-hardware approach, but that does not make it slower or a limitation. It gives Nvidia a lot more flexibility in the scheduler, and it was done this way to support CUDA and industrial compute applications.

As for the context switching, it is much more complex than that article describes. It is not always a problem; it is just something the developer has to be aware of. Context switching is done by all multi-threaded applications on CPUs, for instance, and in general there isn't a problem. Developers just have to be much more careful about what they are doing under DX12 and take architectural differences into account. An async shader might run well on one GPU but not on another, just as developers can make heavy use of tessellation on Nvidia but not on AMD.

Which parts are bad?

From reading that, it seems it's trying to do separate things at the same time, which is what the async bit denotes to me, that causes the problems. It's all very well that it does the various tasks, but if the queue has to be one particular task all the time, it's not really asynchronous, is it?

So if there is a lot of asynchronous stuff to do, that's when the Maxwell cards will fall over, as they need to do said tasks in an ordered fashion.

Having to do something in software is generally a lot slower than doing it in hardware, no?

I'll be honest, most of it is over my head, but that's what I'm managing to understand.

Just a straight-up, no-BS answer from Nvidia when all this kicked off would have been nice: "Can Maxwell do asynchronous compute? Yes or no?"

Edit: Gone way off topic, sorry.
 
Just a straight-up, no-BS answer from Nvidia when all this kicked off would have been nice: "Can Maxwell do asynchronous compute? Yes or no?"

There isn't a straight-up answer to that, and especially not one that wouldn't fuel more speculation and misunderstanding.

At the end of the day, anyone who cares too much about DX12 performance should be looking beyond these generations anyhow - none of them has been built optimally for DX12. While AMD's architecture is a little better suited to it, 28nm is hamstringing them, and Maxwell's main focus has always been DX11 performance.
 
If we get games that run smoothly 99% of the time and the driver tweaks were just that... tweaks to make them run that little bit better, then I would love Nvidia to keep GW to itself. Not having PhysX has never bothered me one iota.

As we know, a lot of games are being sponsored by Nvidia and are running GW. Every game that runs GW has had problems. AMD only sponsor a handful of games, but those games have run on all cards very well with few issues. Battlefront springs to mind (it ran well apart from a few graphical issues, which have now been sorted). The Tomb Raider reboot in 2013 had a few initial problems but runs well overall. Nvidia hijacks ROTTR and what... we get stuttering in cutscenes and in gameplay... etc., etc.

I have had enough of the disease that is GameWorks, because it infects everything it touches. Nvidia, please keep your tech to yourselves. Please. If I wanted GW I would have bought an Nvidia card. All I want are games that run properly, so that I can enjoy them.
:)

+1
 
If we get games that run smoothly 99% of the time and the driver tweaks were just that... tweaks to make them run that little bit better, then I would love Nvidia to keep GW to itself. Not having PhysX has never bothered me one iota.

As we know, a lot of games are being sponsored by Nvidia and are running GW. Every game that runs GW has had problems. AMD only sponsor a handful of games, but those games have run on all cards very well with few issues. Battlefront springs to mind (it ran well apart from a few graphical issues, which have now been sorted). The Tomb Raider reboot in 2013 had a few initial problems but runs well overall. Nvidia hijacks ROTTR and what... we get stuttering in cutscenes and in gameplay... etc., etc.

I have had enough of the disease that is GameWorks, because it infects everything it touches. Nvidia, please keep your tech to yourselves. Please. If I wanted GW I would have bought an Nvidia card. All I want are games that run properly, so that I can enjoy them.
:)

Every GW game has problems, yet AMD-sponsored ones don't... Your ignorance of GW shows when you think that PhysX is all it is, by saying you're not bothered by not having it. PhysX is a fraction of GW; GW as a library has a hell of a lot going on. The reason AMD do not have issues is simply that they do not have anything in the game that could go wrong. I'll even ignore BF4 for you. Hopefully even you can see the difference between GW and nothing.

I do admire you for buying all the GW games and trying them for yourself. Nvidia thanks you too.
 
Every GW game has problems, yet AMD-sponsored ones don't... Your ignorance of GW shows when you think that PhysX is all it is, by saying you're not bothered by not having it. PhysX is a fraction of GW; GW as a library has a hell of a lot going on. The reason AMD do not have issues is simply that they do not have anything in the game that could go wrong. I'll even ignore BF4 for you. Hopefully even you can see the difference between GW and nothing.

I do admire you for buying all the GW games and trying them for yourself. Nvidia thanks you too.
Oh no, he's in a huff because someone said something against Nvidia. Never saw it coming.

Still, regardless of how many effects are in there or not, GameWorks has produced a few titles that have been pretty poor on release, whether you want to admit it or not. I'm not even saying they are all bad, but there are a few clear examples.
 