Why people are so surprised that Nvidia performs better in engines like Frostbite 3, Unity 5, Unreal Engine 4 and CryEngine.
Is that a question or a statement?

Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.
Are you sure about that? http://wccftech.com/amd-vega-10-vega-11-magnum/ - we could see a paper launch and limited availability pretty soon (pretty sure they want to get Vega 10 to market before the GTX 1080 Ti, and avoid a repeat of last year with the GTX 980 Ti pooping on the Fury X parade).
My main point was not to rely on just one game, and yet that's exactly what you are doing. I run a 1080 as well, everything at 1440p, everything maxed, and the quality is outstanding. How does that square with your comments?
What do you mean, DX12 is a double-edged sword? Apart from the fact that I will have to downgrade to Windows 10 for it.
^^ What most developers (exceptions being people like Carmack) actually wanted was better access to the lower-level stuff (and better multi-threading) while still mostly working in a higher-level abstraction layer - what they got instead largely forces developers to reinvent the wheel, including creating the raw resources from scratch :s
IMO DX12 and even Vulkan are likely to end up largely a failure.
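To give a feel for what that "reinventing the wheel" looks like, here's creating one vertex buffer under each API - a minimal sketch, names are mine, and a production DX12 path would additionally stage the data into a DEFAULT heap and fence the upload:

```cpp
#include <d3d11.h>
#include <d3d12.h>
#include <cstring>

// DX11: one call - the driver owns the upload, residency and hazard tracking.
ID3D11Buffer* MakeVB11(ID3D11Device* dev, const void* data, UINT bytes) {
    D3D11_BUFFER_DESC bd = {};
    bd.ByteWidth = bytes;
    bd.Usage = D3D11_USAGE_IMMUTABLE;
    bd.BindFlags = D3D11_BIND_VERTEX_BUFFER;
    D3D11_SUBRESOURCE_DATA init = { data, 0, 0 };
    ID3D11Buffer* vb = nullptr;
    dev->CreateBuffer(&bd, &init, &vb);
    return vb;
}

// DX12: the application picks the heap type, the initial state, and does the
// copy itself. (Kept on an UPLOAD heap for brevity - a real renderer would
// also copy into a DEFAULT heap and synchronise with a fence.)
ID3D12Resource* MakeVB12(ID3D12Device* dev, const void* data, UINT64 bytes) {
    D3D12_HEAP_PROPERTIES heap = {};
    heap.Type = D3D12_HEAP_TYPE_UPLOAD;
    D3D12_RESOURCE_DESC rd = {};
    rd.Dimension = D3D12_RESOURCE_DIMENSION_BUFFER;
    rd.Width = bytes;
    rd.Height = 1;
    rd.DepthOrArraySize = 1;
    rd.MipLevels = 1;
    rd.SampleDesc.Count = 1;
    rd.Layout = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;   // required layout for buffers
    ID3D12Resource* vb = nullptr;
    dev->CreateCommittedResource(&heap, D3D12_HEAP_FLAG_NONE, &rd,
                                 D3D12_RESOURCE_STATE_GENERIC_READ, nullptr,
                                 IID_PPV_ARGS(&vb));
    void* mapped = nullptr;
    D3D12_RANGE noRead = { 0, 0 };                // we won't read it back
    vb->Map(0, &noRead, &mapped);
    std::memcpy(mapped, data, bytes);
    vb->Unmap(0, nullptr);
    return vb;
}
```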
My guess is that if the so-called big Vega is 12 TFLOPS it ain't bad, but they need better or it will be a repeat of last year with the Fury X. After all, that was supposedly 9 TFLOPS but was outperformed by the 6 TFLOP 980 Ti in almost every situation.
If Nvidia get a 1080 Ti out at the same time AMD get their big chip out, it's going to be last year all over again. If Nvidia get the Ti out before AMD, they've already lost. They need their Vega card out ASAP to get the best lead.
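For reference, those headline numbers come from the standard peak-throughput formula: shader count x clock x 2 FLOPs per cycle (an FMA counts as a multiply plus an add). A quick sanity check against the stock specs suggests the figures quoted above are rounded:

```cpp
#include <cstdio>

// Peak FP32 throughput in TFLOPS: cores x clock (GHz) x 2 ops per clock.
static double tflops(int cores, double ghz) { return cores * ghz * 2.0 / 1000.0; }

int main() {
    std::printf("Fury X (4096 SPs   @ 1.05  GHz): %.1f TFLOPS\n", tflops(4096, 1.05));  // ~8.6
    std::printf("980 Ti (2816 cores @ 1.075 GHz): %.1f TFLOPS\n", tflops(2816, 1.075)); // ~6.1 at boost
    return 0;
}
```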
And 10-year-old DX11 is the future?
You're already way behind, and wrong, with that prediction.
DX12 / Vulkan is the same approach everyone already uses on consoles; it's not going to be a failure if it's already the standard.
It's like this: DX11 isn't going to serve Nvidia much longer, just as it already isn't serving AMD. GPUs will keep getting more powerful, and at that point even massively overclocked CPUs are no longer going to let those GPUs stretch their legs.
Even with the Pascal Titan X they are already having to run the fastest overclocked CPUs they can lay their hands on to make those slides look good.
DX12 and Vulkan will take off once we see something like Unreal Engine 5 with a completely rebuilt rendering engine designed from the ground up to make use of the new APIs.
Until then it's really stuck in the realm of studios whose hotshot graphics programmers have the luxury of a mandate to write a new renderer. Most AAA games are built on successive iterations of legacy engines. It's going to take a while for 'native' DX12 renderers to become the norm.
When you move to a higher resolution so you aren't CPU bound, or switch to a faster CPU, the Pascal cards blow the Fury away like it's not even funny - there is a long way to go before nVidia has any of those problems, and I'm not even sure AMD can catch up to the point where what you are saying is true before nVidia is two generations of hardware further on. Also, much of nVidia's CPU-side work - the driver-level feeding of things like shader queues - can effectively utilise the extra cores on e.g. 6-core (12-thread) Intel CPUs, so they aren't dependent on ever-faster CPUs for quite some time yet.
EDIT: Some of what you say applies quite a lot to Maxwell cards though - which is why I've not been a fan of the advice to buy the 980 Ti over the 1070 - in future DX12/Vulkan stuff the 980 Ti will lose out considerably to the Fury X and Pascal cards IMO, as this seems to suggest:
[image: DX12 benchmark chart]
(Also, due to the nature of the bottleneck, overclocking Maxwell won't gain anywhere near as much as it does in DX11.)
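For what it's worth, the DX12 answer to the same scaling problem moves that threading out of the driver and into the engine: command lists can be recorded on as many cores as you like, and only the final submission is serialised. A rough sketch - error handling, the actual draw recording, and allocator reuse all omitted for brevity:

```cpp
#include <d3d12.h>
#include <thread>
#include <vector>

// Record command lists on several threads, then submit them in one call.
// ID3D12Device creation methods are free-threaded, so this is safe.
void RecordInParallel(ID3D12Device* dev, ID3D12CommandQueue* queue, int workers)
{
    std::vector<ID3D12GraphicsCommandList*> lists(workers);
    std::vector<std::thread> threads;
    for (int i = 0; i < workers; ++i) {
        threads.emplace_back([&, i] {
            // Created fresh here for brevity; real engines pool and reuse these.
            ID3D12CommandAllocator* alloc = nullptr;
            dev->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                        IID_PPV_ARGS(&alloc));
            ID3D12GraphicsCommandList* cl = nullptr;
            dev->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT, alloc,
                                   nullptr, IID_PPV_ARGS(&cl));
            // ... record this thread's share of the draw calls here ...
            cl->Close();
            lists[i] = cl;
        });
    }
    for (auto& t : threads) t.join();

    // Submission is the only serial step.
    std::vector<ID3D12CommandList*> raw(lists.begin(), lists.end());
    queue->ExecuteCommandLists((UINT)raw.size(), raw.data());
}
```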
They already do have those problems - did you not look at that slide?
So Vega with 12 TFLOPS and even more tricks under its hood "should" beat 1080s in games like this even with a super-fast multi-core CPU. Using lesser CPUs, Vega will pull much further ahead of Pascal.
I'm not posting this to troll Nvidia, as some might think.
This illustrates the problem coming down the line for Nvidia, the same problem that has already plagued AMD in DX11.
Nvidia must improve their DX12 / Vulkan feature-level support, because relying on pre-emption alone (already very effective for Nvidia in DX11) is not going to work for them indefinitely.
Pre-emption is what Nvidia call 'Async Compute'; it's not the same thing AMD call Async Compute, but Nvidia would like you to think it is.
Pre-emption has its limits.
[image: Gears of War 4 DX12 benchmark chart]
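To make the distinction concrete at the API level: "async compute" in DX12 just means feeding a second, independent command queue. Whether the GPU actually runs both queues concurrently (as GCN's hardware schedulers can) or time-slices them via pre-emption is entirely down to the hardware. A minimal sketch - queues are created inline here for brevity (real code creates them once at startup), and the command lists are assumed to be recorded elsewhere:

```cpp
#include <d3d12.h>

// Submit pre-recorded graphics and compute work on two independent queues.
void SubmitAsync(ID3D12Device* dev,
                 ID3D12CommandList* const* gfxLists, UINT nGfx,
                 ID3D12CommandList* const* cmpLists, UINT nCmp)
{
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // 3D engine: draws go here
    D3D12_COMMAND_QUEUE_DESC cmpDesc = {};
    cmpDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // separate compute queue

    ID3D12CommandQueue* gfxQ = nullptr;
    ID3D12CommandQueue* cmpQ = nullptr;
    dev->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQ));
    dev->CreateCommandQueue(&cmpDesc, IID_PPV_ARGS(&cmpQ));

    // On GPUs with independent compute engines the two queues can execute
    // concurrently; on GPUs without them the scheduler falls back to
    // time-slicing (pre-emption), which is where the performance difference
    // this thread is arguing about comes from.
    gfxQ->ExecuteCommandLists(nGfx, gfxLists);
    cmpQ->ExecuteCommandLists(nCmp, cmpLists);
}
```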
That's the only GOW4 chart where I've seen the Fury X ahead. We should have some benchmarks from users today - has anyone started a thread yet?