
Concerned by Nvidia's current state of DX12 and Vulkan performance

Are you sure about that? http://wccftech.com/amd-vega-10-vega-11-magnum/ - we could see a paper launch and limited availability pretty soon (pretty sure they want to get Vega 10 to market before the GTX 1080 Ti, and avoid a repeat of last year with the GTX 980 Ti pooping on the Fury X parade).

My guess is that if the so-called big Vega really is 12 TFLOPS it won't be bad, but they need better than that or it will be a repeat of last year with the Fury X. After all, that was supposedly around 9 TFLOPS but was outperformed by the ~6 TFLOP 980 Ti in almost every situation.

If Nvidia get a 1080 Ti out at the same time AMD get their big chip out, it's going to be last year all over again. If Nvidia get the Ti out before AMD, they've as good as lost. They need their Vega card out ASAP to get the best lead they can.
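For what it's worth, the paper TFLOPS numbers are just shader count × 2 ops per clock (FMA) × clock speed, which is exactly why they can diverge so far from game results. A quick sanity check, using launch specs from memory, so treat the outputs as ballpark figures:

```cpp
// Rough peak-FP32 sanity check: TFLOPS = shaders * 2 (FMA) * GHz / 1000.
// Shader counts and clocks are launch specs from memory, so ballpark only.
#include <cstdio>

static double tflops(int shaders, double ghz) {
    return shaders * 2.0 * ghz / 1000.0;   // 2 FLOPs per shader per cycle (FMA)
}

int main() {
    std::printf("Fury X  : %.1f TFLOPS\n", tflops(4096, 1.050)); // ~8.6
    std::printf("980 Ti  : %.1f TFLOPS\n", tflops(2816, 1.075)); // ~6.1
    std::printf("GTX 1080: %.1f TFLOPS\n", tflops(2560, 1.733)); // ~8.9
}
```

The same arithmetic puts a 12 TFLOPS Vega roughly a third above a stock GTX 1080 on paper, and the Fury X vs 980 Ti result shows how little that guarantees.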
 
My main point was not to rely on just one game, and yet that's exactly what you are doing. I run a 1080 as well, everything at 1440p, everything maxed, and the quality is outstanding. How does that fare against your comments?

It's a real-life example and the only game I'm playing right now, so it's the only one that's relevant to me. Which comments are you referring to? My feeling is that users of expensive cards are prone to claiming excessive performance, and your post is very interesting in that regard.

It's the only DX12 game I'm playing that makes me want to upgrade, but when I ask users of 1070s and 1080s how performance is in-game, one of them says they have to turn lots of settings down and another says they can max the game out – and not just on this forum – so something is amiss. OCUK have a 1080 for 549 now, which is actually good value for Nvidia, and since the €/£ rate is unreal at the moment that saves me even more, so it's actually starting to make sense price/performance-wise. Reviews are fine, but I prefer to also have the real-world experience from, inter alia, OCUK users, and this type of thread can be helpful. The problem is it feels like some people have invested so much in high-end cards that they make the performance out to be better than it really is – which is doubly ingenious of Nvidia, when you think about it:

http://www.guru3d.com/articles_page..._graphics_performance_benchmark_review,9.html

Guru3d benched the 1080 at 1440p at 57 FPS (DX12) on High settings (no MSAA) – never mind Very High or Ultra. No card can max the game out at 1440p with stable performance – except for yours, of course. You see the problem.
 
What do you mean DX12 is a double-edged sword? Apart from the fact that I will have to downgrade to Windows 10 for it.

One edge: you get increased power and control; removing the abstraction layers increases draw-call throughput and gets more out of the hardware.

Second edge: as Ben Parker said, "with great power comes great responsibility." The graphics programmers on a game project have to handle a lot of what the driver used to do, increasing the man-hours and/or skill needed. So the cost just to reach DX11 performance levels, never mind getting the most out of a direct-to-metal API, increases even if you have the capability.

It's fine on a fixed hardware device like a phone (you know what hardware each model has), but on a PC you don't know what GPU/CPU/RAM etc. the target machine will have out in the real world. Also, as soon as a new set of cards comes out, you have to go back in to stop your game running like ass.

In short, DX12 is great if you have the time/money/resources to implement it, and implement it well, but a lot of teams don't/won't have that and will stick to DX11. I know a couple of graphics programmers (one at a small Japanese studio and a second at a AAA RTS studio) and this is the sort of thing they're telling me.
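To make the "handle what the driver used to do" point concrete, here is a minimal sketch (plain C++ with hypothetical types, nothing from the real D3D12/Vulkan headers) of one chore a DX11 driver hides behind Map(DISCARD): recycling per-frame upload memory without overwriting data the GPU is still reading.

```cpp
// Minimal sketch (hypothetical types) of bookkeeping a DX11 driver used to do
// for you: a per-frame linear allocator for transient GPU data, plus a fence
// check so the CPU never reuses memory the GPU is still reading.
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstdio>

struct GpuFence {                      // stand-in for ID3D12Fence / VkFence
    uint64_t completed = 0;            // last frame the GPU has finished
    uint64_t last_completed() const { return completed; }
};

class FrameUploadRing {
public:
    FrameUploadRing(size_t bytes_per_frame, int frames_in_flight)
        : size_(bytes_per_frame), frames_(frames_in_flight) {}

    // Once per frame: in a real renderer you would wait on the fence here;
    // this sketch just asserts the GPU has kept up, then resets the cursor.
    void begin_frame(uint64_t frame_index, const GpuFence& fence) {
        if (frame_index >= (uint64_t)frames_)
            assert(fence.last_completed() + frames_ >= frame_index &&
                   "CPU is about to overwrite memory the GPU still reads");
        cursor_ = (frame_index % frames_) * size_;
        used_   = 0;
    }

    // Sub-allocate transient data (constants, dynamic vertices) for this frame;
    // returns an offset into one big persistently-mapped upload buffer.
    size_t alloc(size_t bytes, size_t align = 256) {
        size_t offset = (cursor_ + used_ + align - 1) & ~(align - 1);
        assert(offset + bytes <= cursor_ + size_ && "per-frame budget blown");
        used_ = offset + bytes - cursor_;
        return offset;
    }

private:
    size_t size_, cursor_ = 0, used_ = 0;
    int    frames_;
};

int main() {
    GpuFence fence;
    FrameUploadRing ring(1 << 20, 3);        // 1 MiB per frame, 3 frames in flight
    for (uint64_t frame = 0; frame < 5; ++frame) {
        fence.completed = frame;             // pretend the GPU keeps up
        ring.begin_frame(frame, fence);
        size_t cb = ring.alloc(256);         // constants for one draw
        std::printf("frame %llu: constants at offset %zu\n",
                    (unsigned long long)frame, cb);
    }
}
```

Under DX11 the driver versions that memory for you; under DX12/Vulkan the renderer owns every one of those decisions, for every piece of transient data, every frame – which is the man-hours cost being described.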
 
^^ What most developers (exceptions being people like Carmack) actually wanted was better access to the lower-level stuff (and better multi-threading) while still largely working in a higher-level abstraction layer - what they got instead largely makes developers reinvent the wheel, including creating the raw resources from scratch :s

IMO DX12 and even Vulkan are likely to end up largely a failure.
 
^^ What most developers (exceptions being people like Carmack) actually wanted was better access to the lower-level stuff (and better multi-threading) while still largely working in a higher-level abstraction layer - what they got instead largely makes developers reinvent the wheel, including creating the raw resources from scratch :s

IMO DX12 and even Vulkan are likely to end up largely a failure.

Indeed. Baby, bath water
 
^^ What most developers (exceptions being people like Carmack) actually wanted was better access to the lower-level stuff (and better multi-threading) while still largely working in a higher-level abstraction layer - what they got instead largely makes developers reinvent the wheel, including creating the raw resources from scratch :s

IMO DX12 and even Vulkan are likely to end up largely a failure.

And 10-year-old DX11 is the future?

You're already way behind and wrong with that prediction. :)

DX12 / Vulkan is the same thing everyone is already using on consoles; it's not going to be a failure if it's already the standard.

It's like this: DX11 isn't going to serve Nvidia much longer, just as it already isn't serving AMD. GPUs will keep getting more powerful, and with that even massively overclocked CPUs are no longer going to let those GPUs stretch their legs.

Even with the Pascal Titan X they are already having to run the fastest overclocked CPUs they can lay their hands on to make those slides look good.
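This is exactly the bottleneck the new APIs were designed to attack: instead of funnelling all submission work through one driver thread, the engine records command lists on as many threads as it likes and hands them over in one go. A minimal sketch of the idea - plain C++, and the CommandList type here is a made-up stand-in, not the real D3D12/Vulkan one:

```cpp
// Sketch of multi-threaded command recording, the DX12/Vulkan answer to the
// "CPU can't feed the GPU" problem. CommandList is a hypothetical stand-in.
#include <cstdio>
#include <string>
#include <thread>
#include <vector>

struct CommandList {                            // stand-in for a real command list
    std::vector<std::string> cmds;
    void draw(int object_id) { cmds.push_back("draw " + std::to_string(object_id)); }
};

int main() {
    const int num_threads = 4;
    const int num_objects = 10000;
    std::vector<CommandList> lists(num_threads);
    std::vector<std::thread> workers;

    // Each worker records draw commands for its slice of the scene independently.
    for (int t = 0; t < num_threads; ++t)
        workers.emplace_back([&, t] {
            for (int i = t; i < num_objects; i += num_threads)
                lists[t].draw(i);
        });
    for (auto& w : workers) w.join();

    // Single submission point: in a real renderer this is where the per-thread
    // lists get handed to ExecuteCommandLists / vkQueueSubmit in draw order.
    size_t total = 0;
    for (auto& l : lists) total += l.cmds.size();
    std::printf("recorded %zu draws across %d threads\n", total, num_threads);
}
```

Whether a given engine actually does this well is another matter, which is rather the point of this thread.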
 
My guess is that if the so-called big Vega really is 12 TFLOPS it won't be bad, but they need better than that or it will be a repeat of last year with the Fury X. After all, that was supposedly around 9 TFLOPS but was outperformed by the ~6 TFLOP 980 Ti in almost every situation.

If Nvidia get a 1080 Ti out at the same time AMD get their big chip out, it's going to be last year all over again. If Nvidia get the Ti out before AMD, they've as good as lost. They need their Vega card out ASAP to get the best lead they can.

I think that was just down to Nvidia having an architecture and drivers more suited to DX11 (AMD have been pursuing parallelism for quite some time now). In DX12 workloads that difference seems to pretty much disappear, so I'd argue they're on a roughly equal footing in DX12 titles. Going to be interesting to see how the new "Fury" performs in DX12 titles.
 
This isn't meant to troll Nvidia, as some might think.
It illustrates the problem coming down the line for Nvidia: the same problem that has already plagued AMD in DX11.

Nvidia must improve their DX12 / Vulkan feature-level support, as relying on pre-emption alone (already very effective for Nvidia in DX11) is not going to work for them indefinitely.

Pre-emption is what Nvidia call 'asynchronous compute'; it's not the same thing AMD call asynchronous compute, but Nvidia would like you to think it is.

Pre-emption has its limits.



[attached image: o7te_ES.png]
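To be clear about what's actually being contrasted here, a toy model (plain C++, no real API calls) of the difference: with asynchronous compute the application submits graphics and compute work to separate queues and the GPU can run them side by side on otherwise-idle units, whereas with pre-emption there is effectively one queue and the compute work has to interrupt the graphics work. The numbers and workload names below are made up purely for illustration:

```cpp
// Toy timing model of async compute (two queues overlapping) versus
// pre-emption (one queue, work time-sliced). Not real API code; switch
// overheads and imperfect overlap are ignored.
#include <cstdio>
#include <string>
#include <vector>

struct Workload { std::string name; int ms; };    // hypothetical GPU job

// Two queues: graphics and compute overlap, so total ~= the longer of the two.
int run_two_queues(const std::vector<Workload>& gfx, const std::vector<Workload>& comp) {
    int gfx_ms = 0, comp_ms = 0;
    for (const auto& w : gfx)  gfx_ms  += w.ms;
    for (const auto& w : comp) comp_ms += w.ms;
    return gfx_ms > comp_ms ? gfx_ms : comp_ms;
}

// One queue: compute has to pre-empt graphics, so everything runs back to back.
int run_one_queue(const std::vector<Workload>& gfx, const std::vector<Workload>& comp) {
    int total = 0;
    for (const auto& w : gfx)  total += w.ms;
    for (const auto& w : comp) total += w.ms;
    return total;
}

int main() {
    std::vector<Workload> gfx  = {{"g-buffer", 6}, {"lighting", 5}, {"post", 3}};
    std::vector<Workload> comp = {{"particles", 2}, {"light culling", 3}};
    std::printf("two queues (async compute): ~%d ms/frame\n", run_two_queues(gfx, comp));
    std::printf("one queue  (pre-emption)  : ~%d ms/frame\n", run_one_queue(gfx, comp));
}
```

It's a deliberately crude model - real pre-emption isn't free and real overlap isn't perfect - but it shows why relying on pre-emption alone has limits as more engines move work onto compute queues.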
 
And 10-year-old DX11 is the future?

You're already way behind and wrong with that prediction. :)

DX12 / Vulkan is the same thing everyone is already using on consoles; it's not going to be a failure if it's already the standard.

It's like this: DX11 isn't going to serve Nvidia much longer, just as it already isn't serving AMD. GPUs will keep getting more powerful, and with that even massively overclocked CPUs are no longer going to let those GPUs stretch their legs.

Even with the Pascal Titan X they are already having to run the fastest overclocked CPUs they can lay their hands on to make those slides look good.

Not saying DX11 is the future - not sure where you got that idea. I think we'll eventually see another revision of the APIs that gives mainstream developers much better choices about when and where they get their hands dirty, and/or lets them go in later, once things are up and running, to fine-tune lower-level performance, rather than forcing them to deal with all the driver-level work, memory management, etc. from the start. And/or eventually someone will build a comprehensive wrapper which won't have some of the benefits of the underlying API but will give developers more flexibility in approach and less time and effort spent reinventing the wheel.

It's a common misconception that everyone is working at the metal on consoles - many, many developers are simply using off-the-shelf engines and rarely go that deep themselves except for the odd specific feature, and even the engine devs who do spend a lot of time at a lower level have far less wheel-reinventing to do on console than the PC requires. This whole idea that developers want to be closer to the hardware is a load of BS hyped up by invested parties and applies to maybe 1% of game developers.
 

When you change to a higher resolution so you're not CPU bound, or switch to a faster CPU, the Pascal cards blow the Fury away like it's not even funny - there is a long way to go until nVidia has any of those problems. I'm not even sure AMD can catch up to the point where what you are saying is true before nVidia is two generations of hardware further on. Also, much of nVidia's CPU-side work - the driver-level feeding of things like shader queues - can effectively use the extra cores on e.g. 6-core (12-thread) Intel CPUs, so they aren't dependent on ever-faster CPUs for quite some time yet.

EDIT: Some of what you say applies quite a lot to Maxwell cards though - which is why I've not been a fan of the advice to buy the 980 Ti over the 1070 - in future DX12/Vulkan stuff the 980 Ti will lose out considerably to the Fury X and Pascal cards IMO, as this seems to suggest:

[attached image: VVDDgic.png]


(Also, due to the nature of the bottleneck, overclocking Maxwell won't gain anything like as much as it does in DX11.)
 
^^ What most developers (exceptions being people like Carmack) actually wanted was better access to the lower-level stuff (and better multi-threading) while still largely working in a higher-level abstraction layer - what they got instead largely makes developers reinvent the wheel, including creating the raw resources from scratch :s

IMO DX12 and even Vulkan are likely to end up largely a failure.

DX12 and Vulkan will take off once we see something like Unreal Engine 5 with a completely rebuilt rendering engine designed from the ground up to make use of the new APIs.

Until then it's really stuck in the realm of studios with hotshot graphics programmers who have the luxury of a mandate to write a new renderer. Most AAA games are built on successive iterations of legacy engines. It's going to take a while for 'native' DX12 renderers to become the norm.
 
DX12 and Vulkan will take off once we see something like Unreal Engine 5 with a completely rebuilt rendering engine designed from the ground up to make use of the new APIs.

Until then it's really stuck in the realm of studios with hotshot graphics programmers who have the luxury of a mandate to write a new renderer. Most AAA games are built on successive iterations of legacy engines. It's going to take a while for 'native' DX12 renderers to become the norm.

Yeah, but as you say a lot hinges on the people who make stuff like the Unreal Engine, and it isn't the case, despite the hype, that every developer is champing at the bit to get closer to the hardware. DX12 and Vulkan are almost the opposite of how most mainstream game developers tick: they want to quickly prototype and get stuff up and running, then go back and perfect it, rather than spending a lot of time on the nuts and bolts up front - which again is the realm of a small number of people at the top of the game.

EDIT: Even then, DX12 has gone about it completely the wrong way. People like Carmack like to prototype code fairly rapidly and then go back and get lower level with it later - if you look at some of the old id game source you can see, for instance, deprecated functions that he wrote in plain old C originally and later went back and replaced with hand-optimised inline ASM.
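That workflow - write it plainly first, hand-optimise the hot path later once it's proven to matter - looks roughly like this. Illustrative only: the function names are made up, and SSE intrinsics stand in here for the hand-written inline ASM id actually used:

```cpp
// Prototype-first, optimise-later: a plain-C-style reference version, then a
// hand-optimised drop-in replacement (SSE intrinsics here instead of inline ASM).
#include <cstdio>
#include <xmmintrin.h>   // SSE

// v1: the straightforward version you write first so the game is up and running.
static void scale_floats_ref(float* dst, const float* src, float s, int n) {
    for (int i = 0; i < n; ++i)
        dst[i] = src[i] * s;
}

// v2: the optimised replacement you drop in later, once profiling says this
// loop matters; the reference version stays around as the fallback.
static void scale_floats_sse(float* dst, const float* src, float s, int n) {
    __m128 vs = _mm_set1_ps(s);
    int i = 0;
    for (; i + 4 <= n; i += 4)
        _mm_storeu_ps(dst + i, _mm_mul_ps(_mm_loadu_ps(src + i), vs));
    for (; i < n; ++i)                   // scalar tail
        dst[i] = src[i] * s;
}

int main() {
    float in[6] = {1, 2, 3, 4, 5, 6}, out[6];
    scale_floats_ref(out, in, 2.0f, 6);
    scale_floats_sse(out, in, 2.0f, 6);
    std::printf("%g %g ... %g\n", out[0], out[1], out[5]);   // 2 4 ... 12
}
```

The complaint about DX12 is that it forces the "optimised" level of detail on you from day one, before you even know which parts of the frame are worth that effort.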
 
When you change to a higher resolution so you're not CPU bound, or switch to a faster CPU, the Pascal cards blow the Fury away like it's not even funny - there is a long way to go until nVidia has any of those problems. I'm not even sure AMD can catch up to the point where what you are saying is true before nVidia is two generations of hardware further on. Also, much of nVidia's CPU-side work - the driver-level feeding of things like shader queues - can effectively use the extra cores on e.g. 6-core (12-thread) Intel CPUs, so they aren't dependent on ever-faster CPUs for quite some time yet.

EDIT: Some of what you say applies quite a lot to Maxwell cards though - which is why I've not been a fan of the advice to buy the 980 Ti over the 1070 - in future DX12/Vulkan stuff the 980 Ti will lose out considerably to the Fury X and Pascal cards IMO, as this seems to suggest:

[attached image: VVDDgic.png]

(Also, due to the nature of the bottleneck, overclocking Maxwell won't gain anything like as much as it does in DX11.)

And so they should, but doesn't that graph just show it even more? The Fury with ~9 TFLOPS of power finally does what it was supposed to do and comes out 11% faster than a 980 Ti?

So Vega, with 12 TFLOPS and even more tricks under its hood, "should" beat 1080s in games like this even with a super-fast multi-core CPU. With lesser CPUs, Vega will pull much further ahead of Pascal.
 
When you change to a higher resolution so you're not CPU bound, or switch to a faster CPU, the Pascal cards blow the Fury away like it's not even funny - there is a long way to go until nVidia has any of those problems. I'm not even sure AMD can catch up to the point where what you are saying is true before nVidia is two generations of hardware further on. Also, much of nVidia's CPU-side work - the driver-level feeding of things like shader queues - can effectively use the extra cores on e.g. 6-core (12-thread) Intel CPUs, so they aren't dependent on ever-faster CPUs for quite some time yet.

EDIT: Some of what you say applies quite a lot to Maxwell cards though - which is why I've not been a fan of the advice to buy the 980 Ti over the 1070 - in future DX12/Vulkan stuff the 980 Ti will lose out considerably to the Fury X and Pascal cards IMO.

there is a long way to go until nVidia has any of those problems - I'm not even sure AMD can catch up to the point where what you are saying is true before nVidia is two generations of hardware further on.
They already do; did you not look at that slide?
AMD are already miles ahead of Nvidia here. Nvidia need a strong CPU to keep performance up; AMD don't. How is that AMD needing to catch up with Nvidia in the context I'm using? It's the other way round.


A stock FX-8370 is a lot slower than a 4.6 GHz Intel 6-core, but how much slower does it have to be for the Fury X, with proper asynchronous compute, to literally turn its usual standing against the GTX 1080 on its head?
There the Fury X is so much faster than the GTX 1080 it's not even funny.

The point is AMD have a lot more draw-call headroom than Nvidia in DX12 / Vulkan.

Drop this to 1080p and I bet the Fury X will catch the 1080. It doesn't take a lot for AMD to catch and overtake Nvidia; in fact Nvidia have to work at staying ahead and try to make sure no one benches anything that isn't GPU-limited, because if it isn't, AMD overtake them.

It's the same sort of problem AMD have in DX11: they only do well if the load is all GPU-side.

[attached image: VVDDgic.png]


[attached image: o7te_ES.png]
 
So Vega, with 12 TFLOPS and even more tricks under its hood, "should" beat 1080s in games like this even with a super-fast multi-core CPU. With lesser CPUs, Vega will pull much further ahead of Pascal.

Look at where the 480 is - they narrowed some of the pipelines with Polaris, and that seems to be the direction they are taking in future. I suspect Vega will be a bit more sensitive to CPU performance than the Fury X would suggest - at least with Polaris they seem to have gone for the more sensible option of betting on both approaches, and everything suggests that's the future direction.
 
This isn't meant to troll Nvidia, as some might think.
It illustrates the problem coming down the line for Nvidia: the same problem that has already plagued AMD in DX11.

Nvidia must improve their DX12 / Vulkan feature-level support, as relying on pre-emption alone (already very effective for Nvidia in DX11) is not going to work for them indefinitely.

Pre-emption is what Nvidia call 'asynchronous compute'; it's not the same thing AMD call asynchronous compute, but Nvidia would like you to think it is.

Pre-emption has its limits.

[attached image: o7te_ES.png]

That's the only GOW4 chart where I've seen the Fury X come out ahead. We shall have some benchmarks from users today - has anyone started a thread yet?
 
That's the only GOW4 chart where I've seen the Fury X come out ahead. We shall have some benchmarks from users today - has anyone started a thread yet?

Of course it is; it's the only one that shows what happens when a high-powered CPU isn't coming to Nvidia's aid.

You ain't going to see a lot of that either, because it's not what Nvidia want you to see.
 
That's the only GOW4 chart where I've seen the Fury X come out ahead. We shall have some benchmarks from users today - has anyone started a thread yet?

It's ahead because the CPU is hilariously underpowered for a GPU like the 1080, and Humbug is right that some aspects of the AMD architecture in Fiji are less dependent on CPU performance - once you change to a faster CPU the Fury X barely gains any performance while the Pascal cards take off.

nVidia have two things at play here: one, they are using driver-level scheduling for some DX12 functionality; but they also tend to use their massively parallel, flexible shader architecture to brute-force functionality (e.g. triangle setup) that AMD GPUs implement as a fixed-function block - which again means AMD's GPU is less sensitive to the CPU, as it can hand those tasks off and forget about them, while nVidia is pumping through high-tick-rate worker threads.


EDIT: I think people are likely to be severely disappointed though if they think this is coming to AMD's aid any time soon.
 