Probability over the next 4+ years that DX12 Feature Level 12_1 won't deliver tangible benefits?

I'm considering the long-term (4+ year) value of a cheaper card as opposed to a Maxwell 2 purchase, in relation to DX12 feature level support. (Fury/GCN 1.2 is 12_0, it would seem?)

Is it reasonable to predict any of
  • that 12_1 feature levels won't be adopted in games in a significant way because consoles don't* support these feature levels (or for another reason)?
  • that if 12_1 feature levels were adopted, they could be emulated to no significant detriment on a high-end CPU?
  • that supporting these feature levels is of no benefit in general, or to the current class of hardware (e.g. cards don't have the horsepower to use them)?

I'm wondering how much value to ascribe to Feature Level 12_1 support over the long term...

*(assumption based on the age/class of the console GPUs).

thanks
 

the DX12 API lead presented FL 12.1 on a slide at GDC...

https://channel9.msdn.com/Events/GDC/GDC-2015/Advanced-DirectX12-Graphics-and-Performance
 
remember games aim for the installed GPU base, not the currently selling GPUs.

this makes sense; it's just that next month my 2GB 5850 will be five years old;

in four to five years, a card I buy tomorrow will probably be mixed into the 'installed GPU base' alongside the next-gen 16nm GPUs and whatever follows over those four years (all presumably supporting FL 12.1)...

in the past, I think point releases were not announced until after the initial release... here we have a point release announced prior to the initial release...
 
A GPU you buy today lasts two years or so. The die shrink coming next year will take about a year to mature. Once it has matured, you'll have mid-range cards in a small form factor offering HBM2 and similar or better performance than today's enthusiast parts.

a game title takes around 18 months to 4 years to make on a new engine, and 4 years is a long way off

I'm sure the current-gen GPUs can last until the end of the current console era at 1080p, given the improved specification of PC GPUs and the limited console resolution. So I'm planning for a purchase to last another 5 years.
 
No not that long.

It was two years after Direct3D 10 launched before games started properly using it, and three before they were widely using it, so 2-3 years for D3D 12 sounds about right.

Whilst there is always going to be a lag, M$ seem to be billing DX12 as something quite different, akin to when CPUs went from one core to multiple cores.

in the GDC video, MS alluded to a goal/prediction of 66% of Steam users being on DX12-compatible hardware at launch. If FL 12.1 were a prerequisite, this number would be significantly lower.

IF the difference between pre-DX12 and DX12 is 'night and day', the user base has compatible hardware, and there's synergy with the Xbone environment, then maybe the take-up will be quicker than before.

It makes sense that developers who have the resources to rework engines for clear performance/visual advantages (and platform synergy?) will want to do so ASAP.
 
As others have said, order-independent transparency is a very big deal and should increase performance. Most modern game engines employ deferred rendering, which is much more efficient than forward rendering, since only visible screen pixels end up being processed in pixel shaders, as opposed to all the hidden fragments. The problem is that deferred rendering makes it impossible to do transparency directly, so an additional pass is required that renders the transparent surfaces on top of the opaque background. OIT will allow developers to send transparent surfaces to the GPU out of order and have the GPU render the transparency in the correct order within a single pass.
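To see why the submission order matters in the first place, here's a tiny sketch (plain Python, not D3D code; the colours and alphas are made-up numbers) of the standard alpha "over" blend. Because "over" is not commutative, engines without OIT have to sort transparent surfaces back-to-front on the CPU before drawing them:

```python
def over(src, dst, alpha):
    """Standard alpha 'over' compositing for a single colour channel."""
    return alpha * src + (1.0 - alpha) * dst

background = 0.0
# Two 50%-transparent layers: (colour, alpha), listed back-to-front.
layers = [(1.0, 0.5), (0.0, 0.5)]

# Correct back-to-front compositing.
correct = background
for colour, alpha in layers:
    correct = over(colour, correct, alpha)

# The same layers blended in the wrong (front-to-back) order.
wrong = background
for colour, alpha in reversed(layers):
    wrong = over(colour, wrong, alpha)

print(correct, wrong)  # 0.25 vs 0.5 -> blending order changes the result
```

OIT hardware support lets the GPU resolve that ordering per pixel, which is what removes the CPU-side sort and the extra pass.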



The main thing I would look out for going forward is VRAM. With DX12, games won't be limited by the number of draw calls, so the number of unique objects rendered on screen can increase dramatically. At the moment the DX11 API is very inefficient, so only a certain number of render instructions can be sent from the CPU to the GPU, and there can therefore only be a few thousand objects at most, covering everything from mountains, plants, trees and buildings to cars, sign posts, planes, people and animals. To make something like a convincing forest, it isn't possible to have that many unique trees, so developers use a technique called instancing, where the same draw call essentially copies the same object into multiple locations. With DX12, hundreds of unique tree types could be rendered, along with hundreds of different types of grass blades, plants, flowers, etc.
This of course means that instead of 1 or 2 models being duplicated hundreds of times, there are now hundreds of different models with unique assets: texture maps, normal maps, geometry. This will push VRAM usage up dramatically.
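A back-of-envelope sketch of the trade-off (the counts are my own illustrative numbers, not from any real engine): instancing collapses all copies of one mesh into a single draw call, which is exactly why DX11 forests reuse a handful of tree models.

```python
def draw_calls(unique_models, instances_per_model, instancing=True):
    """One call per unique model with instancing; one per instance without."""
    if instancing:
        return unique_models
    return unique_models * instances_per_model

# A DX11-style forest: 3 tree models, each instanced 5,000 times.
print(draw_calls(3, 5000))         # 3 draw calls with instancing
print(draw_calls(3, 5000, False))  # 15,000 draw calls without it
```

Cheaper draw calls under DX12 make the second number affordable, so developers can swap instanced copies for genuinely unique models; each unique model then brings its own textures and geometry, which is where the VRAM pressure comes from.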

The other driving force will be high-res textures for 4K use; we already see some games surpass 4GB of VRAM, which causes stuttering on cards with insufficient memory.
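To put rough numbers on the texture side (assuming uncompressed RGBA at 4 bytes per pixel; real games use block-compressed formats that shrink this by roughly 4-8x, so treat it as an upper bound):

```python
def texture_mib(width, height, bytes_per_pixel=4, mipmaps=True):
    """Approximate VRAM footprint of one texture, in MiB."""
    size = width * height * bytes_per_pixel
    if mipmaps:
        size = size * 4 // 3  # a full mip chain adds about 1/3 on top
    return size / (1024 * 1024)

print(texture_mib(2048, 2048))  # ~21.3 MiB per 2K texture
print(texture_mib(4096, 4096))  # ~85.3 MiB per 4K texture
```

Even with compression, quadrupling texture area per asset while also multiplying the number of unique assets adds up fast, which is why 4GB cards are already hitting the wall.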

This makes sense... Although it's like reading tea leaves, I'm getting the 'feeling' from the OcUK forumers that, on balance, the greater memory is going to be more valuable than FL 12.1 support... probably in part because I won't pay £600 for a 980 Ti... and I'm thinking an 8GB 290/390 may be a sweeter spot for 1080p 30fps DX12 (and DX11) gaming over 5 years than a 970/980...
 
You have to wonder why AMD/NVIDIA are not talking about DX12 readiness/performance more...

Possible reasons might be
  • by agreement with M$, e.g. so as not to dilute the DX12 compatibility/benefits message
  • neither party has a clear home-run DX12 advantage and they don't want to hurt current sales.
 