AMD R9 Fury X2 - Paper Launch This Month - Available Q1 2016 - But Could Face Potential Delays

Great post.

DX12 means developers will focus much more on GPUs that have the highest market share, and they will become more and more dependent on assistance from the hardware vendors and 3rd-party libraries like GameWorks.

DX12 isn't implemented per GPU vendor: cards either support it or they don't. Of course there are feature levels, but DX12 itself can't be optimised per GPU. 3rd-party libraries like GameWorks and TressFX are then slapped on top.
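For illustration, a minimal sketch (assuming Windows 10 with the D3D12 SDK headers; error handling and return-value checks trimmed) of what "supports it or doesn't, plus feature levels" looks like in practice: create a device and ask it which feature level and optional caps it reports.

```cpp
// Minimal sketch: does this machine have a DX12-capable adapter, and what
// feature level / optional caps does it actually report?
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    // A card "supports DX12" if device creation succeeds at the minimum level...
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device)))) {
        std::printf("No D3D12-capable adapter found.\n");
        return 1;
    }

    // ...but the feature level and optional caps it reports still vary per GPU.
    const D3D_FEATURE_LEVEL requested[] = {
        D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_11_1,
        D3D_FEATURE_LEVEL_12_0, D3D_FEATURE_LEVEL_12_1,
    };
    D3D12_FEATURE_DATA_FEATURE_LEVELS levels = {};
    levels.NumFeatureLevels = static_cast<UINT>(sizeof(requested) / sizeof(requested[0]));
    levels.pFeatureLevelsRequested = requested;
    device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS, &levels, sizeof(levels));

    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &opts, sizeof(opts));

    std::printf("Max feature level: 0x%x, resource binding tier: %d\n",
                levels.MaxSupportedFeatureLevel, opts.ResourceBindingTier);
    return 0;
}
```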
 
So basically - It's not a launch at all. It's an announcement.

It's an announcement of a future plan to release a card that you won't be able to buy until a later hard launch.

By 2020 we'll be getting briefings on products five generations in advance of anything actually being available.
 
That's rather an extreme statement to make; remind me how many DX12 titles are out now or are due to be released in the next 12-18 months.

If anything, multi-GPU in DX12 looks shaky. On the driver front there's a shift away from the vendors making games work (including multi-GPU) and onto the game developers, and the burden of maintaining support, especially for new cards, will also fall more on the developer.
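As a rough sketch of what that shift means in practice (assuming the standard D3D12/DXGI headers; the structure is illustrative, not from any particular engine): under explicit multi-adapter the game itself enumerates the GPUs and creates a device per adapter, and everything after that - splitting work, syncing fences, copying between cards - is the developer's problem rather than the driver's.

```cpp
// Rough sketch of the first step of DX12 explicit multi-adapter: the game,
// not the driver, walks the adapter list and decides what to do with each GPU.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc = {};
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE) continue;  // skip WARP/software

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device)))) {
            std::wprintf(L"Using adapter %u: %ls\n", i, desc.Description);
            devices.push_back(device);
        }
    }
    // From here the developer must split work, synchronise and copy resources
    // between these devices themselves - work the DX11 driver used to hide.
    return 0;
}
```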

We saw this partly with Mantle and Thief. When the Mantle patch came along it performed great on the AMD cards of the time, since that's what the game was programmed for, but then the R9 285 came out and performance under Mantle was much worse (you can see this here http://techreport.com/review/26997/amd-radeon-r9-285-graphics-card-reviewed/7). This isn't a dig at Mantle, but a worry about "to the metal" API performance in games post-release, with cards released after the game.

To-the-metal APIs are great for larger devs who have the resources and the inclination to get the most out of PC hardware, but smaller teams, or teams/projects where PC simply isn't the priority, are going to stick with DX11.

First off, it's not extreme. Their way of getting around performance issues is effectively working around DX11, so they put a lot of time and money into performance improvements (which win a lot of benchmarks), but the real-world consequence is worse problems in actual games at launch because the driver has become much more complex. So: time and money spent to win benchmarks, a worse experience for the user, and once all games are DX12 it's entirely wasted effort. No, not an extreme statement; a complete waste. I'm sure there are millions of Nvidia users who would have preferred a more stable playthrough of The Witcher 3 to some benchmark being ultra-optimised by a complex driver.

Their biggest real world gains from more efficient DX11 drivers have come in API benchmarks.... ooooo, worthwhile.


Second, using Mantle and the 285 to reason about DX12 is fundamentally flawed. Mantle was a proof of concept: with extremely little work (devs have repeated this for all DX12/Mantle projects) they got it working on the hardware available at the time. It wasn't a market-encompassing solution; it was a beta API where they focused on a small subset of hardware and features. By the time the 285 was coming out, DX12 was confirmed and Mantle development was basically done. Devs don't go out of their way to support unreleased hardware at launch, nor, when they're effectively building a proof of concept, do they need to provide long-term support.

As for the larger/smaller devs, that's still based on the argument that it takes a lot of work for every card, and it doesn't. But it's wrong anyway: large devs won't have issues; medium/small devs will mostly use one of the larger engines available, which will have DX12 support by default; and the smallest devs making indie games aren't making performance-dependent games in the first place, so they aren't remotely limited by DX11. A lot of indie games still come out using DX9; they aren't relevant to the discussion.

Great post.

DX12 means developers will focus much more on GPUs that have the highest market share, and they will become more and more dependent on assistance from the hardware vendors and 3rd-party libraries like GameWorks.

Precisely and completely incorrect... as usual. DX12 in no way causes devs to become more dependent on 3rd-party libraries. Was that just an excuse to mention GameWorks and "dependent" in the same sentence? With DX12 and low-level APIs in general, one of the fundamental driving features is precisely that it takes LESS assistance from hardware vendors for devs to achieve what they want: a slimmer driver and no black-box DX causing lots of issues they can't identify themselves. Many, if not most, of the problems with tracking down issues in games come from the dev having to talk to the hardware vendors and the MS/DX people, with everyone working together to find some way to make it work. With a slim driver, far more access and a low-level API, they can find and solve problems directly themselves, with no waiting around for answers to emails.

This is actually where the smaller devs gain the most. Rockstar, say, is a huge, important dev and will get priority access from AMD/Nvidia and help from MS with DX problems, while small Indie Company B will send a request for help that MS, AMD and Nvidia don't have time for, yet can't find a solution on its own. With DX12 a smaller dev has far more ability to fix its own problems, with far better transparency and better tools.
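As a small illustrative sketch of that "better tools" point (not from the post itself; it assumes the standard D3D12 SDK-layers headers): the D3D12 debug layer and info queue let a dev surface validation errors in their own debugger instead of waiting on a vendor support ticket.

```cpp
// Sketch of the self-service debugging the post is describing: turn on the
// D3D12 debug layer and have validation messages break in your own debugger.
#include <d3d12.h>
#include <d3d12sdklayers.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

void EnableD3D12Validation(ID3D12Device* device) {
    // Note: the debug layer must normally be enabled *before* device creation;
    // both steps are shown together here only to keep the sketch short.
    ComPtr<ID3D12Debug> debug;
    if (SUCCEEDED(D3D12GetDebugInterface(IID_PPV_ARGS(&debug))))
        debug->EnableDebugLayer();

    // Route validation errors straight to the developer's debugger.
    ComPtr<ID3D12InfoQueue> infoQueue;
    if (device && SUCCEEDED(device->QueryInterface(IID_PPV_ARGS(&infoQueue)))) {
        infoQueue->SetBreakOnSeverity(D3D12_MESSAGE_SEVERITY_CORRUPTION, TRUE);
        infoQueue->SetBreakOnSeverity(D3D12_MESSAGE_SEVERITY_ERROR, TRUE);
    }
}
```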
 
DX12 isn't implemented per GPU vendor: cards either support it or they don't. Of course there are feature levels, but DX12 itself can't be optimised per GPU. 3rd-party libraries like GameWorks and TressFX are then slapped on top.

Rubbish. Supporting a feature doesn't mean it is supported at high speed.

Should a game developer set tessellation at the highest level the 980 Ti can run, because tessellation is a DX standard and, by your logic, all cards should support it equally well? If AMD cards aren't as fast at tessellation as Nvidia cards, does that mean AMD cards aren't DX compliant, or does it mean DX specifications are irrelevant to GPU optimisation?
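Purely as a hypothetical sketch of that kind of per-GPU tuning (the thresholds and the whole maxTessFactor idea are invented for illustration; only the PCI vendor IDs are real): a game could cap its tessellation factor differently per adapter even though every card "supports" tessellation.

```cpp
// Hypothetical sketch: "supported" and "fast" are not the same thing, so a
// game might cap its tessellation factor based on the adapter it finds.
#include <dxgi1_4.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

float PickMaxTessFactor() {
    ComPtr<IDXGIFactory4> factory;
    ComPtr<IDXGIAdapter1> adapter;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))) ||
        factory->EnumAdapters1(0, &adapter) == DXGI_ERROR_NOT_FOUND)
        return 16.0f;  // conservative default

    DXGI_ADAPTER_DESC1 desc = {};
    adapter->GetDesc1(&desc);

    // PCI vendor IDs: 0x10DE = NVIDIA, 0x1002 = AMD. The caps below are
    // made up purely to show the idea of per-vendor tuning.
    float maxTessFactor = 16.0f;
    if (desc.VendorId == 0x10DE)      maxTessFactor = 64.0f;
    else if (desc.VendorId == 0x1002) maxTessFactor = 32.0f;
    return maxTessFactor;
}
```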
 

As usual, a completely irrelevant wall of text that I will debunk later.
 
I wonder if NVLink will solve these issues and make a big improvement to SLI.

NVLink has nothing to do with gaming or SLI at all, and it never will. There are no plans for any x86 system support, and there likely never will be.

It's for ARM and IBM POWER servers, workstations and supercomputers only.
 
Great post.

DX12 means developers will focus much more on GPUs that have the highest market share, and they will become more and more dependent on assistance from the hardware vendors and 3rd-party libraries like GameWorks.

That's a hugely contradictory statement. The overwhelming majority of hardware with any kind of reasonable compatibility with DX12 is GCN. The numerical advantage of GCN over NVIDIA's various architectures (Pascal being the first that really supports DX12) will only grow as console sales accelerate and the NX releases.

GameWorks is the last thing anyone except the accountants at development houses will want to go anywhere near. That's true now and it's vastly more true with DX12 and Vulkan. Two consoles now, and soon three, are GCN; AMD on PC is GCN... NVIDIA is NVIDIA. The vast majority of the gaming market is GCN now, and NVIDIA will only slowly grow Pascal (which is closer to GCN than Kepler/Maxwell are).
 
Is this just opinion, or do you have any proof/links to back it up? I look forward to reading about the intricate details of Pascal and how it will be similar to GCN :)

One of the main changes AMD made going from TeraScale to GCN was the move to hardware scheduling, which increased power consumption and die area, whereas Nvidia went from hardware scheduling to software scheduling between Fermi and Kepler/Maxwell, which reduced die size and power consumption.

So it could be that Nvidia will move functionality back to hardware - it's one of the things regarding latency and VR (for example) that AMD are supposed to do better with, since they have more dedicated hardware for certain tasks - especially since Pascal is trying to have more flexibility regarding things like compute and so on.

We might see Nvidia do something similar, especially since they have trialled a lot of power-saving tech with Maxwell and will have lower-power memory and a die shrink, so they could throw more transistors at the GPUs.

It will be interesting to see how things pan out.

Edit!!

Anyway, I do wonder whether we will see any consumer 14nm/16nm cards by summer at this rate!!

ARK needs it!! :p
 

Thanks for shedding a little light on it :)

Agreed, am looking forward to the next-gen chips :cool:
 