
AMD’s DirectX 12 Advantage Explained – GCN Architecture More Friendly To Parallelism Than Maxwell

Yep, he definitely has some sort of deep-rooted hatred of Team Red bubbling under there.

Sad really :rolleyes:

You completely missed the point :rolleyes:

If you think coding to spec is all that matters then you will have absolutely no issue with GameWorks, or with a developer using massive amounts of tessellation and, let's say, 6GB of VRAM even if the same thing could be achieved with 3GB. Because that is all coding to spec, which to you is the be-all and end-all.
 
DX11 dying? You cannot be serious!!!

The only API that has died recently is Mantle. RIP.

As to NVidia drivers being poor recently, I would not argue there.

Having said that AMD drivers are just as bad for other reasons.

Witcher 3 on AMD cards is absolutely dreadful.

Witcher 3 is not that great on Nvidia cards, but it is more than 4x as fast as on AMD cards at 2160p.

Witcher 3 maxed 2160p

On my AMD cards 18fps

On my NVidia cards 80fps

No I'm not serious about DX11 dying. IT'S DEAD.

The simple history of graphics APIs is that once a high enough percentage of users have hardware that supports one, the games industry at large supports it as standard. When DX11 launched, sub-2% of users had access to DX11 hardware, so devs didn't pay it a lot of attention; same with every incarnation of DX... except 12.

With DX12, basically 70-80% of all GPUs sold in the past 3-4 years support it. It has built-in industry-wide support from day one. Every major AAA title is announcing support for DX12 and there will be few to no games which don't run noticeably smoother/faster on DX12. DX11 is dead; the changeover to DX12 will happen in a matter of months, not years. Yet another thread for a game talks about DX12. Ubisoft, who are, let's say, anti-gamer, anti-spending-money and anti-everything... are going DX12 on the new Assassin's Creed.

DX12 will take over entirely in AAA titles, and as these are the performance-hogging titles that end up being benchmarked and are what people care about playing when they spend £200-4000 on graphics cards, it's what actually matters.

If DX12 ONLY supported new cards, let's say only Maxwell v2 and Fury X, then 90% of games for the next 18 months wouldn't focus on DX12 heavily, and it might be 6-12 months before we even got the first game. That is not happening here.

As for Mantle dying... from day one there were two options: Mantle stays, or the entire industry top to bottom switches to low-level APIs... the latter happened, and Mantle was passed to Khronos (which I also said from day one would likely happen). Mantle did precisely what it was meant to and ended up exactly where I predicted it would.

AMD drivers are bad... nope. Worked flawlessly from day one on Witcher 3 for me. No crashes, no instability, no performance problems. A single 290X was giving me north of 60fps at 1080p.

Regardless of company, SLI/CrossFire has issues; only really delusional people think otherwise. Single card is different, it's 'easy' and should pretty much just work. AMD, true; Nvidia these days seem worse than ever, but they've been dodgy for donkey's years. I still remember Bioshock 1: flawless on AMD from day one with a driver released 2-3 days before. Nvidia users got about 12 beta releases within the first week and all kinds of problems. This is not new for Nvidia; they just go through periods of being terrible and periods of being less terrible. There were on/off TDR crashing issues for Nvidia users for 2-3 years, with thousands of posts on their forums, and it came back again recently.

AMD users post everywhere, so the problem appears large but isn't. Nvidia users post in one tiny corner of the internet basically no one but Nvidia users visits, while proclaiming everywhere else that they are flawless cards.
 
It seems like you're changing your mind in the same thread here... So tell us, is it meaningless to code to a spec or not?

:p

Lol, hilarious... DP is really clutching at straws. Must be getting tired working overtime. Whose shift is it next?

As far as I know, AoTS does not have some proprietary AMD-specific tech in there like GameWorks, so it should be a level playing field.
 
No I'm not serious about DX11 dying. IT'S DEAD.

AMD users post everywhere, so the problem appears large but isn't. Nvidia users post in one tiny corner of the internet basically no one but Nvidia users visits, while proclaiming everywhere else that they are flawless cards.

Errrr, you're forgetting that Nvidia users also gatecrash each and every AMD thread and try to trash that thread or derail it with inaccuracy and a venom like no other.

Yes, I totally agree about the levels of hypocrisy amongst Nvidia users with regard to their cards in different threads. The Nvidia driver thread on here is chock FULL of users screaming about bad drivers, and they have the nerve to jump into AMD threads for years and trash AMD drivers.

Shockin' it is !! LOL :p
 
This has all gone a bit bizarre.

I can only assume D.P. is being tag-team trolled as the level of reading comprehension failure is off the scale.


If you would kindly explain to me where I have failed to comprehend what D.P. was saying I will gladly take some or all of it back. If I have failed to get what he means then I am big enough to admit a mistake and hold my hands up.

Obviously we Team Red "fanboys" work better in CrossFire than you Team Green "fanboys" when it comes to tag teaming. We seem to scale much better!!

;)
 
Read your own post... It certainly comes across to me that in one breath you are saying it's meaningless to code to a spec (in this case DX12), then a few lines down you are saying that Project CARS was written to the DX11 spec, and I thought that was all that mattered.

:eek:

I used Project CARS as an example. Project CARS was written to the DX11 spec and it performed massively better on Nvidia hardware, to the extent that AMD fabricated a bunch of lies about PhysX.

IF writing to spec was the only thing that mattered then there should be absolutely no concern about how much tessellation a game uses, should there? Or do you think that differences in hardware actually matter? If so, well done, you have leap-frogged over many other posters here and have engaged your brain. Different architectures require specific optimizations, regardless of the spec. With DX12, these optimizations need to be done in the game engine, not the driver.

Writing to spec is more meaningless than it has ever been, because the "spec" itself is more meaningless. The lower the API, the less spec there actually is and the closer your interaction with the hardware, so you need to know how best to code for that hardware.


DX12 is a move in the complete opposite direction to the rest of the computer industry. While everything else is moving to ever more abstracted, higher layers where even mid-level APIs are completely hidden, with DX12 a game developer needs to know a heck of a lot about the underlying hardware and what the different GPUs are or aren't good at. This is why, despite all the praise from the big names, most of the industry is really not that excited; they would rather code in DX11 and let Nvidia and AMD take care of the optimization.
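To give a rough picture of what that loss of abstraction looks like, here is a minimal illustrative C++ fragment (not from any real engine; it assumes a command list and a render-target texture already exist, and the helper name is made up) showing the kind of explicit resource-state tracking a DX11 driver does behind your back but a DX12 engine has to record itself:

    #include <d3d12.h>

    // Illustrative only: move a texture from render-target state to
    // shader-resource state before sampling it. In DX11 the driver tracks
    // this hazard automatically; in DX12 the engine must record it, and
    // getting the states wrong behaves differently on different hardware.
    void TransitionToShaderResource(ID3D12GraphicsCommandList* cmdList,
                                    ID3D12Resource* texture)
    {
        D3D12_RESOURCE_BARRIER barrier = {};
        barrier.Type                   = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
        barrier.Transition.pResource   = texture;
        barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
        barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
        barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;
        cmdList->ResourceBarrier(1, &barrier);
    }

It is a tiny amount of code, but multiply it across every resource in a frame and you can see why the optimization burden shifts from the driver to the engine.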
 
I thought we were talking about Ashes and DX12, so how has D.P. confused that with PCars and DX11?

If you don't understand the basics of debate then go away.

Using an example is a standard form of debate. What you are doing is a standard form of trolling. Do it again and I will report you.
 
I used Project CARS as an example. Project CARS was written to the DX11 spec and it performed massively better on Nvidia hardware, to the extent that AMD fabricated a bunch of lies about PhysX.

IF writing to spec was the only thing that mattered then there should be absolutely no concern about how much tessellation a game uses, should there? Or do you think that differences in hardware actually matter? If so, well done, you have leap-frogged over many other posters here and have engaged your brain. Different architectures require specific optimizations, regardless of the spec. With DX12, these optimizations need to be done in the game engine, not the driver.

Writing to spec is more meaningless than it has ever been, because the "spec" itself is more meaningless. The lower the API, the less spec there actually is and the closer your interaction with the hardware, so you need to know how best to code for that hardware.


DX12 is a move in the complete opposite direction to the rest of the computer industry. While everything else is moving to ever more abstracted, higher layers where even mid-level APIs are completely hidden, with DX12 a game developer needs to know a heck of a lot about the underlying hardware and what the different GPUs are or aren't good at. This is why, despite all the praise from the big names, most of the industry is really not that excited; they would rather code in DX11 and let Nvidia and AMD take care of the optimization.

Okay D.P., I think I get what you are trying to say now. DX12 is putting more responsibility onto the developers and taking it away from the hardware manufacturers (please correct me if I am wrong on that). If that is the case, and I have no reason to disbelieve your statement, then surely Nvidia, AMD and now Intel (I am inclined to think that these are the biggest players involved at the moment) have had a fair amount of time to at least understand what DX12 was all about and the direction it was going in. If MS were going in the wrong direction to the rest of the industry then surely someone would have said something and alarm bells would have been ringing all over.

I also imagine that game developers all around the world have not had their heads stuck in the sand during the development of DX12 and would have pretty much known what it was all about too. I just can't imagine that they didn't.

So does that mean that Nvidia neglected their DX12 work in favour of what was the "Here and Now" with DX11?

Also, if AMD did the opposite (it certainly seems likely), then it was a massive gamble to bet that far into the future and risk losing a large amount of market share, which they have and may never get back.

As for the hardware, yes, I believe it does matter and that Nvidia and AMD are going in slightly different directions with implementation and design. But not too different, as both will be using HBM2 at some point in the future.
:)
 
Okay D.P. I get what you are trying to say now. DX12 is putting more responsibility onto the developers and taking it away from the hardware manufacturers (please correct me if I am wrong on that).
Correct.

If that is the case and I have no reason to disbelieve your statement, then surely Microsoft, Nvidia, AMD and now Intel (I am inclined to think that these are the only ones involved at the moment) have had a fair amount of time to at least understand what DX12 was all about and the direction it was going in. If MS were going in the wrong direction then surely someone would have said something.
That is correct; DX12 has been in development since 2010, and Nvidia, MS and AMD were discussing a lower-level, multi-threaded, DX12-like API back in 2004. MS went in a direction that was supported by some game developers as well as Nvidia, AMD and Intel. They also wanted a good API for their console.

I also imagine that game developers all around the world have not had their heads stuck in the sand during the development of DX12 and would have pretty much known what it was all about too. I just can't imagine that they didn't.
No, they knew exactly what was coming. The big names all welcomed this move; the single-threaded nature of DX was completely antiquated. The draw-call limit has been an issue since around 2000, when hardware T&L made big inroads and took away the CPU geometry bottleneck.
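As a rough sketch of what the multi-threaded side looks like in practice (purely illustrative C++; the function names are made up and it assumes the device, queue and per-thread command lists already exist), DX12 lets several threads record their own command lists and then hand everything to the GPU queue in one submission, which single-threaded DX11 could not do:

    #include <d3d12.h>
    #include <thread>
    #include <vector>

    // Illustrative only: each worker thread records draws into its own
    // command list in parallel.
    void RecordWork(ID3D12GraphicsCommandList* cmdList)
    {
        // ... SetPipelineState / DrawInstanced calls would go here ...
        cmdList->Close(); // finish recording on this thread
    }

    void SubmitFrame(ID3D12CommandQueue* queue,
                     const std::vector<ID3D12GraphicsCommandList*>& lists)
    {
        std::vector<std::thread> workers;
        for (ID3D12GraphicsCommandList* list : lists)
            workers.emplace_back(RecordWork, list); // record in parallel
        for (std::thread& t : workers)
            t.join();

        // Submission is one call; there is no fat single driver thread
        // building all the work for you any more.
        std::vector<ID3D12CommandList*> raw(lists.begin(), lists.end());
        queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
    }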

Other game developers are pretty indifferent, or are excited about new features like conservative rasterization or order-independent transparency. Casual developers will be sticking with DX11 or OpenGL.

So does that mean that Nvidia neglected their DX12 work in favour of what was the "Here and Now" with DX11?
I don't understand the logical connection here. :confused:
You have logic like this:
A implies B, therefore C.

Where is the connection between A or B and C?

But no, Nvidia has not neglected anything to do with DX12 [edited typo]. Nvidia's GPUs are DX12 compliant all the way back to Fermi, they support more DX12 features than AMD, and MS have chosen Nvidia almost exclusively to demonstrate DX12. While AMD were marketing Mantle, Nvidia were working with Microsoft and launching demos such as Forza.

The Maxwell architecture has some very useful DX12 features; order-independent transparency is a huge performance gain in a properly designed deferred renderer.
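For what it's worth, that feature support is something a DX12 application has to query per GPU at runtime; a minimal illustrative C++ fragment (assuming only an already-created ID3D12Device; the function name is made up) would be along these lines:

    #include <windows.h>
    #include <d3d12.h>

    // Illustrative only: optional DX12 features must be queried per GPU,
    // because support differs between architectures.
    void QueryOptionalFeatures(ID3D12Device* device)
    {
        D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
        if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                                  &opts, sizeof(opts))))
        {
            // Rasterizer Ordered Views underpin order-independent transparency.
            bool hasROVs = (opts.ROVsSupported != FALSE);

            // Conservative rasterization is reported as a tier.
            bool hasConservativeRaster =
                opts.ConservativeRasterizationTier !=
                D3D12_CONSERVATIVE_RASTERIZATION_TIER_NOT_SUPPORTED;

            // An engine would pick its rendering path based on these flags.
            (void)hasROVs;
            (void)hasConservativeRaster;
        }
    }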

Also if AMD did the opposite (it certainly seems likely) then it was a massive gamble to bet that far into the future and risk losing a large amount of market share, which they have.

AMD didn't do the opposite. There is zero evidence to support that, and it would be an incredibly stupid move to make. GPUs are sold based on current performance, not speculative future PR.

It is simply a case that AMD GPUs are different to Nvidia GPUs; each has pros and cons. In this particular benchmark, which is heavily optimized by AMD to run best on AMD hardware, the game runs well. You can't draw anything from those results.

The GCN architecture existed long before DX12; do you think AMD would happily go years and years and years before an API turned up that ran that architecture better? And if that architecture is so good, why is AMD making a massive change to it with Greenland? The next AMD GPU will have a revolutionary new architecture to improve efficiency.

As for the hardware, Yes I believe it does matter and that they are going in slightly different directions with implementation and design. But not too different as both will be using HBM 2 at some point in the future.
:)

Different hardware behaves differently and requires different code for optimal performance.


All I am saying is that Ashes is completely unrepresentative of all future DX12 games. And the junk from overclock.net has been widely discredited by other industry insiders, by numerous game developers, ex-driver developers etc.
 
I don't understand why there's any sort of argument about the architecture etc.
I have no issue with a 290X besting a 980 Ti in DX12 (although I'd expect them to reach parity, which is a great result for a 290X).

I have a problem with Nvidia's DX11 besting their DX12. It's almost stupidly clear that's not meant to happen?
 
I used Project CARS as an example. Project CARS was written to the DX11 spec and it performed massively better on Nvidia hardware, to the extent that AMD fabricated a bunch of lies about PhysX.

IF writing to spec was the only thing that mattered then there should be absolutely no concern about how much tessellation a game uses, should there? Or do you think that differences in hardware actually matter? If so, well done, you have leap-frogged over many other posters here and have engaged your brain. Different architectures require specific optimizations, regardless of the spec. With DX12, these optimizations need to be done in the game engine, not the driver.

Writing to spec is more meaningless than it has ever been, because the "spec" itself is more meaningless. The lower the API, the less spec there actually is and the closer your interaction with the hardware, so you need to know how best to code for that hardware.


DX12 is a move in the complete opposite direction to the rest of the computer industry. While everything else is moving to ever more abstracted, higher layers where even mid-level APIs are completely hidden, with DX12 a game developer needs to know a heck of a lot about the underlying hardware and what the different GPUs are or aren't good at. This is why, despite all the praise from the big names, most of the industry is really not that excited; they would rather code in DX11 and let Nvidia and AMD take care of the optimization.

DX12 is a low-level API like Mantle, but that does not mean devs have to learn how the Maxwell or Fiji architecture works. It will have various libraries and calls which access the hardware at a lower level, but the programming is still going to be in a higher-level language such as C++ or whatever. All the low-level access methods and code in the API are provided by the GPU manufacturers and are built into DX12 already. Everything low level is done under the hood, so to speak.

If devs now have to learn how each GPU architecture does things then it would hinder development, not help it.
As an example of how easy it is to port to DX12, look at King of Wushu, where 2 devs ported the whole game in just 6 weeks. Now, did these devs learn how every GPU works?

http://blogs.nvidia.com/blog/2015/05/01/directx-12-cryengine/

King of Wushu, earmarked to be the first DX12 title in China, is also the first CryEngine-based game to take advantage of the next-generation graphics API.

It took two engineers just six weeks to port King of Wushu from DirectX 11 to DX12, and its performance improvements are stunning.


And as for your tessellation argument, I don't think the DX11/12 specs dictate how much tessellation, AA, or whatever a dev must use, so any game can adhere to the API specs and still run badly on any given hardware.
 
I don't understand why there's any sort of argument about the architecture etc.
I have no issue with a 290X besting a 980 Ti in DX12 (although I'd expect them to reach parity, which is a great result for a 290X).

I have a problem with Nvidia's DX11 besting their DX12. It's almost stupidly clear that's not meant to happen?

The only issue is if someone tries to extrapolate performance from this one benchmark to all future DX12 games, which some people are doing.


And yes, the fact that DX12 ran slower than DX11 on Nvidia GPUs is just proof that the game engine is flawed.
 
DX12 is a low-level API like Mantle, but that does not mean devs have to learn how the Maxwell or Fiji architecture works. It will have various libraries and calls which access the hardware at a lower level, but the programming is still going to be in a higher-level language such as C++ or whatever. All the low-level access methods and code in the API are provided by the GPU manufacturers and are built into DX12 already. Everything low level is done under the hood, so to speak.

If devs now have to learn how each GPU architecture does things then it would hinder development, not help it.
As an example of how easy it is to port to DX12, look at King of Wushu, where 2 devs ported the whole game in just 6 weeks. Now, did these devs learn how every GPU works?

http://blogs.nvidia.com/blog/2015/05/01/directx-12-cryengine/




And as for your tessellation argument, I don't think the DX11/12 specs dictate how much tessellation, AA, or whatever a dev must use, so any game can adhere to the API specs and still run badly on any given hardware.

I don't think you understand what DX12 does differently; I suggest you read the MS website. Of course it isn't speaking directly to the GPU, but it is a much lower-level API with much less abstraction, meaning the developers have to take more care in what they are doing or they will lower performance.


As for your tessellation remark, you are just agreeing with me, thank you. I'm glad someone understands that coding within spec is meaningless.
 
The only issue is if someone tries to extrapolate performance from this one benchmark to all future DX12 games, which some people are doing.


And yes, the fact that DX12 ran slower than DX11 on Nvidia GPUs is just proof that the game engine is flawed.


Nvidia have been working with them for a year, the same length of time AMD have; Nvidia have access to the source code, and Oxide are also using code Nvidia submitted to them. So how is it the devs' fault and not Nvidia's? Only Nvidia can get it working properly on their own GPUs.
 
And yes, the fact that DX12 ran slower than DX11 on Nvidia GPUs is just proof that the game engine is flawed.

Flawed is pretty harsh. A result you weren't expecting does not make the method inherently flawed; it could well be that there are things behind the scenes you don't yet understand.

As a "for instance;" DX12 in game A is doing more of X task than in DX11. Company N's cards operate poorly in X task. Thus DX12 result lower. Game B has less of task X in DX12, thus Company N cards perform better.

Is that a flaw in game A, or does it highlight a weakness in Company N's cards previously unknown under DX11? Or is game A totally invalid because game B offers a different result?

I think AoTS is a valid benchmark, but its results, as with ALL benchmark results, should be taken as PART of a whole. Anyone who looks at a single benchmark to draw conclusions is an idiot, frankly; a larger sample size is always required for any kind of accurate conclusion to be drawn.
 