
AnandTech's look at DX12 in action with Star Swarm.

Ignoring the broken nVidia driver for DX11 :D

Interesting to see the difference between nVidia and AMD DX11 optimisation as well as the DX12 and Mantle stuff. Can't really "fix" DX11, but nVidia's driver-level replacement of some functions seems to really give it the edge in DX11, in some cases by 4-5x.
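The whole story here is CPU-side draw-call submission cost, so here's a toy back-of-envelope sketch of why DX12-style multithreaded command lists help; every number in it is a made-up placeholder, not a measurement from the article.

```python
# Toy model of a CPU-bound scene like Star Swarm: huge draw-call counts,
# where DX11 submits on one thread and DX12 can record on many.
# All figures below are hypothetical placeholders, not measured values.

DRAW_CALLS = 100_000        # Star Swarm issues very large draw-call counts
DX11_OVERHEAD_US = 40       # assumed per-call driver overhead, single-threaded
DX12_OVERHEAD_US = 8        # assumed lower per-call cost with command lists
THREADS = 4                 # DX12 lets several threads record in parallel

dx11_cpu_ms = DRAW_CALLS * DX11_OVERHEAD_US / 1000
dx12_cpu_ms = DRAW_CALLS * DX12_OVERHEAD_US / 1000 / THREADS

print(f"DX11 submission time per frame: {dx11_cpu_ms:.0f} ms")
print(f"DX12 submission time per frame: {dx12_cpu_ms:.0f} ms")
```

With made-up numbers like these the single-threaded path can't even submit one frame per second, which is roughly the shape of the bottleneck Star Swarm is built to expose.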
 
A lot to look through.
Early drivers, but the frametimes look great to me, a mirror copy of Mantle's lol; that's going to make games feel much better.
I bet we're still in for a wait before we see many games take full advantage though :(
 
AMD's DX11 and DX12 are clearly broken on that game - definitely not optimised, as the article says, since they obviously put all their efforts into Mantle there.

Should be interesting to see the 380/390X performance on Mantle in this game :)
 
NP, and it's good to see older cards like the 680 (which is similar to my 670) doing well with DX12.

My only worry with DX12 is that a lot of current or decent older games won't use it, as many devs will say DX9 or 11 is good enough and won't update them to DX12, and will also say it costs too much or the engine is too much effort to update, even though I've read it's not that hard to implement.
 
I'm surprised anybody is expecting devs to go back and update older games...

Yes, it will cost money and take time.

Apart from the odd one or two, I don't think you should be expecting anybody to go back and change existing games.
 
You need to look at the bigger picture: Microsoft is first and foremost an OS-selling company, and their past few OSes have not done very well at all.

Windows XP was mourned when they ended support for it, Vista was a car crash, 7 is half decent, and 8 was not even worth installing for so many people that they had to release 8.1 to try and make it usable.

Look at what Microsoft has recently announced: anyone with 7, 8 or 8.1 gets a free upgrade to 10 in its first year. This is because of the influx of Apple's OS, Linux-based OSes and even Android; they have lost the dominance that showed in the uptake from XP to Vista, and even XP to 7, and then all previous OSes to 8.

They have already admitted the move is because of poor uptake of previous OS releases. Microsoft also realise that the PC market is not just office-based PCs; PC gaming was historically a bit of a niche and geek thing, but now it's a lot more common, and component prices etc. are a reflection of this. With the world turning to the internet for pretty much everything, and platforms like Twitch creating mass revenue, it's a no-brainer that they are putting more effort into DX12.

I think Mantle opened the door for that in many respects; Mantle was the first low-level API to show what could be achieved. Microsoft maybe didn't want to improve on DX11 for the PC and wanted the improvements to be Xbox exclusive, but the fact that the Xbox One / PS4 are the closest consoles have yet come to PCs makes it perfect for them to port DX12 across; the hardware on both is almost identical.

So in one swoop they can revive flagging sales of their OS (as DX12 is rumoured to perhaps not be a Windows 7 feature), and anyone with 8 is likely to jump ship to 10. The more people using 10, the better for them, and they come out looking like heroes for bringing DX12 to the gaming masses.

Devs are happy as they get more leverage with the tools at hand to deliver truly brilliant products (well, the decent ones do anyhow).

That's my theory on the whole affair.
 
So it's only mildly worse than Mantle this early on, that's certainly a good sign.

Hopefully with more work the two can be indistinguishable.

Interestingly, these results seem to show a big loss in performance per watt for the GTX 980. Averaged across most current games, a 980 has ~200% the performance per watt of an R9 290X, but in these tests the 980 system only consumed 20W less than the 290X system (and was also a bit faster), so with DX12 is a 980's performance per watt only ~150%?
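That estimate is easy to sanity-check with some back-of-envelope arithmetic. The FPS and wattage figures below are hypothetical placeholders chosen only to mirror the ratios described above (980 a bit faster, drawing ~20W less at the wall); they are not numbers from the review.

```python
# Hypothetical system power draw and frame rates, picked to mirror the
# ratios in the paragraph above. Not measured data.

fps_980,  watts_980  = 33.6, 280.0   # assumed GTX 980 system
fps_290x, watts_290x = 24.0, 300.0   # assumed R9 290X system, 20 W more

ppw_980  = fps_980  / watts_980      # frames rendered per watt-second
ppw_290x = fps_290x / watts_290x
ratio = ppw_980 / ppw_290x

print(f"980 perf/W relative to 290X: {ratio:.0%}")
```

So a modest FPS lead combined with only a small power saving does land the 980 at roughly 150%, rather than the ~200% it manages in typical DX11 games.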
 
Nice find Skeeter and very interesting.

This has seriously shocked me:

As it stands, with the CPU bottleneck swapped out for a GPU bottleneck, Star Swarm starts to favor NVIDIA GPUs right now. Even accounting for performance differences, NVIDIA ends up coming out well ahead here, with the GTX 980 beating the R9 290X by over 50%, and the GTX 680 some 25% ahead of the R9 285, both values well ahead of their average lead in real-world games. With virtually every aspect of this test still being under development – OS, drivers, and Star Swarm – we would advise not reading into this too much right now, but it will be interesting to see if this trend holds with the final release of DirectX 12.

The 980 was 50% faster than the 290X... Wow is all I can say to that :eek:

So, anyone care to elaborate on why that is? I have heard so many times that this is following in the footsteps of Mantle, it is Mantle etc, I would have thought that AMD would be much closer. I feel it is because nVidia have been working closely with Microsoft on this and thus far, have the better outcome. In time, AMD will get there as well and with this being Beta, it obviously needs some tweaking for both platforms to get the best results.
 
It could be Nvidia working closely with Microsoft, but it could also be the Maxwell arch itself.

Anandtech mentions the test is still being finalised, so maybe Maxwell is particularly good at the workload in the test?

If I recall, Maxwell had huge improvements over Kepler in compute/hashing tasks (as was shown when the GTX 750 Ti matched AMD cards in cryptocurrency mining, in performance per watt that is, whereas Kepler was far inferior for that task).
 
Yer, could well be down to Maxwell's improvements and I don't have a clue. I seriously expected the 290X to fare much better than it is showing.

Even the 680 is 25% faster than the 285.
 
Oh yeah, since the 680 is a fair bit faster too, maybe it also has something to do with Nvidia's DX11 wizardry (since they're ~300% faster than AMD in this test with DX11).

Maybe since DX12 isn't finished yet, some of Nvidia's DX11 optimisations carry over into their DX12 drivers used for this test?

That would certainly make some sense.
 
Worth noting that the 980 is NVIDIA's latest-generation GPU, whereas the 290X is one generation older. Though I'm also surprised the 980 is 50% faster; I wouldn't have thought it would be more than 20-30%.
 
I run 3 Titans and a 3930K at 4.4 but I am seriously struggling to get all 3 cards to run at 99% in most modern games, and it is clear that is because of DX11 being the bottleneck. If I can expect to get all 3 GPUs running at 99% on DX12, colour me impressed :cool:
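That under-utilisation pattern follows straight from simple bottleneck maths. Here's a quick sketch with invented figures (none of these are real measurements of Titans or the 3930K):

```python
# Toy illustration of why multiple GPUs sit well below 99% usage when a
# single-threaded DX11 submission path is the limit. Numbers are invented.

cpu_fps_limit = 80        # assumed max frames/s one thread can submit (DX11)
single_gpu_fps = 45       # assumed frames/s one card could render alone
gpus = 3

gpu_capacity = single_gpu_fps * gpus           # what 3 cards could deliver
actual_fps = min(gpu_capacity, cpu_fps_limit)  # capped by CPU submission
utilisation = actual_fps / gpu_capacity

print(f"Average GPU utilisation: {utilisation:.0%}")
```

If DX12's multithreaded submission lifts the CPU-side cap above what the three cards can render, utilisation heads towards 99% and the extra cards finally pay off.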

Good to see it in action anyway, and the first games are touted for Chrimbo time, so hopefully that is true and hopefully some nice devs decide to add DX12 to some of the older games. That would be pretty awesome and deserving of some kudos if they do.
 
True Dave and a fair comparison would be the 390X against the 980 (when it is released) and I am sure that will be tested as well. I am seriously keen on an 8GB version of the 390X, so fingers crossed that it will have big AMD improvements and leave the 980 standing.

I wish I knew why the 980 was so far ahead though.
 
Wonder if we'll see a round of old-ish games getting 'Remastered' into DX12, much like we see HD remasters of old games at the moment.


I think it will be the 390X that'll be 8GB (and that it'll come much later) from what I've read.

As far as I gather, the first implementation of HBM is limited to 4GB, so the AMD card that's coming soon should be 4GB tops (just very fast 4GB), so I assume it'll be called the 380X.
 