DirectX 12 support for Rise of the Tomb Raider now available

Somebody on another site saw a 30% average improvement, but they were running an AMD Phenom II (with a GTX 970), so that's probably not surprising.

Seems like for the time being, DX12 isn't worthwhile unless you're running a lesser CPU setup.

Yes. Mantle, Vulkan & DX12 are designed to lower CPU overhead and let the GPU do the work. They don't bring much, if any, improvement to visuals.

Hence all those AMD FX-8350s & A10 APUs are now coming into their own when used alongside powerful GPUs under DX12/Mantle/Vulkan.
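
To put rough numbers on the overhead point, here's a toy C++ model of per-draw CPU cost. The per-draw figures are invented purely for illustration (not measured driver numbers); the idea is that older APIs validate state on every draw call, while DX12/Vulkan/Mantle validate up front when command lists are built.

```cpp
#include <cstdio>

int main() {
    const int draws_per_frame = 10000;

    // Hypothetical per-draw CPU costs in microseconds (made up for
    // illustration): the old API validates state on every draw call,
    // the new API replays pre-validated command lists.
    const double old_api_us_per_draw = 5.0;
    const double new_api_us_per_draw = 0.5;

    std::printf("old-API CPU cost: %.1f ms/frame\n",
                draws_per_frame * old_api_us_per_draw / 1000.0); // 50.0 ms
    std::printf("new-API CPU cost: %.1f ms/frame\n",
                draws_per_frame * new_api_us_per_draw / 1000.0); // 5.0 ms
}
```

At 50 ms of CPU work per frame, a weak chip can't get past 20fps no matter the GPU; cut the per-draw cost by 10x and the same chip stops being the limit.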



To give you an example of why DX12/Mantle/Vulkan are good:
In TW Attila at 4K with a GTX 980 Ti, an AMD A10 Kaveri APU is as fast as a 6700K with the same GPU and the same settings, because at that resolution the GPU is the bottleneck, not the CPU.

While in WoT, a single-core DX9 game, regardless of whether I run a GTX 780 GHz, a 290X @ 1100 (one core of a 295X2, since the game doesn't support CF), or an R9 Nano/Fury X @ 1075, the fps hovers between 75 and 84 at 2560x1440 with everything maxed out.
Because the common denominator is the CPU: a 4820K @ 4.953GHz.
If I lower the overclock, the fps tanks regardless of the GPU.

I also used an FX-8350 with that game, alongside the GTX 780 GHz, and barely scraped 70fps at 1080p with lower settings, because the game runs on a single core and is CPU-bound due to the DX9 API.
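
Both cases fit a simple frame-time model: each frame costs roughly max(CPU time, GPU time), so whichever side is slower caps the fps. A minimal sketch, with all timings invented to mirror the two scenarios above:

```cpp
#include <algorithm>
#include <cstdio>

// A frame takes as long as the slower of the two processors needs.
double fps(double cpu_ms, double gpu_ms) {
    return 1000.0 / std::max(cpu_ms, gpu_ms);
}

int main() {
    // 4K, GPU-bound: the GPU needs ~25 ms, so a faster CPU changes nothing.
    std::printf("4K, A10:   %.0f fps\n", fps(10.0, 25.0)); // 40 fps
    std::printf("4K, 6700K: %.0f fps\n", fps(5.0, 25.0));  // still 40 fps

    // Single-core DX9 game, CPU-bound: the CPU needs ~12.5 ms,
    // so a faster GPU changes nothing.
    std::printf("DX9, GTX 780: %.0f fps\n", fps(12.5, 8.0)); // 80 fps
    std::printf("DX9, Fury X:  %.0f fps\n", fps(12.5, 5.0)); // still 80 fps
}
```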
 
Oh yeah, well aware of all this.

Right now, DX12 is doing a basic form of what Mantle did.

I do think DX12 and Vulkan are a lot more comprehensive than Mantle was, though, and have more potential, assuming a game and its engine are actually built with them in mind in the first place. This is where I expect to see the bigger changes: not necessarily just performance increases, but actually making use of the extra draw calls and whatnot to render more detail into scenes, put more AI on the field, run a more complicated agent-based simulation, or do basically anything that's usually CPU-limited. Obviously many games will still be held back by console limitations, since multiplatform development is more common than ever in order to justify the costs of high-quality productions, but there's still a higher baseline shared with consoles in terms of things like draw calls that can improve games in general.
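
The structural feature behind those extra draw calls is that DX12/Vulkan let command recording be spread across threads instead of funnelling everything through one driver thread. A minimal sketch of the pattern, using mocks rather than real API calls (Command, CommandList, and record are my own stand-ins):

```cpp
#include <cstdio>
#include <thread>
#include <vector>

// Stand-in for a recorded draw; real DX12/Vulkan command lists are opaque
// API objects, this mock only illustrates the submission pattern.
struct Command { int draw_id; };
using CommandList = std::vector<Command>;

// Each worker records its slice of the scene into its own list; no shared
// state during recording, so it scales across cores.
void record(CommandList& list, int first, int count) {
    for (int i = 0; i < count; ++i) list.push_back({first + i});
}

int main() {
    const int workers = 4, draws = 10000;
    std::vector<CommandList> lists(workers);
    std::vector<std::thread> threads;

    for (int w = 0; w < workers; ++w)
        threads.emplace_back(record, std::ref(lists[w]),
                             w * (draws / workers), draws / workers);
    for (auto& t : threads) t.join();

    // One cheap "submission" of everything that was recorded in parallel.
    std::size_t total = 0;
    for (const auto& l : lists) total += l.size();
    std::printf("submitted %zu draws from %d threads\n", total, workers);
}
```

That's why engines have to be built for it: the renderer only gets this scaling if its scene traversal is structured to hand independent chunks to each thread.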

And there will be additional graphics capabilities once developers get a grasp of them. It remains to be seen what the upsides and downsides of using them are, so there's no guarantee they all become ubiquitous, but more often than not this kind of potential gets utilised over time.

We're really just taking the baby steps of all this. It's gonna take time before we really start seeing the true advantages.
 
I must have been dreaming. I could have sworn blind that several people had claimed that NVidia had paid to remove DX12 from this game ;)

I read those posts as claiming that Nv paid to remove DX12 Async from the game.

Edit: just ran the benchmark. Is async implemented, and is that why it's slower than DX11?

Perhaps Nv only paid for delayed DX12 support...

I'll get my coat:p
 
Wait, so the API that was supposed to lessen the load on less powerful CPUs is CPU dependent, and therefore makes things run worse on older CPUs? :p

I really hope DX12 with async is the miracle AMD fanboys like that Dave-something bloke were blabbering about, and that a 390 will reach 980 Ti/Fury X levels of performance (not my words), because so far it seems that every game runs worse with it...
 
Interesting VRAM / RAM usage

[Image: RoTR VRAM fix]

[Image: RoTR RAM fix]

http://techbuyersguru.com/first-look-dx12-performance-rise-tomb-raider?page=1
 
In the other two DX12 titles the 390 is up there with the 980 Ti; I think it only runs worse in the one with Nv backing. :p

Really? My bad, then. I've heard very mixed opinions about the performance of all those games in DX12 so I guess I just assumed stuff. Well, that makes the 980Ti obsolete then:D
 
Down 1.5fps for me, but that's within the margin of error. Maybe the gains are bigger for older/slower CPUs?


Fantastic comparison of the differences between DirectX 11 and 12. I noticed some interesting differences between the two.

1:03 - DX11 has two flickering lines while DX12 has no issue, and I noticed better image quality.

1:25 - in DX12 the vegetation textures have no pop-in issue and there are lens flares, while DX11 has vegetation pop-in but no lens flares; the waterfall also flickers in DX11, and DX12 doesn't have this issue.
 
There might be something wrong with the benchmark or the DX12 implementation, but people have been getting better fps in actual gameplay than what the benchmark reports.
 