AMD's PureHair comes with a very low performance hit - great stuff

Just doubling the light sources for the PC version gives you a much heavier game, which still looks almost the same. Some effects still come with a very heavy performance hit.

Digital Foundry had a good video about Xbox One vs PC performance when both ran at Xbox One detail levels.

https://www.youtube.com/watch?v=YHhPOvlnLGM

Starting at 2:50



Edit:

Here is actually a very good video showing the differences between the Xbox One and PC versions.

https://www.youtube.com/watch?v=ZlkYpNyKCjM

I have to admit that you really have to look for most of the differences to notice them. They are there, but you don't notice them unless you know what to look for.

It's a bit sad when the worst of the consoles means you need a £150 graphics card to do a slightly better or similar job. It makes me wonder, if a PS4 version were launched now, whether it would be better than a GTX960 or R9 380 level experience.

People are talking about getting a decent experience on cards which are £250+, but if the R7 260X-class GPU with DDR3 in the Xbox One can do such wonders at 1080P with CPU cores designed for netbooks and tablets, then a GTX960 or R9 380 should be able to run this very well!

Digital Foundry have tested this with equivalent settings to the console and found that the PC version does run much worse, even with a technically faster CPU. That's with the additional effects turned off.

A GTX960 should be at least 60% to 70% faster than an R7 260X GDDR5. The Xbox One GPU only has access to DDR3, and sure, the ESRAM does help, but there are limitations to what can be done with it due to its size.
 
Ok, I finally got round to giving this an (albeit quick) whirl, and to be perfectly honest it's running really great... to my surprise, what with reading all the doom and gloom on the forums. I haven't looked at any framerates yet, but I wanted to just get a look at the game and how it plays before looking at framerates, as I feel that little number in the corner sometimes colours a person's judgement of how a game actually plays.

What is everyone using to get framerates - Fraps, or something else? I just want to be consistent with what others are using. Oh, and I slammed everything up to max apart from sticking with FXAA. Shame there's no actual benchmark utility in there yet, but there you go.

I'm in work in about an hour but will try and post my results sometime tomorrow for info. I am guessing that Kaap has put settings into the thread for benching, so I will follow those. I have really been pleasantly surprised with it so far... mind you, I am only just past the first "Snow" level, so maybe it gets worse later... I hope not.
:)

I used to run Fraps automatically on starting the PC, but I don't bother anymore. Yesterday I turned it on after getting stutter in ROTTR, and that's what I use; I don't turn Steam's FPS counter on either.
I've realised it's best to avoid counters unless the game is visibly suffering and I want to know what's happening.
 
From the Nvidia GeForce ROTTR Guide page:


http://www.geforce.com/whats-new/guides/rise-of-the-tomb-raider-graphics-and-performance-guide#rise-of-the-tomb-raider-purehair

PureHair

PureHair is Crystal Dynamics and Square Enix's hair rendering technology, which like our own HairWorks technology adds tens of thousands of hair strands to a character model. These hairs act realistically, swaying and moving in concert with character movement, and can be affected by water, wind and snow, and are lit and shaded in real-time by the scene.

In Rise of the Tomb Raider, up to 30,000 strands of hair are applied solely to Lara, with large groups of hairs controlled by master strands that dictate their movement and properties, preventing each individual hair strand from acting independently, and keeping physics calculation costs in check.

What a bunch of losers. They can't even acknowledge who created the PureHair (TressFX) Technology.
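
As an aside, the 'master strand' idea the guide describes is essentially guide-hair interpolation: physics runs on a small set of guide strands, and the thousands of rendered strands are blended from them. A rough, made-up sketch of the idea (this is not the actual PureHair/TressFX code):

Code:
// Rough sketch of the guide/"master strand" idea: only the guides are simulated,
// every other strand is blended from its guides. Entirely made up, not PureHair code.
#include <cstdio>
#include <vector>

struct Strand { float x, y, z; };   // one point per strand, to keep the example tiny

int main() {
    // A handful of simulated guide strands (pretend the physics step already ran on these).
    std::vector<Strand> guides = { {0.0f, 1.0f, 0.0f}, {0.1f, 0.9f, 0.0f}, {0.2f, 0.95f, 0.0f} };

    // Thousands of rendered strands, each driven by two guides and a blend weight
    // chosen when the hair asset is authored. No physics is ever run on these.
    struct ChildStrand { int guideA, guideB; float weight; };
    std::vector<ChildStrand> children(30000, ChildStrand{0, 1, 0.5f});

    std::vector<Strand> rendered(children.size());
    for (size_t i = 0; i < children.size(); ++i) {
        const Strand& a = guides[children[i].guideA];
        const Strand& b = guides[children[i].guideB];
        const float w = children[i].weight;
        rendered[i] = { a.x + (b.x - a.x) * w,
                        a.y + (b.y - a.y) * w,
                        a.z + (b.z - a.z) * w };
    }
    std::printf("simulated %zu guides, interpolated %zu strands\n",
                guides.size(), rendered.size());
    return 0;
}

Only the guides ever touch the physics step, which is why the strand count can be huge without the simulation cost exploding.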
 
From the Nvidia GeForce ROTTR Guide page:


http://www.geforce.com/whats-new/guides/rise-of-the-tomb-raider-graphics-and-performance-guide#rise-of-the-tomb-raider-purehair



What a bunch of losers. They can't even acknowledge who created the PureHair (TressFX) Technology.

Well, it does not help that AMD's PR is so crap that they allow this to happen. They should have been out advertising that their tech is in the game, like Nvidia does at every chance. No, let's complain about GameWorks, and then when we do similar effects, let's actually make sure nobody knows we were involved.

At times it makes me wonder what the heck they are doing. Nvidia must be sniggering at that statement, since they probably know 99% of gamers don't even know it's AMD tech, especially when AMD spent all that effort selling TressFX, then rebranded it to PureHair and then went silent. People have more chance of knowing what TressFX is, but probably have no clue PureHair is AMD technology.

Such an awesome, coherent marketing strategy there. Another AMD own goal.
 
Digital Foundry have tested this with equivalent settings to the console and found that the PC version does run much worse, even with a technically faster CPU. That's with the additional effects turned off.

A GTX960 should be at least 60% to 70% faster than an R7 260X GDDR5. The Xbox One GPU only has access to DDR3, and sure, the ESRAM does help, but there are limitations to what can be done with it due to its size.

Unfortunately that just shows how inefficient DX11 is for CPU threading. You literally need to brute-force through all the calculations on the PC side.

About that 960: it's unfortunate that we didn't actually see its true performance, as it was locked to 30fps in that test with the better CPU. I bet without that 30fps lock it would have been pretty close to the 60-70% you mentioned.
 
Unfortunately that just shows how inefficient DX11 is for CPU threading. You literally need to brute-force through all the calculations on the PC side.

About that 960: it's unfortunate that we didn't actually see its true performance, as it was locked to 30fps in that test with the better CPU. I bet without that 30fps lock it would have been pretty close to the 60-70% you mentioned.

DF tested it at console settings with a GTX750 Ti and it was significantly worse than the console version, and Nvidia drivers are less single-thread bottlenecked too.

http://www.eurogamer.net/articles/digitalfoundry-2016-rise-of-the-tomb-raider-pc-face-off

The GTX960 will produce a bit more consistent experience, and you can bump up AF, and the cutscenes are better. Plus I think the slightly more consistent experience is because they used a decent CPU too, i.e. a Core i7.

Plus, when you consider the R9 390 and GTX950 are probably three times faster than an R7 260X GDDR5, and the GPU in the Xbox One uses DDR3 with the ESRAM for some tasks, it really does not look good.

Also, it appears a DX12 patch might not appear, since the dev supposedly did not see any performance improvements, and they have not used async shaders like in the console version.
 
Also, it appears a DX12 patch might not appear, since the dev supposedly did not see any performance improvements, and they have not used async shaders like in the console version.

Were there actually any rumours that this should have been a DX12 title in the first place? And where did you hear the dev saying DX12 didn't give any boost?
 
Were there actually any rumours that this should have been a DX12 title in the first place? And where did you hear the dev saying DX12 didn't give any boost?

http://www.pcgameshardware.de/Rise-...451/Specials/Grafikkarten-Benchmarks-1184288/

https://translate.google.co.uk/tran...s/Grafikkarten-Benchmarks-1184288/&edit-text=

In an interview about the lighting used in Rise of the Tomb Raider, Gary Snethen also says: "On the Xbox One and under DirectX 12, async compute is used; with DirectX 11, on the other hand, the calculation runs synchronously." Support for the low-level API may possibly come later via a patch.

http://www.computerbase.de/2016-01/rise-of-the-tomb-raider-benchmarks/2/

https://translate.google.co.uk/tran...-tomb-raider-benchmarks/2/&edit-text=&act=url

Even though Rise of the Tomb Raider already has very good graphics, it is quite possible, according to Crystal Dynamics, that this will be improved further. The studio is currently examining whether there will be a patch for the new DirectX 12 API. Internally it is already experimenting with the new interface, but has not been able to achieve any improvement so far, so there are no definitive statements in this regard yet.
 

Fair enough. At least they're experimenting with DX12, which is good news. Though I have a hard time believing that they didn't find any improvements, especially in AMD's case, where most of the top-end cards are suffering from underutilisation under DX11. But time will tell.
 
Fair enough. At least they're experimenting with DX12, which is good news. Though I have a hard time believing that they didn't find any improvements, especially in AMD's case, where most of the top-end cards are suffering from underutilisation under DX11. But time will tell.

DX12 is not a magic solution that instantly makes things faster - you are still accessing the same hardware, and depending on what you are doing, the hardware will be the bottleneck. This is especially true as the resolution increases; shader and pixel fillrate become the defining limits.

DX11 drivers are very smart; they do a lot of optimization. When you move to DX12, a lot of those optimizations go away and it falls to the developer to do the same work. As we have witnessed with the ridiculous AOTS benchmark, developers are not always very good at making a DX12 path run faster than the DX11 path.
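
To give one concrete flavour of the work that moves from the driver to the developer: in DX11 the driver tracks resource hazards for you, whereas in DX12 the application has to state every resource transition explicitly. A minimal sketch, assuming a texture that was just rendered into and is about to be sampled (the helper name is mine, not from any game or engine):

Code:
// Sketch only, not from the game: in DX11 the driver tracks this hazard for you,
// so binding a just-rendered texture as a shader input "just works". In DX12 the
// application has to state the transition explicitly, every time.
#include <d3d12.h>

// Hypothetical helper: transition a texture we just rendered into so the next
// pass can sample it. This is exactly the bookkeeping the DX11 driver did silently.
void TransitionToShaderResource(ID3D12GraphicsCommandList* cmdList,
                                ID3D12Resource* texture)
{
    D3D12_RESOURCE_BARRIER barrier = {};
    barrier.Type                   = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barrier.Transition.pResource   = texture;
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
    barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;
    barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    cmdList->ResourceBarrier(1, &barrier);
}

Forget one of these, or get the before/after states wrong, and you get corruption or stalls that the DX11 driver would simply have hidden from you.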
 
Well, I've not played it, but after watching a good few videos showing off the new hair technology, all I can say is, even though it is definitely much better than the standard hair, it still looks rubbish. There's Lara covered in muck and her hair always seems to be pristine. OK, so it gets wet, but that is about it.

This in-game hair tech still has a long way to go, unless actually playing it is better than the videos.
 
DX12 is not a magic solution that instantly makes things faster - you are still accessing the same hardware, and depending on what you are doing, the hardware will be the bottleneck. This is especially true as the resolution increases; shader and pixel fillrate become the defining limits.

DX11 drivers are very smart; they do a lot of optimization. When you move to DX12, a lot of those optimizations go away and it falls to the developer to do the same work. As we have witnessed with the ridiculous AOTS benchmark, developers are not always very good at making a DX12 path run faster than the DX11 path.

All the Mantle and DX12 demos/games show a decent benefit on AMD hardware at 1080p with a single card. So far DX12 on Nvidia does not seem to give a boost, but the DX12 sample size is about one benchmark, so it's not conclusive.
 
Well, I've not played it, but after watching a good few videos showing off the new hair technology, all I can say is, even though it is definitely much better than the standard hair, it still looks rubbish. There's Lara covered in muck and her hair always seems to be pristine. OK, so it gets wet, but that is about it.

This in-game hair tech still has a long way to go, unless actually playing it is better than the videos.

I can say it does look better than the videos, as YouTube still compresses things horribly. Even then, though, I completely agree about it needing better physics, and that muck, blood and other bits need to get into it.

At least snow gets stuck in it, but that does look pre-modelled. I'm just glad that the difference between Off, On, and Very High is only 4fps for me.

I look forward to the future where hair and fur effects are not only better, but are simply part of the game without needing to be an extra option.
 
+1 ^^^^^ @ D.P

DX12 will help (a lot) where a lot of communication between the CPU and GPU is going on. One example of that would be streamed shadow-casting lighting: too many instances of that in DX11 and the API can't deal with it, so you end up with a lot of queuing, which results in lower FPS. That problem can be extreme if, as a developer, you design the lighting the way you want.
Voxel-based global illumination is another: the ray-tracing is calculated asynchronously on the CPU, with that data sent back to the GPU for rendering. The latency for that needs to be very low or you end up with the traced reflections and shading constantly refreshing, which is not good and limits what's possible in DX11, while DX12 is vastly more efficient for such a task.

If, however, your project employs none of the above, DX11 or DX12 will make no difference.
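
As a toy sketch of that 'compute on a CPU thread, hand the newest result to the renderer' pattern, here is a plain C++ mock-up with made-up names (not from any engine): the render loop never waits for the tracer, so high latency shows up as stale lighting rather than a stall.

Code:
// Toy illustration of asynchronous CPU work feeding a render loop.
// Names and numbers are made up; this is not engine code.
#include <atomic>
#include <chrono>
#include <cstdio>
#include <mutex>
#include <thread>
#include <utility>
#include <vector>

struct TraceResult {
    int frameComputed = -1;       // which frame this lighting data belongs to
    std::vector<float> lighting;  // stand-in for traced GI / reflection data
};

int main() {
    std::mutex m;
    TraceResult shared;              // most recent finished result
    std::atomic<bool> quit{false};
    std::atomic<int> frame{0};

    // "CPU trace" worker: much slower than the render loop, runs independently.
    std::thread tracer([&] {
        while (!quit.load()) {
            TraceResult local;
            local.frameComputed = frame.load();
            local.lighting.assign(256, 1.0f);                        // pretend work
            std::this_thread::sleep_for(std::chrono::milliseconds(50));
            std::lock_guard<std::mutex> lock(m);
            shared = std::move(local);                               // publish result
        }
    });

    // Render loop: never waits for the tracer, just uses whatever is newest.
    for (int f = 0; f < 20; ++f) {
        frame.store(f);
        int computedAt;
        {
            std::lock_guard<std::mutex> lock(m);
            computedAt = shared.frameComputed;
        }
        std::printf("frame %2d: lighting data is from frame %d\n", f, computedAt);
        std::this_thread::sleep_for(std::chrono::milliseconds(16));  // ~60fps pace
    }

    quit.store(true);
    tracer.join();
    return 0;
}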
 
Well, I have been glued to this for the last four hours-ish and it is simply the best-looking game I have played. Dropped to SMAA from SSAA and I'm sitting above 60fps, with only occasional drops to 58-ish fps at 1440P with everything else on max. Great game and very good thus far for gameplay. Eidos did a great job with another TR and performance is sweet all round.

Yeah, it looks great. I tried with the hair off, as someone said it trashes the FPS, but it only gave me a couple more FPS so I turned it back on again; might as well have it looking nice. Wish I could have some better AA though; the FXAA does look OK, but I'm so used to using MSAA now as it keeps everything so crisp looking. Grass always looks a bit fuzzy with FXAA.
 
I look forward to the future where hair and fur effects are not only better, but are simply part of the game without needing to be an extra option.


Although I know what you mean, just look at the graphics settings menu in just about any game - all those things are extra options. This is the problem with having to run on a very varied arrangement of hardware; of course, that is one problem the consoles don't have.
 