AMD Polaris architecture – GCN 4.0

Yup, the effects are crap and pointless imo, they always get turned off first if I need FPS.

The idea of it is good, but the way it is handled is awful. Not to mention that I have yet to see a GameWorks effect that looks better than certain built-in engine effects (aside from the volumetric smoke and hair stuff).

Why would you feel forced to buy Pascal if GameWorks is a load of crap in your opinion? Just go Polaris and do without the effects.

Unfortunately this is the effect Nvidia has on most people: can't beat them, join them, as they say :o

Is ROTTR DX11? With MS being involved, I would have thought it would have been developed with DX12. That explains the lack of async compute in the PC version, I guess.
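For anyone wondering what async compute actually looks like from the PC side: it is exposed through DX12's separate compute queues, which a DX11 renderer simply has no equivalent for. A minimal sketch (my own illustration, not anything from the game's code; error handling mostly stripped):

```cpp
// Sketch only: shows how a DX12 app gets a compute queue next to its
// graphics queue, which is the hook "async compute" relies on.
#include <windows.h>
#include <d3d12.h>
#pragma comment(lib, "d3d12.lib")

int main()
{
    // Create a device on the default adapter (assumes a DX12-capable GPU).
    ID3D12Device* device = nullptr;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    // The usual direct (graphics) queue...
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ID3D12CommandQueue* gfxQueue = nullptr;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // ...plus a dedicated compute queue. Work submitted here may be
    // overlapped with graphics work by the driver/GPU.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ID3D12CommandQueue* computeQueue = nullptr;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    computeQueue->Release();
    gfxQueue->Release();
    device->Release();
    return 0;
}
```

The driver and GPU are free to overlap work submitted to the two queues, which is where the console-style gains come from when the hardware handles it well.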

This was taken from one of the tech report sites on TR:

The actual PC version doesn't have any DX12 as of now, and while the Xbox One version did have async compute, the PC version doesn't. However, they did contact Square Enix, who said that the game will support DX12_1 features that, as of now, are only supported by Nvidia Maxwell GPUs.
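If anyone wants to check what their own card reports, the feature level can be queried straight from the D3D12 API. A rough sketch (mine, not from the article; error handling mostly skipped):

```cpp
// Sketch only: asks the driver which D3D feature levels the default
// adapter supports, up to 12_1.
#include <windows.h>
#include <d3d12.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

int main()
{
    ID3D12Device* device = nullptr;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
    {
        std::printf("No DX12-capable adapter found.\n");
        return 1;
    }

    const D3D_FEATURE_LEVEL levels[] = {
        D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_11_1,
        D3D_FEATURE_LEVEL_12_0, D3D_FEATURE_LEVEL_12_1
    };
    D3D12_FEATURE_DATA_FEATURE_LEVELS info = {};
    info.NumFeatureLevels        = _countof(levels);
    info.pFeatureLevelsRequested = levels;
    device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS,
                                &info, sizeof(info));

    std::printf("Max supported feature level: 0x%x\n",
                static_cast<unsigned>(info.MaxSupportedFeatureLevel));

    device->Release();
    return 0;
}
```

On the cards being discussed, Maxwell reports 12_1 while the GCN cards of the time report 12_0 or lower, which lines up with the quote above.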
 
Those clocks are still weak, highlighted even further considering your cards have full-cover blocks and are water cooled. The Fury range are poor overclockers, period.

I realize I'm on a forum called "Overclockers" and yet... If a CPU or GPU can reliably overclock by x%, why shouldn't a company just release it clocked at that speed? Overclocking started as an odd enthusiast hobby, and then, when certain CPUs were found to be wildly overclockable, it became more of a thing, with lots of people hoping to win the 'silicon lottery'. And now it seems to have transitioned into this weird thing where everyone buys a chip, expects to reliably turn it up 20% over the official settings, and is annoyed if they can't.

I'm not excusing some AMD rep calling it "an overclocker's dream" if it isn't (though a 13% overclock seems good to me); I'm just not getting this idea that CPU and GPU manufacturers should release products below what they're capable of so that customers can have the privilege of going through the hassle of getting back the withheld potential themselves. Getting those last drops of performance from a chip when samples vary wildly - alright. But logically, the more consistently a chip manufacturer is able to produce their product, the worse an 'overclocker' it is going to be. It seems like condemning the vendor for improving. In short:

2004:
Manufacturer: "We can't consistently get this performance from this chip, we'll have to sell them running at around 70% of their potential."
Customer: "Yay! I clocked this chip up 500MHz and got something for free!"

2016:
Manufacturer: "We can consistently get our chips running in the top 5-8% of their potential which means people are getting the most out of these chips by default."
Customer: "Boo! Poor overclocker!" :mad:
 
There are loads of factors for not clocking them higher: power, reliability (and no, two hours of stable gaming is not proof of stability), and economics.
99% of overclockers never think about the power increase, but it is there, and companies don't want to go over the specs of motherboards and other components.
 
That went out the window with the 295X2.:D

The real limit with GPUs is cooling, which in most cases means a ceiling of around 250 watts, give or take.
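To put rough numbers on the power point (my own back-of-envelope figures, not anything measured): dynamic power scales roughly with clock times voltage squared, so even a modest overclock eats into that ~250 W cooling budget faster than the clock bump alone suggests.

```cpp
// Back-of-envelope sketch with assumed numbers: estimate board power
// after an overclock using P_new ~= P_old * (f_new/f_old) * (V_new/V_old)^2.
#include <cstdio>

int main()
{
    const double stockPowerW = 250.0;  // assumed board power at stock
    const double clockGain   = 1.13;   // +13% core clock (the figure mentioned above)
    const double voltageGain = 1.06;   // assumed +6% vcore needed for stability

    const double newPowerW = stockPowerW * clockGain * voltageGain * voltageGain;

    std::printf("Estimated power after overclock: %.0f W (+%.0f%%)\n",
                newPowerW, (newPowerW / stockPowerW - 1.0) * 100.0);
    return 0;
}
```

With those assumed numbers, a 13% core bump that needs ~6% more voltage lands around 317 W, which is one reason vendors leave some headroom rather than shipping at the limit.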
 
Kind of sad that only consoles get async shaders for AMD cards, as the Xbox One version seems to perform pretty well considering the specs of the hardware.


Basically, it looks like the console version is running at 1080p with mostly high settings. This is an Xbox One, which is basically using an R7 260X!

http://www.eurogamer.net/articles/digitalfoundry-2016-rise-of-the-tomb-raider-pc-face-off

Console equivalent settings take a long time for us to hammer down, and there may be some small differences, but generally speaking, you're getting a very, very close facsimile of how the game looks running on Xbox One. Identifying these settings is important - it shows us how the developer of the game has prioritised visual features when running on a box with a set level of GPU power. On the PC space, it's a good start in setting a base level of graphical quality, then customising to suit your hardware.

Resolution: 1920x1080 (though cut-scenes render at 1440x1080)
Texture quality: high
Anisotropic filtering: looks like 2x
Shadow quality: high (in terms of resolution but missing details)
Sun soft shadows: off
Ambient occlusion: BTAO (not available on PC)
Depth of field: very high
Level of detail: high
Tessellation: off (but adaptive tessellation is used for snow deformation)
Screen-space reflections: on
Dynamic foliage: medium
Bloom: on
Vignette blur: on
Motion blur: on
Purehair: on
Lens flares: on
Screen effects: on

First and foremost, the venerable DF budget PC with an i3 processor and GTX 750 Ti finally met its match with Rise of the Tomb Raider. Running with settings similar to Xbox One, we found that testing areas saw the game turn in frame-rates between 13 and 25fps. In comparison, the AMD test featuring an R7 360 fared even worse with performance numbers dropping all the way into the single digits at these settings.
 
Edit: Nevermind, you're on about shaders.

Yeah, I meant that and have now edited it to stop any confusion. It makes me wonder what the PS4 version will be like, since it has a far more powerful GPU.

The GTX 960 seems able to do a better overall job than an Xbox One, but it probably means you need something closer to a GTX 950 for a similar experience to the Xbox One in this game. Considering the Xbox One uses a downclocked version of the GPU in the R7 260X with DDR3 (instead of the GDDR5 of the desktop version), the GTX 750 Ti and R7 360 that were used in the test are probably more powerful.

Looks like another meh optimisation job for the PC. At least the Vidal Sassoon hair effects seem to have less of an impact on framerates now, unlike in the first one.
 
There are loads of factors for not clocking them higher: power, reliability (and no, two hours of stable gaming is not proof of stability), and economics.
99% of overclockers never think about the power increase, but it is there, and companies don't want to go over the specs of motherboards and other components.

I don't think power is a factor, and reliability is encompassed in what I'm saying by definition. The reason I don't think power is a factor is that I think people choose between equivalently priced cards primarily based on review benchmarks, with power secondary. I don't think either Nvidia or AMD would make the decision to underclock a card for reviews thinking "well, we'll place lower than our competitors on all the FPS charts, but they'll still buy us because we're reporting 5W lower than we would have otherwise".

The only place where I see overclocking capability as more than a marketing tool is with DDR memory, where the actual standard specifies a maximum and anything over it is, in technical terms, overclocking.
 
Kind of sad that only consoles get async shaders for AMD cards, as the Xbox One version seems to perform pretty well considering the specs of the hardware.


Basically, it looks like the console version is running at 1080p with mostly high settings. This is an Xbox One, which is basically using an R7 260X!

http://www.eurogamer.net/articles/digitalfoundry-2016-rise-of-the-tomb-raider-pc-face-off

This DF guy doesn't seem to be too clued up on things at all. He thinks the game uses async compute (ROTTR is a DX11 game) and also mentions that if you have an Nvidia card you can use DSR to increase resolution (AMD has VSR). And to think we have people looking up to these guys for valid information.:confused:
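For what it's worth, DSR and VSR are doing essentially the same thing under different names: render the frame at a higher resolution, then filter it back down to the display resolution. A toy illustration of that downscale step (not AMD's or Nvidia's actual code, just a plain 2x2-style box filter on a greyscale buffer):

```cpp
// Sketch only: average factor x factor blocks of a high-resolution
// greyscale image down to the display resolution, the basic idea
// behind downsampling/supersampling.
#include <vector>
#include <cstdint>

std::vector<uint8_t> boxDownscale(const std::vector<uint8_t>& src,
                                  int width, int height, int factor)
{
    const int outW = width / factor, outH = height / factor;
    std::vector<uint8_t> dst(static_cast<size_t>(outW) * outH);
    for (int y = 0; y < outH; ++y)
        for (int x = 0; x < outW; ++x)
        {
            int sum = 0;
            for (int dy = 0; dy < factor; ++dy)
                for (int dx = 0; dx < factor; ++dx)
                    sum += src[(y * factor + dy) * width + (x * factor + dx)];
            dst[y * outW + x] = static_cast<uint8_t>(sum / (factor * factor));
        }
    return dst;
}
```

The drivers use fancier filters than this (DSR even has a Gaussian smoothness slider), but the principle is the same on both vendors' cards.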
 
Off topic I know, but there are a few who do reviews who should know better, or at least do the research first. I also refuse to watch any vids as I don't want any spoilers :D
 
This DF guy doesn't seem to be too clued up on things at all. He thinks the game uses async compute (ROTTR is a DX11 game) and also mentions that if you have an Nvidia card you can use DSR to increase resolution (AMD has VSR). And to think we have people looking up to these guys for valid information.:confused:

It's been covered in the computerbase.de article that the PC version does not use async shaders, but it also covered the use of tessellation in the game, which matches the settings the console versions appear to use.

In Rise of the Tomb Raider, Crystal Dynamics uses DirectX 11 tessellation for the snow rendering and for some objects, such as trees and certain surfaces. The visual effect of the tessellation is small. With the snow, the difference, if any, can only be seen with a magnifying glass. Trees and a few surfaces do in fact gain a little more depth and more realistic surfaces, but the visual gain is low.

So, it seems we have another ****-poor PC port. Even on a GTX 750 Ti, which is more powerful and has better tessellation support than the R7 260X-class GPU with DDR3 in the Xbox One, with tessellation off and a technically faster CPU, we are seeing much worse performance on the PC.

But this is a telling statement from the article. Crystal Dynamics, who made the game, said async shaders helped improve performance quite a bit on the consoles, but are now saying that DX12 does not improve performance much on PC cards - so I wonder whether that means AMD or Nvidia ones?

Even though Rise of the Tomb Raider already has very good graphics, it is quite possible, according to Crystal Dynamics, that this will be improved further. The studio is currently examining whether there will be a patch for the new DirectX 12 API. It is already experimenting with the new interface internally, but has not been able to achieve any improvement so far, so there are still no definitive statements in this regard.

So, I get the impression there will be no DX12 version of the game for a while.
 

The DX12 debacle seems to be following a pattern. First we had the ARK: Survival Evolved devs claiming they had it ready and getting a 20% boost, then suddenly backing out of releasing it. Now we have the TR devs saying it gave a good boost on consoles, but suddenly the PC doesn't see any gains? The link between the two appears to be that they are Nvidia-sponsored games. Go figure.

Async has clearly given the Xbox One decent performance, so why wouldn't the PC get the same benefit?
 
Yup, but with Nvidia paying off devs not to use advanced features, while paying devs to use stunted GameWorks features, with half of the significant GameWorks-using games being disasters... people still buy Nvidia thinking they are getting something better, largely because Nvidia spends a huge amount on marketing. I really love people who buy something more expensive from a company that has spent the past decade holding gaming back at every possible stage.

DX10 got semi-gutted because Nvidia couldn't support all the features (despite very early warning it was coming), so Microsoft magically removed all the features Nvidia didn't have.

The company that dared to support the original DX10 spec showed Nvidia up when Assassin's Creed added DX10.1 support and gained what, 15-20% performance; then DX10.1 support was suddenly removed, screwing gamers and reducing performance, because Nvidia said so. How people can support a company that craps on them at every opportunity, I don't know.

Making a mistake is one thing; actively planning to screw over your own customers is pathetic.

Again I'll make the Crysis 2 point: don't focus on the fact that Nvidia attempted to hurt AMD with over-tessellation of flat objects and hidden water... they hurt Nvidia performance by doing that. When can people focus in on that specific point? Nvidia customers who spent maybe £500 on a GPU had their own performance reduced by what, 15-20%, just so Nvidia could reduce AMD performance by 30-35%. And that isn't even as bad as screwing Kepler performance, with people having spent up to what, £800, on a Titan, having their performance eroded over time and finding Nvidia pushing excessive tessellation to hurt AMD and older Nvidia cards alike.

Nvidia will screw Nvidia customers whenever it suits them; I can't believe Nvidia customers not only put up with it, but defend it and continue to support a company that actively holds everyone back.
 
The DX12 debacle seems to be following a pattern. First we had the ARK: Survival Evolved devs claiming they had it ready and getting a 20% boost, then suddenly backing out of releasing it. Now we have the TR devs saying it gave a good boost on consoles, but suddenly the PC doesn't see any gains? The link between the two appears to be that they are Nvidia-sponsored games. Go figure.

Async has clearly given the Xbox One decent performance, so why wouldn't the PC get the same benefit?

Nvidia put on the brakes.
It's called fixing their broken hardware.

GCN rocks, and gets even better with Polaris - a brighter future for us PC gamers.
 
From what I've seen, people are complaining about pretty small stuff in Tomb Raider; it's not that terrible and I wouldn't notice most of it. The smoke looks a little Batman in places, which I notice more than the slight texture issues.
Though I can understand people wanting a game that shows what a PC is really capable of, and I don't think this is it.

Whereas with ARK, they should be ashamed of how bad that looks, lol.
 
The DX12 debacle seems to be following a pattern. First we had the ARK: Survival Evolved devs claiming they had it ready and getting a 20% boost, then suddenly backing out of releasing it. Now we have the TR devs saying it gave a good boost on consoles, but suddenly the PC doesn't see any gains? The link between the two appears to be that they are Nvidia-sponsored games. Go figure.

Async has clearly given the Xbox One decent performance, so why wouldn't the PC get the same benefit?

Maybe the PC is too powerful and can manage to do the same with DX11.

Just a guess but we have seen something along these lines in AOTS.
 
Why would you feel forced to buy Pascal if GameWorks is a load of crap in your opinion? Just go Polaris and do without the effects.

Obviously I don't want to miss out on said effects. We've seen more and more GameWorks effects being added every few months, so it's only going to get more widespread and popular from this point on.
 