Resident Evil 7 Benchmarks

You're coming across as very misinformed.

You're calling me misinformed? OK then, answer this question.

If you are changing settings for a given set of hardware in order to obtain a certain amount of performance, what do you call that?

If you test something with everything at max and ignore any apparent architectural advantage or weakness, what is that more of?

These two scenarios are very different and give very different results. What Guru3D did was tweak the settings because they felt the numbers seemed odd; to achieve this, they TWEAKED the settings away from max / everything on.

If you truly want a true performance benchmark, then you test with everything on, ideally you also use the newest thing, and you ignore what the hardware is good or bad at. If you change things to get the best performance out of a given piece of hardware, that is literally called tweaking - which is what Guru3D did.
 
'Shader' Cache is a recently added function in AMD drivers anyway; nobody complained and switched off the Shader Cache in Nvidia drivers in past benchmarking... even if it's different (didn't someone say it's a similar thing?) to Shadow Cache.

I just think benchmarking is everything maxed. It's a massive balls-up that Guru3D have made here though... in my opinion.

We need results from other benchmark reviews to get an overall impression of performance.

Some benchmarks here... not sure on the presets used. EDIT: the presets are shown on the pics - are they maxed out?

http://wccftech.com/resident-evil-7-biohazard-pc-performance/
 
You're calling me misinformed? OK then, answer this question.

If you are changing settings for a given set of hardware in order to obtain a certain amount of performance, what do you call that?

If you test something with everything at max and ignore any apparent architectural advantage or weakness, what is that more of?

These two scenarios are very different and give very different results. What Guru3D did was tweak the settings because they felt the numbers seemed odd; to achieve this, they TWEAKED the settings away from max / everything on.

If you truly want a true performance benchmark, then you test with everything on, ideally you also use the newest thing, and you ignore what the hardware is good or bad at. If you change things to get the best performance out of a given piece of hardware, that is literally called tweaking - which is what Guru3D did.

OK, I'll try to explain this better, and please try to have an open mind about it, because this is meant to help.
I'll use Vulkan Doom as an example. Yes, you have a setting in the game which you can toggle to enable Vulkan or OpenGL, but this setting has nothing to do with the actual visual settings. Behind closed doors, Vulkan is completely changing how the game is rendered compared to OpenGL. It merely takes advantage of features more prominent in AMD cards. It's not a trick or a cheat or a lowered graphics setting. In other words, it is unlocking the full potential of the GPU, just as DX11/OpenGL uses Nvidia cards to their max potential (open to debate). This is comparable to how different drivers give different results, only at a much lower level. It differs in that this is totally different code the game is written in, not how the card processes the code, as with a driver.

Understand that nowadays there is no one definitive way to run games any more. With these cards becoming so complex and so different to each other, it becomes sensible to give users more choice in how they wish the game to be rendered based on their hardware. Where you seem confused is that we have to choose which platform to run through the settings, but rest assured no graphics changes are being made - no tweaks, nothing except how your GPU renders... with any settings, be it max/lowest/whatever.
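To make that point concrete, here is a tiny sketch of the idea that the rendering backend and the visual settings are independent choices. The names (Backend, VisualSettings, GameConfig) are made up for illustration, not how Doom or RE7 actually structure their config:

```python
from dataclasses import dataclass
from enum import Enum

class Backend(Enum):
    OPENGL = "OpenGL"
    VULKAN = "Vulkan"

@dataclass
class VisualSettings:
    texture_quality: str = "max"
    shadow_quality: str = "max"
    anti_aliasing: str = "TAA"

@dataclass
class GameConfig:
    backend: Backend          # which API renders the frame
    visuals: VisualSettings   # what the frame is asked to look like

# Same visual settings, two different rendering paths:
doom_on_gl = GameConfig(Backend.OPENGL, VisualSettings())
doom_on_vk = GameConfig(Backend.VULKAN, VisualSettings())

assert doom_on_gl.visuals == doom_on_vk.visuals  # identical image quality requested
```

The requested image quality is identical on both lines; only the code path that produces the frame (and therefore the performance) differs.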

As for this shader cache, to my knowledge it simply allows cards to utilise more memory by storing unused textures for later use, or something to that effect. If you don't have the memory, it will clog up your VRAM and bog down performance. And that is with both AMD and Nvidia cards.
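For what it's worth, here is a minimal sketch of the general "do the expensive work once, keep the result in memory" idea behind any render-side cache, whether that's compiled shaders or cached shadow data. It assumes a simple in-memory dictionary keyed by a hash; the class and the build step are illustrative, not the actual AMD/Nvidia driver or RE Engine implementation:

```python
import hashlib

# General caching idea: trade memory for repeated work.
class RenderCache:
    def __init__(self, budget_mb):
        self.budget_mb = budget_mb   # how much memory the cache is allowed to use
        self.used_mb = 0
        self.entries = {}

    def get(self, source, build, size_mb):
        key = hashlib.sha256(source.encode()).hexdigest()
        if key in self.entries:
            return self.entries[key]      # hit: reuse earlier work for free
        result = build(source)            # miss: pay the expensive cost once
        if self.used_mb + size_mb <= self.budget_mb:
            self.entries[key] = result    # only keep it if the budget allows
            self.used_mb += size_mb
        return result

# Example: build something expensive once, then reuse it on later requests.
cache = RenderCache(budget_mb=256)
compiled = cache.get("shader source", build=lambda s: f"compiled({s})", size_mb=4)
```

With a generous memory budget the hits are cheap wins; with a tight budget nothing useful gets kept and you pay the build cost every time anyway, which is roughly the "bogged down" behaviour described above when memory is short.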
 
Hopefully a more credible site will benchmark the game and show the true performance of each card. Guru3D is now a proven shill, so it's pointless to rely on them.





Wow... he sure has. When I read it earlier he told a guy to F... OFF when challenged. What a joker.

Here's the link to the discussion anyway.
http://forums.guru3d.com/showthread.php?t=412306&page=3

To be honest, although he shouldn't have lost his cool, I'd get fed up with people not reading the review properly too.

It's a bit like people on here. All this exercise has proven is how little attention people pay, on here especially...

We had people comparing the original graph's performance to the new one left, right and centre...

It's embarrassing.
 
Yeah, it's one game bench from one site with questionable results... hardly an authoritative consensus.

I've just taken delivery of an RX 480; I'm half tempted to buy the game just to see for myself.
 
OK, I'll try to explain this better, and please try to have an open mind about it, because this is meant to help.
I'll use Vulkan Doom as an example. Yes, you have a setting in the game which you can toggle to enable Vulkan or OpenGL, but this setting has nothing to do with the actual visual settings. Behind closed doors, Vulkan is completely changing how the game is rendered compared to OpenGL. It merely takes advantage of features more prominent in AMD cards. It's not a trick or a cheat or a lowered graphics setting. In other words, it is unlocking the full potential of the GPU, just as DX11/OpenGL uses Nvidia cards to their max potential (open to debate). This is comparable to how different drivers give different results, only at a much lower level. It differs in that this is totally different code the game is written in, not how the card processes the code, as with a driver.

Understand that nowadays there is no one definitive way to run games any more. With these cards becoming so complex and so different to each other, it becomes sensible to give users more choice in how they wish the game to be rendered based on their hardware. Where you seem confused is that we have to choose which platform to run through the settings, but rest assured no graphics changes are being made - no tweaks, nothing except how your GPU renders... with any settings, be it max/lowest/whatever.

As for this shader cache, to my knowledge it simply allows cards to utilise more memory by storing unused textures for later use, or something to that effect. If you don't have the memory, it will clog up your VRAM and bog down performance. And that is with both AMD and Nvidia cards.

I actually understood early on what you were getting at. What I am saying to you is that you seem to prefer changing things around for each card to get the best performance out of it, which confirms my point about tweaking.

You then produce an article based on tweaking settings to get the best out of the hardware, and your pointless drivel is based on this, yet that's not what a performance benchmark is supposed to be about.

This was supposed to stress-test the hardware and see where it falters. You do not then change the benchmark based on that, otherwise it is no longer a performance benchmark; you keep going, and that keeps it consistent.

Guru3D did not set out to produce an article on how to get the best out of your hardware; they originally set out to see how the cards fared with everything on, and the results went against whatever the **** they believe in.

They tweaked the settings and then officially presented their graphs with no comparison. Guru3D right now are saying this is the performance benchmark; they have put the update note in a small line and have not changed the heading of the article.

They really need to be very open about what they have done in order to achieve the results on their graphs, and make it extremely clear for all to read. If they want to produce an article that favours each vendor's hardware specifics and the settings used for it, then fine. They have not done this, it was not the point of the article, yet this is the premise you are defending.

Your average Joe will see the Guru3D graphs and think this is the hierarchy at max settings - an extremely misleading set of graphs.

It is up to the user to configure their system based on what they see; if this means toning things down or turning settings off, then fine, but the reviewer in a performance benchmark shouldn't be doing this.
 
OK, I'll try to explain this better, and please try to have an open mind about it, because this is meant to help.
I'll use Vulkan Doom as an example. Yes, you have a setting in the game which you can toggle to enable Vulkan or OpenGL, but this setting has nothing to do with the actual visual settings. Behind closed doors, Vulkan is completely changing how the game is rendered compared to OpenGL. It merely takes advantage of features more prominent in AMD cards. It's not a trick or a cheat or a lowered graphics setting. In other words, it is unlocking the full potential of the GPU, just as DX11/OpenGL uses Nvidia cards to their max potential (open to debate). This is comparable to how different drivers give different results, only at a much lower level. It differs in that this is totally different code the game is written in, not how the card processes the code, as with a driver.

Understand that nowadays there is no one definitive way to run games any more. With these cards becoming so complex and so different to each other, it becomes sensible to give users more choice in how they wish the game to be rendered based on their hardware. Where you seem confused is that we have to choose which platform to run through the settings, but rest assured no graphics changes are being made - no tweaks, nothing except how your GPU renders... with any settings, be it max/lowest/whatever.

As for this shader cache, to my knowledge it simply allows cards to utilise more memory by storing unused textures for later use, or something to that effect. If you don't have the memory, it will clog up your VRAM and bog down performance. And that is with both AMD and Nvidia cards.

The Shadow Cache option is about pre-rendering all those shadows; when you turn it off, they are done in real time. Different GCN generations also benefit in different ways. Fast-path code is also implemented in the game, and the latest AMD generations like it better. Not to mention the 8GB of VRAM.
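Assuming that description is right, the difference between the option being on and off is roughly "render once and reuse" versus "render every frame". A minimal sketch of that idea, with render_shadows() and scene_version as illustrative stand-ins rather than anything from the actual RE Engine:

```python
class ShadowCache:
    """Keep the last rendered shadow data and reuse it until the scene changes."""
    def __init__(self):
        self.cached = None
        self.cached_version = None

    def get(self, scene_version, render_shadows):
        if self.cached is None or scene_version != self.cached_version:
            # Expensive: render the shadows into a texture (costs VRAM to keep around).
            self.cached = render_shadows()
            self.cached_version = scene_version
        return self.cached   # cheap reuse on frames where nothing relevant changed

# With the option OFF, the frame loop would call render_shadows() every frame
# ("real time"): less VRAM used, more GPU work per frame - and how much that
# extra work costs varies by architecture, which is why cards react so differently.
```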
 
Not disputing that the benchmarks are a bit sus, but that statement has blown my mind, tbh :confused:

When they test cards, should they remove memory/cores/clocks/whatever to bring them in line to make it fair?

I know I've referred to hardware, but surely the difference seen on AMD is down to a hardware advantage (or the lack of one on NV cards) being such a big difference?

When I do my own testing for my YouTube channel, I test with everything maxed out (everything), but would it be fair to do the same for a 970 or a Fury X, for example? I would be ripped a new one if I did that.

Here is me testing 20 games at 1440p on a GTX 1080; look at the poor performance in some of the games. Quite shocking really!

So would it have been fair to do that with the above cards? The cache also gives no enhanced image (that I am aware of), and if it is detrimental to some cards, it makes sense to turn it off, no?
 
The cache also gives no enhanced image (that I am aware of), and if it is detrimental to some cards, it makes sense to turn it off, no?

The shader cache option has no impact on visual quality, and looking at the on vs off comparison, the majority of both brands of cards gain FPS; only a few cards lose a few FPS:

[Graphs: Original Guru3D test (Shadow Cache = ON) vs. updated Guru3D test at 1080p (Shadow Cache = OFF)]

What I find even more interesting is how the 970 and 980 are excluded from the benchmarks with shader cache on...
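One way to read the two graphs is simply the per-card delta between the runs. A small sketch of that comparison; the 390X figures (140 FPS on, 92 FPS off) are the ones quoted later in this thread, and any other entries would be filled in from the Guru3D charts:

```python
def shadow_cache_delta(fps_on, fps_off):
    """Return {card: (delta_fps, delta_percent)} for cards present in both runs."""
    out = {}
    for card in fps_on.keys() & fps_off.keys():
        delta = fps_off[card] - fps_on[card]
        out[card] = (delta, 100.0 * delta / fps_on[card])
    return out

fps_on  = {"R9 390X": 140}   # Shadow Cache ON
fps_off = {"R9 390X": 92}    # Shadow Cache OFF

for card, (delta, pct) in shadow_cache_delta(fps_on, fps_off).items():
    print(f"{card}: {delta:+.0f} FPS ({pct:+.1f}%)")
# R9 390X: -48 FPS (-34.3%)
```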
 
The shader cache option has no impact on visual quality, and looking at the on vs off comparison, the majority of both brands of cards gain FPS; only a few cards lose a few FPS:



What I find even more interesting is how the 970 and 980 are excluded from the benchmarks with shader cache on...

The 980 is even slower than the RX 480 with Shadow Cache off, and the 970 is behind the RX 470 (4GB).
 
I'd just look elsewhere for other Resi Evil 7 benches if you care that much; why keep worrying about a single dubious site's bench of a single game?
 
Graphical/game performance is all that matters; run each card with whatever tuning you like if it has zero effect on visuals or any aspect other than FPS. These caches are irrelevant to those qualities from what I understand, so use whatever.
 
See, you can't have a reasoned debate/discussion if you are going to delete any comments that do not agree with your take on things. However, there's no need to call people nasty names or swear at them etc... but this seems to be ridiculously one-sided on Guru3D, and in my opinion it reduces his credibility somewhat.

Another thing that sprang to my mind......

Within the last few months we have seen some review sites putting the DX11 results for Nvidia cards up against the DX12 results for AMD cards, simply because that was the fastest FPS either card got out of a game. Okay, I am good with that.

However, with this approach in mind, why isn't the same being done now with Shader Cache on or off, to show the fastest FPS for each card in Res 7?

We must keep a level playing field and not move the goalposts when it suits us... and indeed that is what it looks like is happening here. It seems it is okay to mix and match DX versions to get the fastest result per card, but not with Shader/Shadow Cache... when it doesn't suit the green team.

Also, if this is indeed a bug in Nvidia's drivers not using shader caching correctly, then when they do fix it and Nvidia cards get a boost from it... that is when it will be okay to use it, yes? Mark my words... the same goes for Async Compute when Volta finally has it. ;)


Turning Shader Cache off reduces performance for both; it's also not showing both in the best light, it's showing both in the worst light. It's just not showing Nvidia as badly compared to AMD with that cache off, because the 390X loses 48 FPS to the GTX 1080's 5: with it on the 390X is faster than the 1080, with it off it's not.

That's the only reason they would do that. There is no other reason, and there is certainly no reason why anyone would turn it off, none.
 
Graphical/game performance is all that matters; run each card with whatever tuning you like if it has zero effect on visuals or any aspect other than FPS. These caches are irrelevant to those qualities from what I understand, so use whatever.

How can it be irrelevant if it gives a massive performance improvement for some cards?

I think it's rather silly not having two graphs, really: one with SC on, and another with it off.
That gives everyone the full picture of how their card will perform in the game.

If I had a Hawaii-based card there's no way in hell I'd run the game with SC off. The 390X drops from 140 FPS to 92 FPS if Shadow Cache is turned off.
 
When I do my own testing for my YouTube channel, I test with everything maxed out (everything), but would it be fair to do the same for a 970 or a Fury X, for example? I would be ripped a new one if I did that.

So would it have been fair to do that with the above cards? The cache also gives no enhanced image (that I am aware of), and if it is detrimental to some cards, it makes sense to turn it off, no?

There's a clear difference between singular testing and a group shoot-out test.

For example, if you were to put up a vid of 'RX 480 8GB v 1060 6GB: who's the fastest?'

You categorically would get ripped a new one (on this forum at least) if you refused to test both the SC-on and SC-off scenarios (if needed) to show each card's maximum performance in one of your vids.

Wouldn't that only be fair at the end of the day, or should reviewers now be 'hiding' performance?
 
Guru3D was spot on with turning the cache off, in my opinion. It is fair to all cards when it is off.

Sorry, I disagree.

When talking about settings with little or no visual impact, what people actually do is this:

  • if X hurts performance on some cards but has no effect on others, leave it off
  • if X boosts performance on some cards, but has no effect or is even detrimental for others, then you use the best option for your card

Example: DX11 games with DX12 ports. If I had a Maxwell card, I would run the game in DX11, as it gives me the same visuals but higher FPS. Similarly, I would use DX12 with GCN cards, for the exact same reason. If I wanted to see a comparison, I would ideally see a mixed DX11/DX12 graph with the best result per card, or at least separate DX11/DX12 graphs.

Same for Doom Vulkan/OpenGL. Nobody cares how fast OpenGL is on AMD cards, because everyone will use Vulkan. No visual impact and better performance.
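A rough sketch of what that "best result per card" mixed graph would boil down to: for each card, keep whichever API scores higher and label it. The numbers here are placeholders, not real benchmark data:

```python
def best_api_per_card(results):
    """results: {card: {api: fps}} -> {card: (best_api, fps)}"""
    return {card: max(apis.items(), key=lambda kv: kv[1])
            for card, apis in results.items()}

# Illustrative placeholder numbers only.
results = {
    "Card A": {"DX11": 95, "DX12": 110},   # would be plotted as its DX12 result
    "Card B": {"DX11": 120, "DX12": 105},  # would be plotted as its DX11 result
}
for card, (api, fps) in best_api_per_card(results).items():
    print(f"{card}: {fps} FPS ({api})")
```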
 