Resident Evil 7 Benchmarks

I am curious if any of you who fight so hard over these results have actually played the game even once :)

I haven't played the game. I will tell you this however: based on the graphs I saw, if my RX480-8GB gets 128 FPS with the shader cache on, I sure as hell am not going to go and turn it off, restricting it to 91 FPS! Why would I do that?

And if anyone asks me what sort of performance to expect from an 8GB RX480, I will tell them to expect about 128 FPS!
 
Like I said, max is max.

He should have just done all the cards with the cache ON and then all of them again with the cache OFF, graphed them accordingly, and included a page explaining what Shadow Cache is, how it is used and how it affects VRAM and performance.

Pretty bloody simple really :)

Actually, that's exactly what he did! He did perform the benchmark twice and did collect results for SC on at first, and then for SC off.

Only instead of presenting both results side by side and explaining that "as you can see, this setting gives a huge boost if you turn it on for AMD cards that have more than 4GB RAM", he came back and said "here are the proper results; you need to turn that SC thingy off, especially if you don't have more than 4GB of RAM", which is just misleading...
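
For what it's worth, here's a rough Python sketch of the side-by-side presentation being described. Only the RX480 8GB figures (128 FPS on / 91 FPS off) are from this thread; the second row is purely a made-up placeholder, not real benchmark data.

```python
# Minimal sketch of a side-by-side shader-cache comparison table.
# Only the RX480 8GB figures come from this thread; the other row
# is a made-up placeholder, not real benchmark data.
results = {
    "RX480 8GB":            {"cache_on": 128, "cache_off": 91},
    "Card X (placeholder)": {"cache_on": 100, "cache_off": 95},
}

print(f"{'Card':<24}{'SC on':>8}{'SC off':>8}{'Delta':>8}")
for card, fps in results.items():
    delta = fps["cache_on"] - fps["cache_off"]
    print(f"{card:<24}{fps['cache_on']:>8}{fps['cache_off']:>8}{delta:>+8}")
```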
 
Not disputing that the benchmarks are a bit sus, but that statement has blown my mind tbh :confused:

When they test cards, should they remove memory/cores/clocks/whatever to bring them in line and make it fair?

I know I've referred to hardware, but surely the difference seen on AMD comes down to a hardware advantage (or the lack of one on the NV cards), given how big the gap is?

It would be interesting if the developers of the game explained exactly what this does.

I don't think it's just memory.

If it were just a matter of "this game uses 4GB of VRAM, and anything above that is used for shader cache, so the more you have to spare the faster it runs", then Nvidia cards with 8GB or more would also see a nice boost. Looking at the graphs above though, the 1070/1080 show a difference of just 4 or 5 FPS!

I suspect that there may be some pre-rendering going on using async shaders or something on AMD cards. So it's also a matter of being able to use the extra VRAM, which apparently GCN can do whereas Pascal/Maxwell cannot.
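
Just to spell out the "spare VRAM" reasoning: a back-of-the-envelope sketch, where the ~4GB working set is the hypothetical figure used in the post above, not a measurement.

```python
# Rough headroom check for the "extra VRAM gets used as shader cache" theory.
# The ~4 GB working set is the hypothetical figure from the post above,
# not a measured value.
GAME_WORKING_SET_GB = 4.0

def cache_headroom_gb(vram_gb: float) -> float:
    """VRAM left over for cached shaders once the game's working set is loaded."""
    return max(vram_gb - GAME_WORKING_SET_GB, 0.0)

# On this theory alone, any 8 GB card (AMD or Nvidia) would have the same
# headroom, which is why memory by itself doesn't explain the gap.
for vram in (4, 8):
    print(f"{vram} GB card -> ~{cache_headroom_gb(vram):.1f} GB of headroom for the cache")
```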
 
Actually, that's exactly what he did! Only instead of presenting both results side by side and explaining that "you can get a huge boost if you turn this on for AMD cards with more than 4GB RAM", he said "here are the proper results; you need to turn that SC thingy off, especially if you don't have more than 4GB of RAM", which is just misleading...

The more I compare the old and updated benchmarks, the more certain I am that GURU3D intentionally left out the 980 and 970 from the initial benchmark because they were getting destroyed by the RX470.

The updated results show virtually the same fps for all the Nvidia cards, yet the AMD cards lose significant performance, so the only reason he re-tested with shadow cache off was to bring the AMD cards down.
 
Turning Shader Cache off reduces performance for both. It's also not showing both in the best light, it's showing both in the worst light; it just doesn't show nVidia as badly compared to AMD with the cache off, because the 390X loses 48 FPS to the GTX 1080's 5. With it on, the 390X is faster than the 1080; with it off, it's not.

That's the only reason they would do that; there is no other reason. There is certainly no reason why anyone would turn it off, none.

So I take it you have done your own thorough independent testing to come to this conclusion :confused:
 
The more I compare the old and updated benchmarks, the more certain I am that GURU3D intentionally left out the 980 and 970 from the initial benchmark because they were getting destroyed by the RX470.

The updated results show virtually the same fps for all the Nvidia cards, yet the AMD cards lose significant performance, so the only reason he re-tested with shadow cache off was to bring the AMD cards down.

*conspiracy hat ON*
you think it's a coincidence that there are virtually no other benchmarks of the game out yet? Someone is putting a halt on it until they find a fix, then the regular sites will get the GREENlight to post benchmarks :D
even Guru got arm-twisted into switching the benchmarks to shadow cache OFF, although it doesn't make any sense, just because the FPS gains on AMD are too high compared to the competition :D
 
So I take it you have done your own thorough independent testing to come to that conclusion? :p

Don't own the game and only have one card, so I rely on people who do have multiple cards to do the testing.

I'm not the one throwing around accusations or claiming my opinion as fact now, am I :confused:
 
Sorry, I disagree.

When talking about settings with little or no visual impact, what people actually do is this:

  • if X hurts performance on some cards but has no effect on others, leave it off
  • if X boosts performance on some cards, but has no effect or is even detrimental for others, then you use the best option for your card

Example: DX11 games with DX12 ports. If I had a Maxwell card, I would run the game with DX11, as it gives me the same visuals but higher FPS. Similarly, I would use DX12 with GCN cards, for the exact same reason. If I wanted to see a comparison, I would ideally see a mixed DX11/DX12 graph with the best result for each card, or at least separate DX11/DX12 graphs (there's a small sketch of this idea at the end of the post).

Same for Doom Vulkan/OpenGL. Nobody cares how fast OpenGL is on AMD cards, because everyone will use Vulkan. No visual impact and better performance.
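
A tiny sketch of that "best option per card" selection; the FPS numbers are made up purely to illustrate the logic, not real results.

```python
# Pick the fastest render path per card instead of forcing one path on everything.
# The FPS numbers here are made up purely to illustrate the selection logic.
fps_by_path = {
    "Maxwell card": {"DX11": 90, "DX12": 80},
    "GCN card":     {"DX11": 75, "DX12": 85},
}

for card, paths in fps_by_path.items():
    best = max(paths, key=paths.get)   # choose whichever path gives the higher FPS
    print(f"{card}: run {best} ({paths[best]} FPS)")
```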

Yep. The aim isn't, as some have stated, "to make things hard for the card" - if you've set the image quality stuff the same on each card then the test is fair. If one card can give you better performance at that image quality through a different render path then great - use that one. Why assume one or the other is better? Be it a DX11/12 change or a caching change or anything of that type.

...
The cache also gives no enhanced image (that I am aware of), and if it is detrimental to some cards, it makes sense to turn it off, no?

Only if you want those cards to shine when in real use they would not. If most cards run better with a setting on, and this setting has no impact on image quality either way, why on earth would turning it off for everyone make sense? Yes, image quality settings need to be the same between cards - but cache management?

Given your logic so far, maybe for ultimate fairness you should use the same driver for all cards. After all, changing it could cause weird results. It'll be tough on the vendor who didn't get to supply a driver for their own cards and got 0 because the rival's driver didn't run the game, but at least it's fair.
 
I'll have to mess with the settings, as 4K maxed on my 980 Ti causes it to freeze up at some points, using all the VRAM and 12GB of RAM. A Titan X Pascal uses 11GB of VRAM according to some dude's random YouTube account.
 
The cache also gives no enhanced image (that I am aware of), and if it is detrimental to some cards, it makes sense to turn it off, no?

This is what I always say about "Depth of Field" and "Motion Blur". They do the opposite of enhancing the image and cost fps to do it! They are always the first things I turn off. The difference can sometimes be like going from a 1070 to a 1080, and yet people who want to say and feel like they are maxing games leave them on. Lol.
 
I actually understood early on what you were getting at. What I am saying to you is that you seem to prefer changing things around for each card to get the best performance out of them, which confirms my point about the tweaking.

You then produce an article based on tweaking settings to get the best out of the hardware, and your pointless drivel is based on this, yet that's not what a performance benchmark is supposed to be about.

It's not a performance review unless they show all of the tests. It's just deliberately holding back cards.
So by your logic Guru3D should leave the "setting" off, because off means it's adding something to stress the GPUs more, right? Even though it seems more stressful to some of the cards but less stressful to others. And then we just pick the ones with the lowest AMD scores, right?
 
Why are people still confused about this? Are they really that stupid?

So I take it you have done your own thorough independent testing to come to this conclusion :confused:

lol, don't be silly. They can't even get the function's name right. What's the point in reading the vitriol, if the very thing they're arguing about isn't actually the thing they're arguing about? :D

These are the facts (yes, ladies and gentlemen, the facts):

1) It's made clear in the conclusion why one would want to turn shadow cache off

2) The conclusion mentions interlacing being left on for the original test; ergo the shadow cache comparison to the original graph in this thread's OP is useless. It tells you nothing you want to know (unless the results in that particular graph are all that matters to you)

3) The conclusion mentions that he plans on revisiting testing in a few weeks to try and make things more objective.

We've had people swinging slanderous claims at him, when truth be told all of this is purely sloppy journalism.

We've also got people who are trying to object on some kind of irrational principle level; nobody can comment on whether shadow cache should be left on or off, because (besides the fact they don't know what they're talking about) nobody in here is certain about what this setting does.

There are obviously toggles in certain games whereby turning them 'on' actually isn't necessarily in your interest... You only have to look back at previous tessellation arguments, or, for instance, dynamic resolution scaling.


There is literally nothing to see here, besides a site that could have quite easily not rushed this piece out and done things a little more objectively from the beginning. Couple that with the fact nobody in here can apparently read properly, and you've got a bunch of misinformed idiots. That's all, folks.
 
At 1080p there is only a 14fps difference with the cache turned on. That's a difference of 11%, which is about what it should be.

In the original graphs post (which has now been deleted) there was a 20fps difference between the 390X and the 390 (97fps v 77fps). This made the 390X more than 25% faster, which is totally unrealistic.
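
Just to spell the arithmetic out; the 97/77 figures are the ones quoted above from the deleted graphs.

```python
# Relative advantage of the 390X over the 390 in the original (now deleted) graphs.
fps_390x, fps_390 = 97, 77
advantage_pct = (fps_390x - fps_390) / fps_390 * 100
print(f"390X is ~{advantage_pct:.0f}% faster")  # ~26%, versus the ~11% gap called realistic above
```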

I also think that Guru3D have made a total mess of the testing and we should be looking elsewhere for an independent source of performance figures.:)
 
Guru3D has omitted the shadow-cache-on benchmark here; if he wanted to be neutral or fair, he should have provided both results side by side.

Unless there was some other reason why the first results were wrong, that's what should have happened. They should have kept the graphs with everything on, then added a set of graphs showing just the cards that get a boost from turning the shader cache off, along with an explanation of why the setting affected those cards so much.
After all, when it came to Rise of the Tomb Raider they didn't drop the texture setting so that performance on the Fiji cards wouldn't fall apart due to the memory usage, did they?
 
...
I also think that Guru3D have made a total mess of the testing and we should be looking elsewhere for an independent source of performance figures.:)

Absolutely. Although I'm disparaging their technique, at this point it's not really about their results; for those we'd be better off looking elsewhere (and there are plenty of other places with results now).

Edited as a word filter was picking a substring that wasn't really a rude word :/
 
What's funny is that there are only 2 people on here defending Guru3D and agreeing with what they've done; those 2 must be on Steam chat having a good laugh/troll again... ;) :D :p
 
The more I compare the old and updated benchmarks, the more certain I am that GURU3D intentionally left out the 980 and 970 from the initial benchmark because they were getting destroyed by the RX470.

The updated results show virtually the same fps for all the Nvidia cards, yet the AMD cards lose significant performance, so the only reason he re-tested with shadow cache off was to bring the AMD cards down.

Really weird, that. However, it's a nice boost for those RX480 owners (8GB version), where you actually see a nice big jump in FPS with that extra RAM. My old saying "you can never have enough memory" is coming true here, and those saying 4GB is all you'll ever need should think twice.
 
This is what I always say about "Depth of Field" and "Motion Blur". They do the opposite of enhancing the image and cost fps to do it! They are always the first things I turn off. The difference can sometimes be like going from a 1070 to a 1080, and yet people who want to say and feel like they are maxing games leave them on. Lol.

I always keep depth of field on but turn motion blur and chromatic aberration off. I feel both ruin the IQ.
 