
Resident Evil 7 Benchmarks

To me that's just semantics :) . Whether it's in the drivers or the game settings, they turned Shader Cache off and took down the review with Shader Cache on, which had the 390X faster than the GTX 1070.

No one with an RX 480 or R9 390X is going to run the game like that, especially knowing it reduces the performance a lot; it's just plain fake.

That is not semantics at all. Not even slightly. What the game is doing when its Shadow Cache setting is turned off is up in the air...

Shader Cache, the driver setting, stores recently used shaders for reuse, but in small quantities.

They're two completely separate things...
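
For anyone wondering what the driver-side Shader Cache actually does, here is a minimal, purely illustrative sketch in Python: compiled shaders are kept keyed by a hash of their source so they can be reused instead of recompiled, and only a small number are retained. The class, names and sizes are made up for illustration; this is not any vendor's actual implementation.

    import hashlib
    from collections import OrderedDict

    class ShaderCache:
        """Toy model of a driver-style shader cache: reuse recently compiled
        shaders instead of recompiling them, keeping only a small number."""

        def __init__(self, max_entries=64):
            self.max_entries = max_entries   # "small quantities" of recent shaders
            self._cache = OrderedDict()      # keeps recency order for eviction

        def get_or_compile(self, shader_source, compile_fn):
            key = hashlib.sha256(shader_source.encode()).hexdigest()
            if key in self._cache:                 # hit: skip the expensive compile
                self._cache.move_to_end(key)
                return self._cache[key]
            binary = compile_fn(shader_source)     # miss: compile and remember it
            self._cache[key] = binary
            if len(self._cache) > self.max_entries:
                self._cache.popitem(last=False)    # drop the least recently used entry
            return binary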
 
You're not daft, so do you honestly believe the original bench test was correct? The guy shows what happens with the shader cache off and on with his Fury X (4GB)

The Fury X obviously benefits from turning off the cache, just like the other 4GB cards such as the 980, etc.

The issue is that the 480 and 390X are losing a huge amount of performance by turning it off. It looks suspiciously like Guru3D chose to do that to put AMD cards 'back in their place', since he couldn't fathom the fact that a 390X was around the same performance as a 1080.

All he had to do was provide a separate benchmark for the 6GB+ cards and leave the 4GB-and-below cards without Shadow Cache.

In 2017 you'd expect reviewers to show us the uncompromised performance of the 8GB cards, not tweak settings so the 4GB cards can catch up.
 
The Fury X obviously benefits from turning off the cache, just like the other 4GB cards such as the 980, etc.

The issue is that the 480 and 390X are losing a huge amount of performance by turning it off. It looks suspiciously like Guru3D chose to do that to put AMD cards 'back in their place', since he couldn't fathom the fact that a 390X was around the same performance as a 1080.

All he had to do was provide a separate benchmark for the 6GB+ cards and leave the 4GB-and-below cards without Shadow Cache.

In 2017 you'd expect reviewers to show us the uncompromised performance of the 8GB cards, not tweak settings so the 4GB cards can catch up.

Right. The GTX 1080 lost 5 FPS with the cache off; the R9 390X lost 48 FPS.

No one would turn that off, so Guru turning it off is a fake act for a fake review.
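
To put numbers on that argument, here is a tiny sketch of the sanity check a reviewer could run before swapping result sets: compute each card's FPS drop between the two configurations and flag anything disproportionate. Only the sizes of the drops (5 and 48 FPS) come from the post above; the absolute FPS figures are invented placeholders.

    # Hypothetical before/after results. The absolute FPS values are made up;
    # only the sizes of the drops (5 and 48 FPS) mirror the figures quoted above.
    cache_on  = {"GTX 1080": 120, "R9 390X": 115}
    cache_off = {"GTX 1080": 115, "R9 390X": 67}

    for card in cache_on:
        drop = cache_on[card] - cache_off[card]
        pct = 100.0 * drop / cache_on[card]
        note = "  <- disproportionate hit" if pct > 20 else ""
        print(f"{card}: -{drop} FPS ({pct:.0f}%){note}")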
 
Agreed, but I think they choose to lose sight of it to suit their agendas.

Agree with you there, Greg. However, I think what a few people are getting into a rant about is the fact that some reviewers chose to change the set of results that they posted (which indeed did seem to correct the order of performance in which these graphics cards would be expected to run).

Why, after discovering that AMD cards with more than 4GB of RAM got bigger performance boosts (thereby altering the look of the performance table), would you not leave those results in situ alongside the other results for comparison, with an explanation of the Shader Cache effect on cards with more than 4GB of RAM?

It is an agenda, as you rightly said, but there can only be one agenda in this instance: to not show some of Nvidia's new cards being beaten by older AMD cards in this game.

Both results should have been published, with a discussion of Shader Cache and its effects on cards with different memory capacities, which is interesting enough in its own right.

Simple
:)
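
As a rough picture of what publishing both sets could look like, here is a minimal sketch of a bench harness that runs every card with the setting on and off and prints both columns side by side. The card names are taken from the thread; run_benchmark is a hypothetical stand-in for an actual benchmark pass, not any reviewer's real tooling.

    import random

    def run_benchmark(card, shadow_cache_on):
        # Hypothetical stand-in for a real benchmark pass; returns a fake average
        # FPS so the reporting code below can be run end to end.
        return random.uniform(60, 130)

    cards = ["R9 390X", "RX 480", "Fury X", "GTX 980", "GTX 1070", "GTX 1080"]

    results = {card: (run_benchmark(card, True), run_benchmark(card, False))
               for card in cards}

    print(f"{'Card':<10}{'Cache on':>10}{'Cache off':>11}")
    for card, (on_fps, off_fps) in results.items():
        print(f"{card:<10}{on_fps:>10.1f}{off_fps:>11.1f}")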
 
Not sure how to say this but here goes.

For example, I have loads of different GPUs, including a 960, a 970, a Fury, a Fury X, a 1080, a 980Ti, a Titan X, a 390X, a (you get the picture). I decide to do a bench test of these cards in Resident Evil 7. I bench what I assume to be fair and just to all cards, but the results look odd on my first post, so I investigate and realise that a setting isn't giving a fair reflection of the majority of my cards. Do I bench only the cards that it is fair on? Or do I bench all the cards with the detrimental setting turned off and re-post my results?

It isn't a conspiracy, it is the way of benching.

Edit:

One thing I have seen is that no matter how you bench, someone wants it done differently because they deem it unfair. I have my own channel and get lots of conflicting feedback from my subscribers.
 
Guru make it abundantly clear where the performance penalties are, and advise turning the setting off if using a card with 4GB or less, with no visual penalty.

Making claims of slander with nothing substantial to back it up is all we have here.
 
I thought maxed Ultra settings meant 'all the bells and whistles turned ON'?

I'll just keep my thoughts to myself though; surely you guys can work out what I am thinking about these benches ;)
 
So maybe it is the same thing, but it's sad that some reviewers disabled it to show Nvidia in a better light, even though it is a feature that works on cards from both vendors.

It wasn't just that though. It was an in-game setting, and turning it off helped the 4GB Fiji cards too. At the end of the day, high-end cards should have the RAM not to need it turned off, while lower cards with only 4GB or less, along with the Fiji cards, did need it turned off.

The only suspect thing I see is the 980 not even being in the original testing, but then it is in once the cause of the performance drop was ascertained. That makes me question why it wasn't included in the first graphs and whether that was down to how it did with the cache on.
 
Guru was spot on with turning the cache off, in my opinion. It is fair to all cards when it is off.

Then it's not a benchmark. How do you define a benchmark? The poster below is spot on.

As I've said before, as the setting is not vendor-supplied, it should be left on for the official benchmark, while of course providing an alternative benchmark with it off so people can see what it's like when playing on high settings rather than max settings.

Shadow Cache on is max settings; with it off it's only high.
 
Not sure how to say this but here goes.

For example, I have loads of different GPUs, including a 960, a 970, a Fury, a Fury X, a 1080, a 980Ti, a Titan X, a 390X, a (you get the picture). I decide to do a bench test of these cards in Resident Evil 7. I bench what I assume to be fair and just to all cards, but the results look odd on my first post, so I investigate and realise that a setting isn't giving a fair reflection of the majority of my cards. Do I bench only the cards that it is fair on? Or do I bench all the cards with the detrimental setting turned off and re-post my results?

It isn't a conspiracy, it is the way of benching.

At 1080p, the only cards to lose performance from what was posted up are the Fury cards. All the other cards gained or stayed the same. Do you then gimp the other cards by turning off said setting? The Shadow Cache is enhancing performance at 1080p, and with it off the 390X/390/480 and RX 470 4GB are unfairly, heavily gimped. The GTX 960 2GB even loses around 10% performance. All switching it off did was change the order around and make AMD's cards, bar the Fury X, slower than they are.

My point is it's not really detrimental to any cards at 1080p, bar the Furys, going by the Guru comparison. I am only basing this on the 1080p results, as that's all I can see posted up on this site.
 
Cache on or off still does not explain the 20 FPS difference between the 390X and 390; these two cards are almost identical.
 
It wasn't just that though. It was an in-game setting, and turning it off helped the 4GB Fiji cards too. At the end of the day, high-end cards should have the RAM not to need it turned off, while lower cards with only 4GB or less, along with the Fiji cards, did need it turned off.

The only suspect thing I see is the 980 not even being in the original testing, but then it is in once the cause of the performance drop was ascertained. That makes me question why it wasn't included and how badly it did with the cache on.

Look at the 1080p chart again and you will see it's not detrimental to the performance of even 2GB cards like the GTX 960. Every single Nvidia card had higher performance or stayed the same. The Furys were the only cards to gain performance with it turned off.


Cache on or off still does not explain the 20 FPS difference between the 390X and 390; these two cards are almost identical.

At 1080p there is only a 14 FPS difference with the cache turned on. That's a difference of 11%, which is what it should be.
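
As a back-of-envelope check on those two figures (both taken from the post above), a 14 FPS gap that works out to roughly 11% implies the 390X was somewhere around 125 to 130 FPS on that chart, assuming the percentage is taken against the 390X's result:

    gap_fps = 14     # 390X vs 390 gap at 1080p with the cache on (quoted above)
    gap_pct = 0.11   # the ~11% it "should be" (quoted above)
    implied_390x = gap_fps / gap_pct   # ~127 FPS, if the % is relative to the 390X
    print(f"Implied R9 390X result: ~{implied_390x:.0f} FPS")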
 
Guru make it abundantly clear where the performance penalties are, and advise turning the setting off if using a card with 4GB or less, with no visual penalty.

Making claims of slander with nothing substantial to back it up is all we have here.

This is simply giving advice based on the type of cards people own; a benchmark shouldn't pander to this. A PERFORMANCE benchmark is about STRESS TESTING hardware, and to do that you bring the hardware to its knees.

It is then up to the user whether they can use the setting or not. Omitting a non-vendor-supplied setting from the article is pandering to mindshare, and does come with the negative of upsetting a certain vendor.
 
Not sure how to say this but here goes.

For example, I have loads of different GPUs, including a 960, a 970, a Fury, a Fury X, a 1080, a 980Ti, a Titan X, a 390X, a (you get the picture). I decide to do a bench test of these cards in Resident Evil 7. I bench what I assume to be fair and just to all cards, but the results look odd on my first post, so I investigate and realise that a setting isn't giving a fair reflection of the majority of my cards. Do I bench only the cards that it is fair on? Or do I bench all the cards with the detrimental setting turned off and re-post my results?

It isn't a conspiracy, it is the way of benching.

Edit:

One thing I have seen is that no matter how you bench, someone wants it done differently because they deem it unfair. I have my own channel and get lots of conflicting feedback from my subscribers.

Yes, and what Guru did was use settings that reduced the 390X performance by 48 FPS and the GTX 1080 by only 5.

No 390X or GTX 1080 user will turn that cache off, because on both it reduces performance for no reason, so there is no reason for Guru3D to do it. The only effect it has is to reduce the performance of the 390X a lot more than that of the GTX 1080.

So that is the only reason they would do it.
 
Look at the 1080p chart again and you will see it's not detrimental to the performance of even 2GB cards like the GTX 960. Every single Nvidia card had higher performance or stayed the same. The Furys were the only cards to gain performance with it turned off.

The Fury X only gains 8 FPS with Shadow Cache off, which is within the margin of error judging by the Nvidia 1080 result across both tests.

They also added the 980 to the list in the update, which makes it even more suspicious. It looks like the initial test made the 980 much slower than a 470/390, so they omitted it altogether, then added it once they had gimped the AMD cards so they looked worse than the 980/1060.

The main factor appears to be the huge performance loss for the AMD 8GB cards, which is why Guru3D 'updated' his review. What a sad site.
 
I know it's a hard thing to swallow that someone like Guru3D would behave this way, but the thing is they have staff to pay, and their income is not from advertising; it's from what they get from the vendors they review. You don't bite the hand that feeds you: if a vendor doesn't like what they see and sends you instructions, you do it.
 