Resident Evil 7 Benchmarks

The main factor appears to be the huge performance loss for the AMD 8GB cards, which is why Guru3D 'updated' its review. What a sad site.

Not to mention the lemon in the room: the RX 470 they tested, which only has 4GB of VRAM, lost a full 35% of its performance from turning off Shader Cache.
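
For anyone wanting to sanity-check a figure like that, the maths is simple enough; here's a quick Python sketch (the FPS numbers are made up purely for illustration, they are not Guru3D's results):

    # Percentage performance loss from turning the Shader Cache off.
    # The FPS values here are hypothetical, for illustration only.
    fps_cache_on = 60.0   # illustrative result with Shader Cache on
    fps_cache_off = 39.0  # illustrative result with Shader Cache off

    loss_pct = (fps_cache_on - fps_cache_off) / fps_cache_on * 100
    print(f"Performance loss: {loss_pct:.0f}%")  # 35% with these numbers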
 
To me it is plain as the nose on your face.

A table of results where the Shader Cache is on for all cards above 4GB of VRAM and off for all cards at 4GB and under.

Unfortunately this table would indeed show AMD in a better light overall and that is the part that a few of you simply cannot stomach.

This Shader Cache feature is available on both sets of cards and is not proprietary tech of either.

Greg, to say that the review was "Fair" with all cards turning off the Shader Cache totally shatters the fence you swing both ways from. Come on fella, admit it: AMD are using the Shader Cache feature much better than Nvidia.

The thing is, this shouldn't be about leaving out benchmarks to suit one set of cards over the other. It should be about the effects of Shader Cache and graphics card memory capacity, as that is what has been discovered here.

The sheer fact that the RX 480 loses well over 40 FPS in this instance is incredible, and if this were the 1060 we would have seen a totally different reaction from the Nvidia users in this thread.

:eek:
 
Everything he said ^ :p
 
As long as this setting doesn't affect visuals, all they have to do is provide the best setting for each individual card. It's like when they compare only DX11 and then only DX12 with the 480 vs the 1060, where the 480 gained performance and the 1060 lost some. Just test what works best, dammit!
 
So actual users would run with it turned on, given it gives better performance, so the original results still stand? ...
Unless you run into VRAM issues the default should be as it is, turned on. If on AMD, you will see your card perform way above its tier.

If on Nvidia, you will also see a performance uplift.
 
AMD get in the lead on DX11 for the first time in I can't remember how long, and it all goes Pete Tong... :p

But what a lead: an R9 390X leading a GTX 1080 @ 1440p.

If this Shader Cache thing becomes more commonplace there are going to be a lot of grinning AMD GPU owners around. :D

Their 290s get yet another boost, and yet another lease of life.
 
I didn't listen when they said 4GB wasn't enough :(

Ah well, with the cache off the Fury is still faster than the 980 Ti. Would have been cool if it had been as fast as a Titan XP though, hehe.
 
Guru3D was spot on with turning the cache off, in my opinion. It is fair to all cards when it is off.

Not disputing that the benchmarks are a bit sus, but that statement has blown my mind tbh :confused:

When they test cards, should they remove memory/cores/clocks/whatever to bring them in line and make it fair?

I know I've referred to hardware, but surely the difference seen on AMD is down to a hardware advantage, or the lack of one on the NV cards, for the difference to be this big?
 
It's best not to challenge anything he says :p
 
So actual users would run with it turned on, given it gives better performance, so the original results still stand? ...

Of course you would... wouldn't you?

In my case (if I get the game) I will turn it off, as my Fury has 4GB of VRAM. If I had an RX 480 I would certainly want the 40-odd FPS back... tiz a no-brainer in my book, and it should be for all AMD users with more than 4GB of VRAM.

I have always said that devs should write games to get the best out of both Nvidia and AMD cards... as long as any proprietary features can be turned off (if detrimental to the other card) by users who wish to do so.

However, this is not one of those cases.

Guru3D should indeed post another set of results with the Shader Cache on for all cards over 4GB and off for all cards 4GB and under. Alternatively, post a set of results, regardless of Shader Cache on/off, that gives the best FPS score for each card. This way you are showing all cards at the best of their ability, which is what would happen if you owned that card.
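
If anyone fancied mocking that up, here is a rough Python sketch of both selection rules (the cards and FPS figures are hypothetical placeholders, not numbers from the review):

    # Two ways to pick which Shader Cache result to report for each card.
    # All data below is hypothetical, just to demonstrate the two rules.
    results = {
        # card: (vram_gb, fps_cache_on, fps_cache_off) -- illustrative only
        "RX 480 8GB":  (8, 110.0, 68.0),
        "R9 Fury 4GB": (4, 70.0, 82.0),
    }

    # Rule 1: cache on for cards with over 4GB of VRAM, off for the rest.
    by_vram = {card: (fps_on if vram > 4 else fps_off)
               for card, (vram, fps_on, fps_off) in results.items()}

    # Rule 2: report whichever setting gives each card its best FPS.
    best_of_both = {card: max(fps_on, fps_off)
                    for card, (vram, fps_on, fps_off) in results.items()}

    print(by_vram)       # {'RX 480 8GB': 110.0, 'R9 Fury 4GB': 82.0}
    print(best_of_both)  # {'RX 480 8GB': 110.0, 'R9 Fury 4GB': 82.0}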

:)
 
You asked me this plenty, so now it's my turn: how do YOU feel about giving up your 290X for the 1080? :D:p
 
You say it's plain, yet you're not even sure what the setting in question is called. That says all one needs to know about this little escapade lol.
 
Judging by how aggressive he got on the Guru3D forum when someone challenged him, I suspect he will do no such thing.
 