
Radeon RX 480 "Polaris" Launched at $199

So the Oxide devs have come out and confirmed that AMD were telling the truth when they said the GTX 1080 was incorrectly rendering during their AoTS benchmark.

So much for the claims that AMD were "obviously" running it at lower settings.

[image: AoTS rendering comparison screenshot]
And so much for all the people going around saying that the only difference was just 'dynamic snow'.
 
I doubt that some snow can have that much impact; it's most likely irrelevant. Conspiracy & co just thought it would be a good point to argue about.

Not accusing you specifically of it, but it's funny that when people think it's AMD doing it, it's not acceptable and needs an uproar, but when it's Nvidia it's fine and most likely irrelevant. At least that's the way most posts from the average internet user read on most forums/articles I check.
 
So the Oxide devs have come out and confirmed that AMD were telling the truth when they said the GTX 1080 was incorrectly rendering during their AoTS benchmark.

So much for the claims that AMD were "obviously" running it at lower settings.

[image: AoTS rendering comparison screenshot]

Interestingly Nvidia replied to that and said it was the "press driver".
 
Shouldn't be using the Ashes bench for comparisons, really, until things get smoothed out. The Ashes bench is no indicator of performance in other games.
 
Shouldn't be using the Ashes bench for comparisons, really, until things get smoothed out. The Ashes bench is no indicator of performance in other games.

I don't think the Ashes graph was there as a performance indicator, otherwise they wouldn't have added that efficiency twist to it. There must be some new feature we don't really get yet, until we get more in-depth reviews.
 
I don't think the Ashes graph was there as a performance indicator, otherwise they wouldn't have added that efficiency twist to it. There must be some new feature we don't really get yet, until we get more in-depth reviews.

Probably single-pass multi-projection for VR, like Pascal got. This is how you get the "up to 2.8x" performance-per-watt increase. If you get 70% from the process, 70% in VR, and the usual generational evolution of 20-odd %, put into PR terms it's easy to make such big claims.
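As a rough sketch of where a headline number like that could come from, assuming the separate gains are simply multiplied together (my own guessed split, not anything AMD has published):

```python
# Back-of-the-envelope compounding of separate gains (assumed figures, for illustration only)
process_gain = 1.7   # ~70% from the new process node
vr_gain = 1.7        # ~70% from a VR-specific feature like single-pass multi-projection
arch_gain = 1.2      # ~20% usual generational architecture improvement

print(process_gain * vr_gain)              # ~2.89x -> roughly the "up to 2.8x" headline
print(process_gain * vr_gain * arch_gain)  # ~3.47x if you stack the architecture gain too
```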
 
Probably single-pass multi-projection for VR, like Pascal got. This is how you get the "up to 2.8x" performance-per-watt increase. If you get 70% from the process, 70% in VR, and the usual generational evolution of 20-odd %, put into PR terms it's easy to make such big claims.

I'm not talking about VR or Eyefinity or overall Polaris efficiency. On the Ashes graph AMD showed, they wrote "performance + efficiency" and then added the utilisation difference between the two cards, which suggests there is something more to it than just performance.
Most people, like me, don't get the graph; some others have it backwards, thinking about GPU overhead. I think it might be a new feature, and I don't see how they would use multi-projection in that context, it was a single monitor.
 
I'm not talking about VR or Eyefinity or overall Polaris efficiency. On the Ashes graph AMD showed, they wrote "performance + efficiency" and then added the utilisation difference between the two cards, which suggests there is something more to it than just performance.
Most people, like me, don't get the graph; some others have it backwards, thinking about GPU overhead. I think it might be a new feature, and I don't see how they would use multi-projection in that context, it was a single monitor.

The most likely explanation is AMD simply didn't understand the numbers the benchmark displays, hence all the backtracking and corrections. It was some kind of marketing job gone horribly wrong.

The efficiency angle doesn't really fly: 2 x 150 W = 300 W of cards is only slightly faster than 1 x 180 W.
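To spell out the perf-per-watt comparison being made there (the ~10% "slightly faster" figure is an assumption for illustration, not a measured number):

```python
# Perf-per-watt of the dual-card setup vs the single card, using the slide's own power figures
dual_480_power = 2 * 150    # two RX 480s at their quoted 150 W board power
gtx_1080_power = 180        # GTX 1080 board power

dual_480_perf = 1.10        # "slightly faster" - assumed ~10% ahead, for illustration
gtx_1080_perf = 1.00        # normalised baseline

print(dual_480_perf / dual_480_power)   # ~0.0037 perf/W
print(gtx_1080_perf / gtx_1080_power)   # ~0.0056 perf/W - the single card wins on efficiency
```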
 
It doesn't affect AMD cards, so the fairly speculative conclusions about the 480 still stand. The 1080 results might go up or down 1-2%.

How will they go up? Surely they go down.
If the press driver wasn't rendering X amount = more FPS.
The release driver now renders that X amount = less FPS.
 
The most likely explanation is AMD simply didn't understand the numbers the benchmark displays, hence all the backtracking and corrections. It was some kind of marketing job gone horribly wrong.

The efficiency angle doesn't really fly: 2 x 150 W = 300 W of cards is only slightly faster than 1 x 180 W.

Oh come on, a GPU vendor not understanding benchmark numbers for their own GPU? You think AMD are that incompetent? Assuming this is true, which is insane to even consider, it was not presented by a marketing guy like Hallock but by an engineer, the head of the graphics division (Koduri). Even he didn't understand what those numbers were about?
And 150 W is the maximum draw from PCIe + 6-pin; the card is more likely to draw 120-130 W. I think the 150 W TDP announcement is just PR, so it draws the attention of reviewers and puts more focus on the card once they find the real draw is lower.
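For what it's worth, the 150 W ceiling just follows from the connector spec (standard PCI-SIG limits, not an AMD figure):

```python
# Where the 150 W maximum comes from: slot power plus one 6-pin connector
pcie_slot_limit = 75   # W a PCIe x16 slot is specced to deliver
six_pin_limit = 75     # W a 6-pin PEG connector is rated for

print(pcie_slot_limit + six_pin_limit)  # 150 W total budget before exceeding spec
```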
 
Oh come on, a GPU vendor not understanding benchmark numbers for their own GPU? You think AMD are that incompetent? Assuming this is true, which is insane to even consider, it was not presented by a marketing guy like Hallock but by an engineer, the head of the graphics division (Koduri). Even he didn't understand what those numbers were about?
And 150 W is the maximum draw from PCIe + 6-pin; the card is more likely to draw 120-130 W. I think the 150 W TDP announcement is just PR, so it draws the attention of reviewers and puts more focus on the card once they find the real draw is lower.

D.P knows the difference between TDP and actual power consumption and yet he keeps citing the TDP as power consumption.

** keep the personal remarks to yourself please **
 
How will they go up? Surely they go down.
If the press driver wasn't rendering X amount = more FPS.
The release driver now renders that X amount = less FPS.

Even if there was a driver bug, just because something isn't rendered doesn't mean there wasn't a computational cost, and that cost may be higher than if it was done correctly.
 
Oh come on, a GPU vendor not understanding benchmark numbers for their own GPU? You think AMD are that incompetent? Assuming this is true, which is insane to even consider, it was not presented by a marketing guy like Hallock but by an engineer, the head of the graphics division (Koduri). Even he didn't understand what those numbers were about?
And 150 W is the maximum draw from PCIe + 6-pin; the card is more likely to draw 120-130 W. I think the 150 W TDP announcement is just PR, so it draws the attention of reviewers and puts more focus on the card once they find the real draw is lower.



The fact is that the AMD rep had to backtrack and then correct themselves several times before finally explaining that the 51% was from a single-batch setting and was the percentage of time spent GPU-bound, not the utilisation, which is entirely unknown.

So it is just a basic fact that the 51% utilisation on the slide and the AMD rep's explanation that the 51% is actually the percentage of time spent GPU-bound are two completely different things. One of those is wrong by definition.
 