
AMD Polaris architecture – GCN 4.0

He gave links to those benchmark screen grabs posted at the bottom of the article. The images are taken from the live show at the top of the article, and the live show benchmark numbers are higher. They specifically stated the details used in the show yesterday, the 1st of June: the same version and the same settings at 1080p.

Those screen grabs of the benchmark results at the bottom are other benchmark runs from several days before and are not the ones in question. The question is why the images don't match; the images are from yesterday's benchmarks, and the benchmarks run on both cards yesterday used the same version of the game.

You can run the same benchmark more than once at different times. The comparison yesterday on stage used the same version of the game, with completely different performance from those benchmarks dated a week ago. They didn't publicly compare their own performance to Nvidia's across different versions of the game; there just happen to be multiple benchmarks with different versions in the database. What matters is that when you compare performance directly, even more so publicly and on stage, you use the same version and settings, which they did.
 

A simple "sorry, I wasn't paying attention, and yes, the ones linked and specifically asked about did in fact have a different version number" would have sufficed :p

Matt's already trying to find the live ones; they don't appear to be easy to find on the AotS site from the same user.
 

Yeah, except you were saying it was weird that they used different version numbers, with regard to the image quality, the thread and everything else. Older benchmarks having different versions is entirely irrelevant. The information presented by AMD, the benchmarks run in the presentation and the performance figures shown were all on the same version.

The fact that benchmarks run previously have different version numbers means nothing.

Should we remember exactly what you said...

Interesting they even used different versions of the game. Not saying that makes a difference, just thought if you were presenting to the world you would want it all the same.

You were specifically talking about the ones they were presenting to the world... you were wrong, so yeah, a simple "sorry, I was wrong" would be fine.
 
But then we are also told the Ashes developers haven't even written the DX12/async code for Pascal yet, so who knows what to believe!
 
So to improve DX12 performance, Nvidia had to take a few shortcuts?



Really, I think we need to wait for actual 480 reviews; this Ashes debate is going nowhere. I'm still confused about the 51% figure, it's not really clear, and I can't see how it works out to 151% scaling.

Or the developers have a programming bug.

If something isn't rendering correctly then it is either a driver bug or a developer bug. 99% of the time the developer is to blame. Moreover, the developer should be testing the functionality, and if they are sure their code is correct, they should let Nvidia know of a potential driver bug and seek assistance in resolving the problem.
 
Maybe I will get an answer...

Why not just use a single card? I don't get it. 51% scaling across two cards implies only half the performance is being used to beat the 1080, so why not use a single card? It just prompts a cynical view of the whole thing. For goodness' sake, you're putting this card on sale for $200; that's awesome, especially if this benchmark is true. http://www.3dmark.com/3dm11/11263084 For $200 no one expects it to match a 1080 in anything, but if that 3DMark submission is anything to go by, the performance is there, 15% shy of a 980 Ti. Wow, I want one. Just do a straight-up card-for-card comparison; this strangeness that throws up nothing but questions and criticisms is not a good look.

https://www.reddit.com/r/Amd/comments/4m692q/concerning_the_aots_image_quality_controversy/
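To put rough numbers on the 51% / 151% question, here's a minimal Python sketch of the two possible readings, using invented FPS values purely for illustration. Per the Reddit reply quoted a couple of posts down, the 51% turns out to be GPU utilisation in the CPU-bound test, not multi-GPU scaling.

    # Invented FPS numbers purely for illustration - not AMD's actual results.
    single_card_fps = 40.0
    dual_card_fps = 60.4

    # Reading 1: "scaling" as the dual-card / single-card performance ratio.
    scaling = dual_card_fps / single_card_fps      # ~1.51, i.e. "151% scaling"
    gain_from_second_card = scaling - 1.0          # ~0.51, i.e. a "51%" gain

    # Reading 2 (per AMD's later clarification): 51% is per-GPU utilisation in
    # the CPU-bound single-batch test, which says nothing about scaling at all.
    single_batch_gpu_utilisation = 0.51

    print(round(scaling, 2), round(gain_from_second_card, 2), single_batch_gpu_utilisation)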
 
https://www.reddit.com/r/Amd/comments/4m692q/concerning_the_aots_image_quality_controversy/d3sw31g

EDIT: To clarify this, the scaling from 1->2 GPUs in the dual RX 480 test we assembled is 1.83x. The OP was looking only at the lowest draw call rates when asking about the 51%. The single batch GPU utilization is 51% (CPU-bound), medium is 71.9% utilization (less CPU-bound) and heavy batch utilization is 92.3% (not CPU-bound). All together for the entire test, there is 1.83X the performance of a single GPU in what users saw on YouTube. The mGPU subsystem of AOTS is very robust.

So basically they just grabbed the lowest value that the GPU usage was at. 71.9% looks far more realistic.
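Taking that 1.83x figure at face value, a rough single-480 estimate can be backed out of whatever the dual-card setup actually scored on stage. A minimal sketch, with a placeholder FPS value since the on-stage number isn't quoted in this thread:

    # Placeholder dual-card figure - substitute the real on-stage AotS result.
    dual_rx480_fps = 60.0            # hypothetical
    claimed_scaling = 1.83           # from the Reddit reply above

    estimated_single_rx480_fps = dual_rx480_fps / claimed_scaling
    print(round(estimated_single_rx480_fps, 1))   # ~32.8 fps with this made-up input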
 
I ##### hate Reddit. Refresh the page and my post has repositioned itself somewhere in the middle of that wall of replies, not at the top or bottom as a new post but randomly mixed in with no positioning logic whatsoever.
 
EDIT: re-thinking this, I'm probably wrong about mixing the GPU-bound percentage and the FPS from different settings.

That explains the ridiculous number AMD pulled out.

This actually paints a bad picture for AMD, so I'm not sure why they would want to show this. A 1080 doesn't get CPU-bound in the single batch mode and can better leverage its potential, while the 480 inherits the same problem that GCN < 1.4 has of not reaching its theoretical capability. Even weirder, is the reported FPS from the single batch mode? It doesn't seem like it. So they mix and match the numbers from different settings, mistakenly thinking that the 51% GPU-bound number is good when it's actually bad, and cherry-picking that with a higher FPS from a different setting.


At least we get the true scaling factor (1.83x, which is good for AotS) and can then work out the true single-card performance.



Someone please tell me I'm wrong here :confused:
I really don't want to sound like I'm attacking AMD on this, but the above is my sleep-deprived understanding.
 
Welp, I'm not basing anything on that AotS bench as it's pretty unreliable; we really need to see single-card performance in both DX11 and DX12 to make any sort of judgement call on this 480.
 
Yup, I'm glad they've been very forthcoming about the benchmark results.

Re-reading the OP of the Reddit thread, the benchmark seems to be split into three different tests: the first purposely CPU-bound, the second an average of the two, and the last more GPU-heavy.

Which then makes me wonder why the 1080's utilization in the CPU-bound test is so... high.
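If the run really is split into three batch tests, the overall 1.83x presumably comes from combining the per-batch results somehow. Purely as an illustration, assuming a naive average (the real AotS weighting isn't given anywhere in this thread, and these per-batch numbers are invented):

    # Invented per-batch FPS pairs (single GPU, dual GPU) - illustration only.
    batches = {
        "normal (CPU-bound)": (70.0, 85.0),
        "medium":             (55.0, 80.0),
        "heavy (GPU-bound)":  (35.0, 68.0),
    }

    for name, (single, dual) in batches.items():
        print(name, round(dual / single, 2))      # per-batch scaling

    # Naive overall figure: average the per-batch FPS for each setup, then compare.
    single_avg = sum(s for s, _ in batches.values()) / len(batches)
    dual_avg = sum(d for _, d in batches.values()) / len(batches)
    print(round(dual_avg / single_avg, 2))        # overall "scaling" under this scheme

The point being that a CPU-bound batch drags the overall figure well below what the GPU-bound batch alone would suggest.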
 