3D Mark Time Spy not using true A-Sync, Maxwell A-Sync switched off!

Score 0 with NVIDIA GeForce GTX 1080(1x) and Intel Core i7-6850K
:eek: That's a high score... :p


I take it we're meant to be looking at your graphics and CPU scores and not the final score?

A custom run will always score 0, and switching Async off makes it a custom run.
 
Wonder what the outcome will be. I was under the impression that DX12 can have independent branches of code to optimise for different architectures, so it will be interesting to see whether they work with AMD (who, for all we know, possibly hasn't had as much input as Nvidia) to add a higher level of optimisation that leverages AMD hardware better.

On the subject of 'gimping': if you jump on the newest, highest-end Nvidia GPU when it releases, you will never witness 'gimping', ever.

Loads of others running older-gen Nvidia cards complained about 'gimping', with The Witcher 3 as an example. Plenty defended Nvidia, saying they hadn't 'gimped' Kepler performance, but it hurt Nvidia performance badly enough that the Witcher 3 devs introduced a tessellation override for Nvidia users before Nvidia 'fixed' Kepler performance with a driver.

I wonder how many upgrades they got in those three months nudge nudge, wink wink?

Superb business practice as they tend to walk a thin line and get away with it.

Note: I'm not complaining that they do it, just pointing out the obvious, highly publicised and much-discussed reasons plenty of people believe they do, especially those directly affected by it.
 
It seems strange that a next-gen DX12 benchmark wouldn't support true async. I thought something was a little off when comparing the differing card performance in the benchmark thread. I actually thought to myself, "Does this support async?", had a google, and sure enough saw some controversy about this very thing. From the material I looked over, this could be very detrimental to async performance on AMD cards - and obviously only AMD cards, as only they have the capability to run truly independent parallelism.

It all looks rather suspicious, but I'd like to see an official statement from AMD on how much performance this takes away from them. I've seen some tools recently, however, which make me think independent reviewers could actually look at this.
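For a sense of what true async compute actually buys, here's a toy cost model (the pass times are made up for illustration, nothing to do with Time Spy's real workloads): with serialised submission the GPU runs the graphics and compute passes back to back, while with an independent compute queue the best case is bounded by the longer pass.

```python
# Toy cost model of async compute. Numbers are illustrative, not measured.

def serial_ms(graphics_ms, compute_ms):
    """Frame time when compute is serialised after the graphics pass."""
    return graphics_ms + compute_ms

def overlapped_ms(graphics_ms, compute_ms):
    """Best-case frame time when the two passes fully overlap
    (true async compute on an independent queue)."""
    return max(graphics_ms, compute_ms)

if __name__ == "__main__":
    g, c = 10.0, 4.0  # hypothetical pass costs in milliseconds
    print(f"serial: {serial_ms(g, c)} ms, overlapped: {overlapped_ms(g, c)} ms")
```

In practice the win depends on how much idle hardware the compute work can actually soak up; full overlap is the best case, which is why architectures differ so much here.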
 
I'd like to think the devs would use every single DX12 feature available on current hardware at the very least; this is going to get interesting.

Whatever anyone's opinion, true or false - and dodgy driver-level 'optimisations' by vendors have happened on 3DMark before - it's a massive accusation against Futuremark.
 
Futuremark has made some questionable decisions in the past - and come on, folks - this wouldn't be the first time Nvidia has been caught cheating at 3DMark; they've been caught a couple of times. So was ATI a couple of times - there were major uproars at the time, and rightly so.

Looks like Nvidia lied about async abilities on Maxwell - and, as predicted by a few, Pascal is just Maxwell updated to handle async better... Are people actually surprised Nvidia lied? I mean, come on; getting the truth out of Nvidia is rarer these days than, say, a virgin prom queen in California ;)

AMD isn't perfect by any means, but it's been a while since I've seen them just flat-out lie about the abilities of their cards... unlike Nvidia...
 

They're both as bad as each other. It's a dog-eat-dog, competitive two-horse race. AMD's last infraction was the 'overclocker's dream' claim; that alone tells you how far they're willing to go with lies and half-truths. It's up to us to see through it all.
 

That one was bad, but if you look, he was an engineer who wasn't used to speaking to that large a crowd... but at the same time, it should never have been said - Fury did OC, and some cards did well, but not like a dream :D

But if we look at both companies, I'd have to lean towards Nvidia being worse at flat-out lying to customers. People also need to remember Nvidia is not above dumping faulty chips on the market and not giving a damn how much it costs their users or partners when they fail... ;)
 
They're both going to try their hardest to bend the truth ('overclocker's dream'/'twice as fast as Titan X'), but they're not even remotely close in the number of times they've been 'found out'. :p
 
Supposedly FM devs are saying the benchmark adheres to DX12 feature level 11_0. 11_0 misses out on loads of key features; 12_0 has all the most important ones. If true, it's effectively a joke to call this a DX12 benchmark - it's more of a DX11.3 (and barely that) benchmark.
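Rough sketch of why targeting FL 11_0 matters: DX12 apps normally probe feature levels from the top down and run with the highest one the device supports. The level names below mirror the D3D_FEATURE_LEVEL_* constants, but the support sets in the example are invented for illustration.

```python
# Sketch of top-down feature-level selection, as DX12 apps typically do it.
# A benchmark written *against* FL 11_0 only ever requires the lowest level,
# even on hardware that supports 12_0/12_1 - which is the complaint here.

FEATURE_LEVELS = ["12_1", "12_0", "11_1", "11_0"]  # highest first

def pick_feature_level(supported):
    """Return the highest feature level in `supported`, or None."""
    for level in FEATURE_LEVELS:
        if level in supported:
            return level
    return None
```

So a card reporting {"11_0", "11_1", "12_0"} would run at 12_0 in an app that probes properly, but an FL 11_0 engine never asks for the 12_0-only features in the first place.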
 

+1, it's a complete cop-out in terms of a DX12 benchmark; it's not really DX12.
 

Here's the post from the dev.
http://forums.anandtech.com/showpost.php?p=38363396&postcount=82

3DMark Time Spy engine is specifically written to be a neutral, "reference implementation" engine for DX12 FL11_0.

On another interesting note, Time Spy looooves tessellation :O
Well over twice as many triangles in Time Spy Graphics Test 2 as in Fire Strike Graphics Test 2, and almost five times as many tessellation patches between Time Spy Graphics Test 2 and Fire Strike Graphics Test 1.

I didn't know it was such a heavily used feature in DX12/FL 11_0.
Looks like it can handle a tonne more geometry without too much of a performance impact. I wonder how that will translate into games built from the ground up on DX12.

[Image: triangle and tessellation patch counts, Time Spy vs Fire Strike]
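As a back-of-the-envelope model of why patch counts blow up into triangle counts: uniformly subdividing a triangle patch with n divisions per edge yields n^2 small triangles. This is a simplified geometric model, not the exact output of the D3D tessellator with its various partitioning modes, but it shows the amplification at work.

```python
# Simplified tessellation amplification model (not the exact D3D tessellator):
# uniform subdivision of a triangle patch with factor n -> n^2 triangles.

def triangles_per_patch(n):
    """Triangles produced by uniform subdivision with integer factor n."""
    return n * n

def total_triangles(patches, factor):
    """Total triangles for a given patch count at a uniform factor."""
    return patches * triangles_per_patch(factor)
```

Even a modest factor of 8 turns 1,000 patches into 64,000 triangles, which is why a test can post huge triangle counts from a comparatively small patch count.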
 

It is pretty neutral, but neutral is not what it's meant to be. It's meant to push GPUs for all they are worth, and it's not doing that - instead it's hitting some sort of middle ground to make all cards look equal.

FM has become far too commercialised, far too concerned with vendor controversy.
 
Such salt. When a game has AMD branding all over it and shows their hardware having a distinct advantage (to the extent of Hitman showing lower tier AMD hardware beating higher tier NV hardware in DX11) that's fine. But as soon as somebody makes a test that is vendor neutral certain people **** the bed.

I eagerly await the response from FM that will torpedo the tinfoil hat brigade. I wonder what crazy conspiracy they will move to next.
 

It's not really a benchmark if it's not pushing the cards.
 
Can't say I'm surprised, to be honest. Concurrent is probably the best middle ground for both card vendors: under it both Nvidia and AMD cards work, whereas with true async basically only AMD cards would get the benefit, and it would look bad on someone (Nvidia or 3DMark? You choose).

Agreed, but compromise in the middle is not always the right approach. What it decides to use in its tests (by default, at least) should be based on what is representative of games technology today. I.e. are we seeing, or expecting, async to be commonplace? Then it should be on. Do we think nobody will bother? It should be off. Everything in between is a judgement call.

That's what makes the test fair or not, rather than what is a good compromise between vendors. This shouldn't be a round of golf where one party volunteers to play with a handicap. It's more like an assault course where if one party can't get over the wall, tough on them.
 