
Fable Legends: AMD and Nvidia go head-to-head in latest DirectX 12 benchmark

What I want to see in benchmarks is more mid-range builds with cards like my 670, not 980s and Furys with a 960 as the lowest, plus the latest CPUs (mainly i7s) all the damn time. It's like they think PC users ONLY have the latest hardware, when most have slightly older but still really good kit, like an i5 3570K and a 670, for example.
 
Basically, what I see people waiting for is a game that isn't endorsed or supported by either GPU manufacturer (AMD/Nvidia), is well optimised without being bug-ridden, and can switch between DX11 and DX12. That would provide a fair environment to benchmark both vendors' cards under both APIs, across a multitude of settings and resolutions, most likely from 1080p low/high all the way to 4K low/high.
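That kind of even-handed comparison really just means sweeping the full test matrix. A minimal sketch of how the run list could be generated (the vendors, resolutions and presets here are only illustrative placeholders taken from the post, not any real benchmark harness):

```python
from itertools import product

# Illustrative test matrix for a vendor-neutral DX11 vs DX12 comparison.
apis = ["DX11", "DX12"]
vendors = ["AMD", "Nvidia"]                           # one card per vendor
resolutions = ["1920x1080", "2560x1440", "3840x2160"]
presets = ["low", "high"]

def build_run_list():
    """Every (api, vendor, resolution, preset) combination to benchmark."""
    return list(product(apis, vendors, resolutions, presets))

runs = build_run_list()
print(len(runs))  # 2 * 2 * 3 * 2 = 24 benchmark runs
```

Running the same grid on both vendors' cards is what removes the "endorsed title" bias the post is asking about.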

Yeah... I can't see this happening for a while. Too many DX12 games will be jumped on early by both vendors to promote their cards, in the hope of gaining traction in this DX12 battle, and with DX12 being a new API it will take time for games to be produced without new bugs.
 
What I want to see in benchmarks is more mid-range builds with cards like my 670, not 980s and Furys with a 960 as the lowest, plus the latest CPUs (mainly i7s) all the damn time. It's like they think PC users ONLY have the latest hardware, when most have slightly older but still really good kit, like an i5 3570K and a 670, for example.

Read old benchmarks?
 
Read old benchmarks?

For dx12 stuff?

Just saying DX12 benchmarks shouldn't be aimed purely at the latest and greatest cards, when a lot of older ones will be able to use DX12 too; maybe not every feature, but I assume the majority of the DX12 features that boost FPS.
 
I really don't like how apologetic these guys are about the results, and about the fact that they didn't have a 1350MHz-boosting 980 Ti.

Scared stiff of their results upsetting Nvidia.
 

It's basically that more gamers have Nvidia cards, so in essence you're going to find a lot more Nvidia fanboys or hardcore fans, who get upset more easily when things don't go in Nvidia's favour. If the shoe were on the other foot, you'd be upsetting far fewer people.

But I've got to admit, I'm seeing a lot more upset people when there's negative news about Nvidia nowadays, all over the place.

I didn't see much wrong with their results, to be honest. Reference vs reference? When other review sites have those big overclocked 980 Tis pulling ahead, isn't that kind of obvious?
You could argue that you can overclock a 980 Ti by that much and a Fury X you can't, but there was no misinformation or misleading of people, just upset, angry people, lol. They had to make a statement just to keep everyone happy. Pathetic, to be honest; reviewers should have backbone and not be swayed by the public or a company.
 
My only fear with that is how fair they actually are: if they're apologising for not having a top-clocking card, then they're apologising for not having skewed results. ^^^^ They never tell you the card they're using is boosting 30% above reference, and IMO that's deliberate skewing.

Mine does that out of the box, boosting to 100MHz over the stated max boost. If you're not going to tell people the card is already boosting to just about its maximum overclock, then you're overstating its performance, since a card that boosts that high is not the norm. It also gives the impression there's a ton of overclocking headroom left when in fact there's very little.
-------

I see there's yet another conversation about async.

I have a little experience with it. Nvidia does have async; I know that because I use it in voxel global illumination.
The pre-render is calculated on the CPU asynchronously; the GPU then projects shadow maps for occlusion and lighting through ray tracing.
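The overlap described here (the CPU preparing the next frame's GI data while the GPU renders the current one) is essentially a producer/consumer pipeline. A toy sketch of that scheduling pattern, not of any real D3D12 code; the work functions are stand-ins:

```python
from concurrent.futures import ThreadPoolExecutor

def precompute_gi(frame: int) -> str:
    # Stand-in for the CPU-side asynchronous pre-render work.
    return f"gi-data-{frame}"

def render(frame: int, gi_data: str) -> str:
    # Stand-in for the GPU work: shadow maps plus ray-traced lighting.
    return f"frame-{frame} lit with {gi_data}"

NUM_FRAMES = 3
frames_out = []
with ThreadPoolExecutor(max_workers=1) as cpu:
    pending = cpu.submit(precompute_gi, 0)      # prime the pipeline
    for frame in range(NUM_FRAMES):
        gi = pending.result()                   # wait for this frame's GI data
        if frame + 1 < NUM_FRAMES:
            # Kick off the next frame's CPU work before rendering this one,
            # so CPU and "GPU" work overlap instead of running back-to-back.
            pending = cpu.submit(precompute_gi, frame + 1)
        frames_out.append(render(frame, gi))

print(frames_out[-1])  # frame-2 lit with gi-data-2
```

The point of the pattern is only that the pre-compute for frame N+1 is in flight while frame N renders; on real hardware the win depends on how well the GPU tolerates the concurrent work, which is exactly the complaint below.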

It works, but the result has a heavy hit on the GPU: about 2ms per frame of render cost on an overclocked GTX 970, and significantly more if you have a lot of shaded polys.
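To put that 2ms in context, it helps to compare it against the frame budget. A quick back-of-the-envelope helper (the 2ms cost is the figure from this post; the target frame rates are just common examples):

```python
def frame_budget_ms(fps: float) -> float:
    """Time available to render one frame at a given frame rate."""
    return 1000.0 / fps

def budget_share(cost_ms: float, fps: float) -> float:
    """Fraction of the frame budget consumed by a fixed per-frame cost."""
    return cost_ms / frame_budget_ms(fps)

gi_cost = 2.0  # ms per frame, the quoted cost on an overclocked GTX 970
print(round(budget_share(gi_cost, 60) * 100, 1))   # 12.0 -> ~12% of 16.7ms at 60fps
print(round(budget_share(gi_cost, 144) * 100, 1))  # 28.8 -> ~29% of 6.9ms at 144fps
```

A fixed per-frame cost eats proportionally more of the budget at higher frame rates, which is why the hit looks worse the faster you try to run.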

Let me tell you, async on this current generation of GPUs is not great. It's not something these GPUs were designed for; the performance hit is significant, and even if you have enough power to drive through it for high frame rates, the experience is not smooth.

Being capable of it is one thing, but doing it well is quite another.

In my experience and opinion, we may have to wait for Pascal's improved async hardware before ray-traced GI becomes completely viable. It's too much to ask at the moment. It can be tweaked to be less demanding, but only at the cost of its effects, and what's the point in that?
 

Yeah, I agree!
Even so, I can't see it becoming an issue, to be honest. When actual games release, any of these could happen:
devs will reduce the number of async operations on Nvidia hardware to a level that doesn't hurt performance while maintaining async functionality;
Nvidia will request this (or something similar) if devs haven't done it; or
Nvidia will implement a driver-side fix, similar to how you can reduce tessellation in AMD's CCC on AMD hardware.

With Pascal I'm certainly expecting Nvidia to be prepared for DX12, async compute included.
At the moment we're seeing benchmarks that demonstrate the DX12 API more than actual game performance, which is something Nvidia are trying to get across. When DX12 games ship, we can start looking at DX12 performance properly.
It's perhaps also why async compute was used so heavily in the Ashes benchmark: to demonstrate what DX12 can do, never to suggest that Nvidia cards can't run DX12 as well as AMD's, lol.
 

Yes, you're right, it can be tweaked, but it's messing about trying to find a good effect for the right performance, and frankly, if it were me, I wouldn't want to water it down as much as you'd need to. Unless, that is, your game is going to tell people "you can't run this with anything less than a GTX 770, and even then don't expect good performance".

I would love to upload what I've done so far, but it's 11GB and there are some unfinished things in it (a lot, in fact) and glitches I want to sort before anyone else gets a look at it. It will take some time.
 
Voxel GI with async ray tracing. These aren't the best examples (it works best with reflective surfaces), but you can see the lighting and shading look a lot more natural.

[Off/On comparison screenshots]
 
What I want to see in benchmarks is more mid-range builds with cards like my 670, not 980s and Furys with a 960 as the lowest, plus the latest CPUs (mainly i7s) all the damn time. It's like they think PC users ONLY have the latest hardware, when most have slightly older but still really good kit, like an i5 3570K and a 670, for example.
Agreed. The trouble is this: someone posted on a small YouTube channel's video review, where the fella was testing a 980 and a 980 Ti and had included 780 and 290X results.

One chap's post said, "The fella who runs this site is brain dead. Why the bleep is he showing 780 results?"

I had to explain that maybe it's because people with those cards want to see how much of an improvement the new cards provide. It's the same as you said with older cards too. Sadly, the so-called tech crowd watching the reviews often tends to be gormless kids with no real interest in actually buying the cards or in the continuing development. They just want to see graphs with "one card destroying the other" so they can say "yeah, I support them". It's like a bloomin' football match.

Graphs should show examples from the last couple of generations, so a minimum of three in total: right now that would be the 7000, 200 and 300 series from AMD, and the 600, 700 and 900 series from Nvidia.
 
It's perhaps also why async compute was used so heavily in the Ashes benchmark: to demonstrate what DX12 can do, never to suggest that Nvidia cards can't run DX12 as well as AMD's, lol.

How have they used async more in AOTS?

And have you actually looked at the AOTS bench thread? lol :D
 
Read old benchmarks?

How would that help?
A) They won't have the latest games, and
B) the drivers won't be up to date, which can make quite a difference sometimes. For example, the 290 and 290X got some big improvements through drivers after they originally released; I'm sure plenty of other graphics cards/chips have too.
 

It's a problem with reviewers not retesting; the 290/390 is a classic example of this.

The GTX 970 released and was seemingly way faster than a 290X; that's what the benchmarks said.

Then the 390/390X appeared, the same card as the 290/290X, tested on the latest drivers, and it turned out the 390/290 surpassed the GTX 970, while the 390X was more like a GTX 980.
 

At 2160p the 290Xs have always had the edge over the GTX 980s. :)

The difference some driver updates make can sometimes leave it feeling like you're running an entirely different card. As we all know, review sites don't go back and retest cards when newer drivers appear.
 




TheTechReport have also benched this, with a similar showing to the other sites (except ExtremeTech, for some reason). You can see how massively ahead the 980 Ti is, but curiously, the 390X is ahead of the Fury at 1080p for some reason.
 