
Ashes of the Singularity Coming, with DX12 Benchmark in thread.

I've still not seen any results where the end result isn't broadly even, even if nVidia isn't processing it via an async compute path :S
 
^^ If you take overclocking out of the equation though, it's broadly even. (I'm sure there is a cheap shot about Fury X and "overclockers' dream" in there somewhere.)
 
In the end it's probably a good thing that Oxide alerted NVidia to this issue (very publicly, I might add) so that they could quickly rectify it (even though NVidia are currently performing fine without it). In the meantime, thanks to the marketing blitz, AMD have managed to sell 8 Fury Xs, so it's been win-win for everyone.
 

My card does 1420 out of the box, which still nets me a good chunk more performance than the top-scoring Fury X. That's the problem with quoting "stock" performance on a 980 Ti: most of the cards on sale are not reference models like the ones the review sites used.
 
[fanboymode]Just read this and proper laughed (after looking at the AOTS bench thread)


http://www.pcgamesn.com/ashes-of-th...y-support-async-compute-with-software-drivers
[/fanboymode]

:D

How about person-with-brains mode?
Since no one on the Ashes leaderboard is running identical systems, it would be plain stupid to start summarising and concluding that this or that card is the fastest. It really amazes me that things like core count, CPU clocks, CPU NB clocks, RAM speeds and CPU model/generation are completely ignored in this really CPU-dependent benchmark. But that does not stop the same people ridiculing review sites who actually used identical systems with identical settings across different GPUs to get the results they got.
It is no surprise that a system with a higher-clocked CPU, or a CPU with more cores, and a higher-clocked GPU will be faster than a lower-clocked system. There is no point in fanfaring about how this or that is faster when it was influenced by other factors in the system.

Also it amazes me how so-called enthusiasts, who want to measure their e-pen*ses on these leaderboards, lack the enthusiasm to run the damn benchmark in all possible resolutions (this applies to all the benchmark threads). I mean, how hard is it to flip a switch in CCC/whatever nvidia calls their control centre and enable VSR/DSR?
:confused:

/rant
 
I game at 1440P and have run it at 1080P and 1440P, so I'm not sure what else I can do. Maybe underclock my CPU and GPU to make it look fairer?
 

There are people with the same CPU running the same clock speeds on the leaderboard, and there are people with different clock speeds running the same GPU. You can see very clearly that people are GPU-limited and that the bench scales pretty linearly with GPU speed; you also have a six-core CPU beating an eight-core.

Keep clinging though, despite all the evidence proving that's poppycock

If you go back and check, I even did a DSR run for you at 4K, 22fps in DX12
http://forums.overclockers.co.uk/showpost.php?p=28537973&postcount=206
I can add my 24fps DX11 run as well if you like

Also, people who post their system specs in their sig and then accuse other people of epeen are quite literally throwing stones from inside a glass house

DSR/VSR scores aren't accepted in bench threads, as the scores aren't always the same as native runs, and that's become the forum rule :rolleyes:
 

VSR/DSR: enable it in your control centre and set a resolution higher than your screen supports. Though I did hear that nvidia's DSR settings are not as straightforward as AMD's; something to do with the 2x/4x sliders. I might be wrong, since I don't have an nvidia card.
For AMD CCC, you just enable VSR, start the game, set the res you want, and off you go ;)
 

But those VSR/DSR scores are for the most part worthless, as it's not the same as running that resolution properly. You'd be comparing someone's VSR/DSR score against someone else's native-resolution score despite one being easier to run than the other. That's why bench threads don't allow them: it's inaccurate.
 
Yup - not sure what the performance penalty is like on VSR as it works a little differently but DSR generally runs a touch slower than running that res natively - though it is only a very small drop.
 

Yes, this is the case. DSR has a bit of a drop, while I am not sure about VSR.
So I need to ask Triss a question: why worthless? Are nvidia DSR users so afraid to lose 1-2fps in the e-pen*s measurement?

The interesting part is that I cannot find any tests on this. I can find DSR vs VSR tests, but not VSR/DSR vs native :/

I thought benchmarking threads would not allow VSR/DSR because they perform faster than native 4K, but this is not the case, so where is the harm in using it? Unless you guys pretend to think you are running some sort of scientifically accurate research and gathering data :D

In that case, you guys need to remove my 4K/1440p submissions, since my Fury X receives a penalty for the extra work done while upscaling/downscaling :D
So I guess nvidia owners' triumph in the Ashes of the Singularity 4K tests is not valid, since my card would run a bit faster at native 4K :D
 

Why is this even about Nvidia :confused:

Be nice if people didn't have to drag vendors into every argument

It is no surprise that a system with a higher-clocked CPU, or a CPU with more cores, and a higher-clocked GPU will be faster than a lower-clocked system. There is no point in fanfaring about how this or that is faster when it was influenced by other factors in the system.

VSR/DSR is one of those other influencing factors, compared to using native resolution, that you don't seem to like

I'd suggest that if you don't like how most benchmark threads are run, you make your own
 
Aren't you the wise guys (it's in your title :D ).
Why is everyone pitching brand wars at me today?
The only reason I mentioned nvidia is because it is kinda proven that nvidia's DSR, not AMD's, has a small performance penalty. There is no evidence about VSR; if there was, I would have included it in my original post. I did not intend to single out nvidia. And since DSR (and let's assume AMD's VSR) is penalised a little bit (nothing major, as I understand it one or two fps, within margin of error, no? yes?), the only reason VSR/DSR are not allowed is that certain people are afraid 1-2fps will skew the results so much that either their systems will look worse than others, or people will not be able to draw definite conclusions on whose system is best of all.
I for one consider a small penalty a non-issue, and the more results the better.
And as I mentioned earlier, the owners of any benchmark threads I am participating in are more than welcome to delete my entries, since they were all achieved with AMD VSR. Telling me to make my own thread is very rude of you, since I am not telling anyone that their thread is bad and wrong; if I thought like that, I wouldn't take part in those festivities. Since when are suggestions taken as dissatisfaction? Forum goals are to learn and share, not to stick to something you think is right, ignore everyone, and be angry at anyone who asks about one rule or another.

So can we stop with all the brand warring and accusations of thread dissatisfaction, because if nvidians are allowed to take a stab at AMD in their own threads, why am I being called out when I take a stab at nvidia once in a while?
Poor nvidia, it is being bullied on the internets :(
 
No one is being rude, except maybe the person who can't make a post without saying penis

We've explained what the accepted rules have become for bench threads and said that if you don't agree with those rules you are free to start a new bench thread, how is that rude?
We are being inclusive of new ideas if you have any.
 

DSR and VSR don't give true results. Even AMDMatt pointed out that you get slightly more frames running VSR, so to me that is not a fair score and shouldn't be allowed. If you have entered VSR scores in the bench threads, you should really let the thread starter know.

As for Nvidia DSR, I can go up to 5K on my 1440P ROG Swift but only as high as 3200x1800 on VSR with my Fury X. Loads of choices on resolution with Nvidia and 2.25X my resolution gives me UHD resolution.

http://forums.overclockers.co.uk/showpost.php?p=28325429&postcount=7087

Not quite, but it's within 5fps or so. It's a good indication of expected performance, but it's not as demanding as the real thing. :)
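In case the factor maths trips anyone up: the DSR factor applies to the total pixel count, so each axis scales by the square root of the factor. A quick Python sketch (my own illustration, not anything from nvidia's tooling) showing why 2.25x on a 1440p panel comes out at UHD:

```python
import math

def dsr_resolution(width, height, factor):
    # DSR factors apply to total pixel count, so each axis
    # scales by the square root of the factor.
    scale = math.sqrt(factor)
    return round(width * scale), round(height * scale)

# 2.25x DSR on a 1440p panel renders internally at UHD (4K):
print(dsr_resolution(2560, 1440, 2.25))  # (3840, 2160)
# 4x is the familiar pixel-doubling case:
print(dsr_resolution(2560, 1440, 4))     # (5120, 2880)
```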
 

If we can find a benchmark in which AMD hammers Nvidia he will then be happy, it is as simple as that. Mind you, there appears to be hardly anything between the top cards, be they AMD or Nvidia, in the only DX12 benchmark we have.
 
Now, take a look at these results. I borrowed 2400MHz DDR3 memory modules from my friend (I was impressed and made a trade with him).

Here are the results with the same settings ([email protected] + Fury X@1100/550MHz):

DDR1600

1080p-high-1600MHz_zpsx5y5po9w.png

DDR2400

1080p-high-2400MHz_zpsgdesn1yg.png

Now, it's quite safe to say that I'm memory bandwidth limited on DDR1600 with my i7-3770k in this game. I'm impressed.
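That jump lines up with the theoretical numbers: peak bandwidth scales directly with the transfer rate, so DDR3-2400 offers 50% more than DDR3-1600. A rough Python sketch, assuming a standard dual-channel setup with a 64-bit (8-byte) bus per channel:

```python
def ddr3_bandwidth_gbs(transfers_mt_s, channels=2, bus_bytes=8):
    # Theoretical peak: transfer rate (MT/s) x 8-byte bus per channel
    # x number of channels, converted from MB/s to GB/s.
    return transfers_mt_s * bus_bytes * channels / 1000

print(ddr3_bandwidth_gbs(1600))  # 25.6 GB/s
print(ddr3_bandwidth_gbs(2400))  # 38.4 GB/s, a 50% increase in peak bandwidth
```

Real-world gains are smaller than the theoretical 50%, of course, since latency and the rest of the system don't scale with it.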
 

So how about me then? I'm sitting on 3000MHz DDR4 at quite low latencies, with 4 memory channels and the CPU memory controller clocked at 4GHz ;)
 

That's quite interesting. What sort of variance do you normally get from one run to the next, just so we know the margin of error? Good improvement though.
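For anyone who wants to put a number on it, this is roughly how I'd report run-to-run variance; the FPS values below are made up purely for illustration:

```python
import statistics

# Hypothetical repeat runs of the same benchmark, in average FPS.
runs = [41.2, 40.8, 41.5, 40.9, 41.1]

mean = statistics.mean(runs)
spread = statistics.stdev(runs)  # sample standard deviation
print(f"mean {mean:.1f} fps, +/- {spread:.2f} fps run to run")
```

If the gain between two configurations is bigger than a couple of standard deviations, it's probably real rather than noise.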
 