So are there any benchmark comparisons with both the 6700K and the 5960X running all cores at 4GHz?
I know that is holding the 6700K back, but it would hopefully give a better comparison of thread scaling/performance.
If we are already at the point of the benchmark being GPU-limited with those processors, then it will have to be done with 3 graphics cards or something.
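To make the thread-scaling comparison concrete, here is a minimal sketch of the arithmetic in C++. All frame rates are purely hypothetical stand-ins for locked-4GHz results, not real benchmark data:

```cpp
// Back-of-the-envelope thread-scaling check (all numbers hypothetical).
// With both CPUs locked to 4GHz, the remaining variable is core count,
// so the 4-core to 8-core speedup shows how well the game scales.
#include <cstdio>

int main()
{
    const double fps_4core = 60.0;  // hypothetical 6700K @ 4GHz result
    const double fps_8core = 90.0;  // hypothetical 5960X @ 4GHz result

    const double speedup    = fps_8core / fps_4core;   // 1.5x in this example
    const double efficiency = speedup / (8.0 / 4.0);   // fraction of the ideal 2x

    std::printf("speedup: %.2fx, scaling efficiency: %.0f%%\n",
                speedup, efficiency * 100.0);

    // If both CPUs produce near-identical numbers, the benchmark is almost
    // certainly GPU-limited and says little about CPU thread scaling.
    return 0;
}
```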
Oh, and AMD runs DX12 better, by the way.
I've said for a long time that Nvidia sold people old tech with the 980 Ti.
Got to love the fanboys.
I was thinking of buying an 1,100 euro card, but then I came to my senses.
AMD FTW!
Easy with the logic there, dude. This forum doesn't take kindly to logic and common sense; we must fanboy everything in favor of Nvidia and Intel 110% of the time!
Because logic dictates that we assume the same situation will exist for all DX12 games, based on our highly representative sample of one.
Even if AMD matches Nvidia in DX12, that is a big plus. It gives consumers more choice and leads to better competition. Isn't that a good thing, or do hardcore Nvidia supporters not like this kind of situation?
That is a great thing, just don't get your hopes up. If you really believe this ridiculous pre-alpha, AMD-sponsored PR benchmark is remotely representative, you will be very disappointed. Because even if it is true, it will simply mean developers won't bother with DX12. DX12 is much harder to program for, and if 82% of the market gets a slowdown despite all the extra resources put in, then developers will simply ignore DX12. They have to write a DX11 path anyway for Windows 8 users and older hardware, especially on AMD, who have less backwards hardware support for DX12.
Alternatively, the benchmark results are simply not representative, and as new benchmarks come out we will see Nvidia and AMD both make very similar gains. AMD might well do a little better relative to Nvidia at lower resolutions and close the gap, which is a good thing, but the AMD fanboys' wet dream simply doesn't exist in reality.
Nvidia have demonstrated good DX12 performance gains in Forza, CryEngine, Unreal Engine 4, Fable Legends, Unity and others not publicly mentioned.
What is very clear with DX12 is that, with less work done by the driver, much more depends on the game engine having architecture-specific optimizations, so game performance will depend heavily on how many resources AMD and Nvidia give game developers for building optimized codepaths. We know historically how the two have compared on that front.
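As a rough illustration of what an architecture-specific codepath might look like at the engine level, the engine can query the adapter's PCI vendor ID through DXGI and pick a tuned shader variant. This is only a hedged sketch under my own assumptions, not anything from Oxide's engine; the variant names are made up:

```cpp
// Minimal sketch: choosing a vendor-tuned shader variant from the DXGI
// adapter's PCI vendor ID. Link against dxgi.lib. The variant names are
// hypothetical; real engines key far more than this off the adapter.
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <string>

using Microsoft::WRL::ComPtr;

std::wstring PickShaderVariant()
{
    ComPtr<IDXGIFactory1> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return L"generic";

    ComPtr<IDXGIAdapter1> adapter;
    if (FAILED(factory->EnumAdapters1(0, &adapter)))  // primary adapter only
        return L"generic";

    DXGI_ADAPTER_DESC1 desc = {};
    adapter->GetDesc1(&desc);

    switch (desc.VendorId)
    {
    case 0x10DE: return L"maxwell_tuned";  // NVIDIA
    case 0x1002: return L"gcn_tuned";      // AMD
    case 0x8086: return L"intel_tuned";    // Intel
    default:     return L"generic";
    }
}
```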
Funny how, if Nvidia have a leg up on AMD in any way, shape or form, even a tiny one, it's full steam ahead with the applause, but if AMD do, it's completely negative. I'll never understand that about this forum.
I wouldn't be reading too much into the results just yet, as this is an unreleased game that doesn't have proper driver support yet. Results will vary because of this (i.e. the 180% increase, or whatever it was, on a 770ti is clearly something wrong with the DX11 implementation, whether driver or game).
The 180% increase on the 780 just proves they are being held back at the driver level to make the 970 look better.
If your DX12 performance isn't better than the drivers' DX11 performance, you're doing it wrong.
This is effectively best-case (again, within reason) AMD GCN performance vs. worst-case (mostly; it's probably not intentionally slowing down Nvidia hardware) Nvidia GM20x performance. All you need to do is look at the DX11 performance. That is the bar to clear, on both sides. AMD set the bar very low because they didn't optimize their DX11 drivers much at all. Nvidia set the DX11 bar as high as possible to show where the developers need to start, not where they should finish.
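To put that "bar to clear" in numbers, here is a trivial sketch of the comparison being argued: each vendor's DX12 result measured against its own DX11 result rather than only against the other vendor. Every frame rate below is hypothetical, chosen only to illustrate the point:

```cpp
// Hypothetical illustration of "the bar to clear": compare each vendor's
// DX12 result to its own DX11 baseline. None of these numbers are measured.
#include <cstdio>

static void report(const char* gpu, double dx11_fps, double dx12_fps)
{
    const double uplift = (dx12_fps / dx11_fps - 1.0) * 100.0;
    std::printf("%s: DX11 %.0f fps -> DX12 %.0f fps (%+.0f%%)\n",
                gpu, dx11_fps, dx12_fps, uplift);
}

int main()
{
    // A low DX11 baseline makes any DX12 gain look dramatic;
    // a high DX11 baseline makes the same DX12 result look like a regression.
    report("Hypothetical GCN card",   45.0, 60.0);  // large relative gain
    report("Hypothetical GM20x card", 62.0, 58.0);  // small regression
    return 0;
}
```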
I'm also pretty skeptical of some of the claims and language coming from Oxide. (http://www.oxidegames.com/2015/08/16/the-birth-of-a-new-api/) "All IHVs have had access to our source code for over a year, and we can confirm that both Nvidia and AMD compile our very latest changes on a daily basis and have been running our application in their labs for months." Sure, but it's an AMD game, so AMD is actively working with the devs while Nvidia isn't. "Some optimizations that the drivers are doing in DX11 just aren’t working in DX12 yet." Or, Nvidia has better DX11 driver optimizations than we have DX12 code optimizations. "This in no way affects the validity of a DX12 to DX12 test, as the same exact workload gets sent to everyone’s GPUs." Um... see above: you're running AMD-tuned code on Nvidia hardware, and then saying this doesn't affect the validity? I call bunk.
In other words, I would be extremely hesitant about making blanket statements regarding what DX12 will and won't do for various GPU architectures based on a single game from a developer that is actively receiving help from only one of the GPU vendors. If we were looking at a game with an Nvidia TWIMTBP logo and Nvidia was doing great while AMD was struggling, I'd be saying the exact same thing. Looking at high-level descriptions of the hardware and theoretical GFLOPS and using that to back up the current performance is silly, because the current performance is already skewed. Why is AMD performing better on a game with an AMD logo that isn't even in public beta yet? (And remember that the beta stage is when a lot of optimizations take place!) Because if it were anything else, we would be really dismayed.
Why isn't Oxide actively monitoring the performance of their shaders on all GPUs? Why did Nvidia have to do the work? Oxide is the developer, and they should be largely held accountable for their performance.
As for AMD's optimized shader code, the only requirement is that it not perform worse on Nvidia hardware than the original Oxide shader code. But it seems like the level of optimizations Oxide has made without help from AMD may not be all that great to begin with. And parts of the engine can and will change, up to and beyond the time when the game ships.
It feels like more than anything, this was Oxide yelling "FIRST!!11!!" and posting a "real-world DX12 gaming benchmark". But like any and all gaming benchmarks, the only thing the benchmark truly shows is how fast this particular game -- at this particular point in time -- runs on the current hardware and drivers.
Ashes is looking more interesting as a way to see what type of CPU is the recommended minimum than as a way of evaluating AMD and Nvidia GPUs against each other. Hell, the instructions for the benchmark even recommend testing it on the AMD R9 Fury X, 390, 380, and 370... but on the Nvidia side, only the 980 Ti is recommended. They already know that their current code is so badly optimized on Nvidia hardware that they only want the press to look at the fastest Nvidia GPUs.