AMD really need to start making noises with regards to Vega, building up some momentum and excitement prior to release.
This would not only increase sales but also stop people going over to Nvidia in the interim.
Nvidia do not have the hardware to properly implement async compute. This is not a case of them just 'not wanting' to do something. These architectures are designed many years ahead of time and they can't just throw async compute-capable engines on their GPUs late in the process.
That is not true at all and shows a big misunderstanding of what async compute is. You are probably confused by the fact that Nvidia moved the scheduler from a fixed hardware scheduler in Fermi to a mixed hardware-software scheduler in Maxwell, and misunderstood that to mean that Nvidia removed scheduling hardware. But moving the scheduler into drivers can actually enable much greater flexibility and the ability to optimize at a much coarser level with much more sophisticated optimization. That isn't software emulation or any such nonsense; it's a design decision that can improve performance and functionality. Maxwell's async performance suffered for other reasons, such as the granularity of its preemption and its static compute partitioning. Much of this was rectified in Pascal, which is shown in benchmarks:
https://www.pcper.com/reviews/Graph...Looking-DX12-Asynchronous-Compute-Performance
The GTX 1080 gets a 6.8% performance improvement with async; the RX 480 gets an 8.5% improvement.
It would be interesting to see results with the latest Nvidia drivers and how the 1080 Ti does, but I would expect at least a 10% performance improvement from enabling async.
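For anyone wondering what 'doing async compute' actually looks like at the API level, here is a rough D3D12-style sketch; it is purely illustrative and untested, 'device', 'gfxList' and 'computeList' are assumed to already exist, and error handling is omitted. The app creates a second, compute-only queue alongside the direct (graphics) queue and fences between them; whether the two queues genuinely overlap on the GPU is then entirely down to the hardware and driver, which is exactly what is being argued about here:

// Minimal sketch: a second D3D12 queue so compute can overlap graphics.
// Assumes 'device', 'gfxList' and 'computeList' exist; no error handling.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void SubmitAsyncCompute(ID3D12Device* device,
                        ID3D12CommandList* gfxList,
                        ID3D12CommandList* computeList)
{
    ComPtr<ID3D12CommandQueue> gfxQueue, computeQueue;

    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;    // graphics + compute + copy
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&gfxQueue));

    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;   // compute-only queue
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));

    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

    // Kick off the compute work; it MAY run alongside the graphics work
    // if the hardware/driver can schedule the two queues concurrently.
    ID3D12CommandList* c[] = { computeList };
    computeQueue->ExecuteCommandLists(1, c);
    computeQueue->Signal(fence.Get(), 1);

    // The graphics queue waits on the fence (GPU-side, not CPU-side)
    // before it consumes the compute results.
    gfxQueue->Wait(fence.Get(), 1);
    ID3D12CommandList* g[] = { gfxList };
    gfxQueue->ExecuteCommandLists(1, g);
}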
Nvidia are just killing PC gaming, and all us ******* mugs are just letting them; we should all be ashamed.
NVIDIA, THE CANCER OF PC GAMING!
Dude, contrary to the rest, and given your priors, you need to put /s at the end of your post.
That is not true at all and shows a big misunderstanding of what async compute is.
It is essentially true. I'm simplifying things a bit for the sake of not writing out a ton on it, but your comments are not really disproving my point. I realize async compute can work on Pascal, but it doesn't have specific hardware support for it. As you say, they've moved the functionality largely to a driver-level function, which is not going to be nearly as efficient.
DX12 is not a good measure of A-Sync given that it doesn't a…
nVidia's 'Software Solution' is limited to fewer threads than AMD's 'Hardware Solution', and there is also a CPU overhead cost with nVidia's software A-Sync; it's why nVidia's CPU-bound performance doesn't scale with more than 4 cores whereas AMD's does.
It's why nVidia are so bottlenecked on Intel's and AMD's lower-MHz 6- and 8-core CPUs versus higher-MHz 4-core CPUs. AMD will make use of the extra threads; with nVidia they sit idle, as nVidia's A-Sync can't make use of them.
It's more complicated than that, and it's probably why Ryzen struggles: its performance and ability to take advantage of additional cores is bound by tick rate and, to some extent, memory latency/bandwidth. To take advantage of threading to the same extent you need increasingly fast tick-over and communication between the worker threads and the main scheduler/marshalling thread.
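To put that last point in toy-code form (nothing to do with any real driver or engine, just the shape of the bottleneck): however many worker threads you add, everything still funnels through the one marshalling thread, so throughput is capped by how fast that single thread can tick over:

// Toy sketch: N workers funnelling into one marshalling thread.
// Throughput is capped by the single consumer, no matter how many
// workers you add; this is the bottleneck described above.
#include <atomic>
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

std::queue<int> work;           // "commands" produced by the workers
std::mutex m;
std::condition_variable cv;
std::atomic<bool> done{false};

int main() {
    const int workers = 8;      // adding more workers doesn't help...
    std::vector<std::thread> pool;
    for (int i = 0; i < workers; ++i)
        pool.emplace_back([] {
            for (int j = 0; j < 100000; ++j) {
                std::lock_guard<std::mutex> lk(m);
                work.push(j);
                cv.notify_one();
            }
        });

    // ...because every item still passes through this one thread, just
    // like a single scheduler/marshalling thread feeding the GPU driver.
    std::thread marshal([] {
        while (!done) {
            std::unique_lock<std::mutex> lk(m);
            cv.wait_for(lk, std::chrono::milliseconds(1),
                        [] { return !work.empty(); });
            while (!work.empty()) work.pop();   // "submit" the commands
        }
    });

    for (auto& t : pool) t.join();
    done = true;
    cv.notify_one();
    marshal.join();
}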
A couple of new games will answer the question of how close Ryzen is to Intel in gaming once and for all, starting with Bethesda's Prey; since AMD partnered with them, we are about 2 weeks away from release.
Watch this from here: https://youtu.be/0tfTZjugDeg?t=10m53s
Despite this, with all the microcode patches and so on, the IPC in games is actually roughly equal to Intel's now, just as it always has been in productivity work.
The Ryzen chips are clocked about 20% lower than the 7700K, and with nVidia GPUs that actually shows: the average review has the 7700K 20% ahead, apart from Tomb Raider, where it's more like 40%.
AdoredTV picked up on this; it didn't make any sense to him, so he investigated. While the reason for Tomb Raider's odd performance is a conundrum, what was obvious was that all reviewers are using nVidia GPUs, for good reason. But if you can get enough AMD GPU power, like CrossFire 480s, what you find is that with AMD GPUs the much lower-clocked Ryzen chips catch right up with the 7700K, even in Tomb Raider; the reason being that AMD's A-Sync Compute makes use of the extra threads on Ryzen while nVidia's doesn't.
With the 295X2, the 3GHz Ryzen 1700 is faster than the 4.2GHz 7700K, again in Tomb Raider.
https://www.youtube.com/watch?v=nLRCK7RfbUg&feature=youtu.be&t=7m47s
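As a quick sanity check on the arithmetic here (using only the figures quoted in these posts, so treat the numbers as assumptions rather than measurements): 4.2GHz / 3.0GHz ≈ 1.4, so if a game scaled purely with clock speed you would expect the 7700K to be about 40% ahead, which is close to the Tomb Raider gap seen on nVidia cards, while the more typical ~20% average gap implies games only partially scale with clocks. The AMD-GPU results above then show even that gap closing once the extra Ryzen threads are actually used.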
Again with an AMD GPU (RX 480), 1800X vs 6900K: the 1800X is overall faster...
As it should be, with a slightly higher IPC than Broadwell-E.
nVidia's A-Sync, while good in limiting APIs like DX11, is itself limiting the more powerful, higher-thread-count CPUs in DX12 and Vulkan.
Developers are making things worse for everyone, tbh. The state of the gaming industry in terms of the quality of games has nosedived; games are getting worse: dumbed-down AI, X-ray vision in every game, cut scenes where you just press buttons rather than using skill...
I do agree with you halfway on what you said; my reply to loadsamoney was just that, tailored to loadsamoney, with a bit of sarcasm.
And I said pretty much what you said in another post: it's mostly due to hardware and OS migration, but some companies can abuse that. Nvidia, for example, are taking one major feature of DX12, async compute, and they just don't want to implement it, mostly because AMD gains an edge over them. They do just what you said: they follow up with multiple generations of GPUs even a couple of years after the feature was added to the API, when some games and consoles already use it. It is really rare to see a GPU manufacturer drag their feet on new API features; usually they race to be the first to implement them, way before they start being used.
In 2 years we will get to 70-80% of GPUs being DX12-capable, but only 20-30% async-capable, making it a tough decision for devs to implement instead of it being automatic.
Wait, are people still arguing about whether Pascal supports Async or not? If in doubt, please read this: https://www.reddit.com/r/nvidia/comments/50dqd5/demystifying_asynchronous_compute/
Should help clear things up (tl;dr: Pascal does async just fine).
Pretty sure AMD are only touting async because they don't support any other useful DX12 feature: https://en.wikipedia.org/wiki/Feature_levels_in_Direct3D
lol, don't you just love it when people title a post saying it's an 'understandable simplification of a subject' and then go on to write a 5,000-word academic essay? He even used an academic word to describe his thread: 'Demystifying'.
Anywho... all that nVidia Reddit post does is bamboozle people...
What that article goes on to say in about 2,000 words is that nVidia use Pre-Emption and AMD use Simultaneous Hardware Command Queues.
Now, this short video here actually is a layman's-terms take on asynchronous compute: in about 3 minutes it explains both parallel Command Queues and Pre-Emption.
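If toy code helps, the difference can be sketched like this (purely illustrative; graphics_chunk and compute_chunk are made-up stand-ins for slices of GPU work, nothing vendor-specific): pre-emption means one executor keeps switching between the two jobs, so compute time is stolen from graphics, while simultaneous command queues means both jobs genuinely progress at once:

#include <thread>

// Made-up stand-ins for slices of real GPU work.
void graphics_chunk() { /* ...a slice of rendering... */ }
void compute_chunk()  { /* ...a slice of compute... */ }

// "Pre-emption": one executor switches back and forth, so the compute
// work pauses the graphics work instead of overlapping with it.
void preemption_style() {
    for (int i = 0; i < 100; ++i) {
        graphics_chunk();
        compute_chunk();    // graphics is stalled while this runs
    }
}

// "Simultaneous command queues": both streams progress at once, the
// way parallel hardware queues can fill otherwise idle shader time.
void parallel_queues_style() {
    std::thread gfx([] { for (int i = 0; i < 100; ++i) graphics_chunk(); });
    std::thread cmp([] { for (int i = 0; i < 100; ++i) compute_chunk(); });
    gfx.join();
    cmp.join();
}

int main() {
    preemption_style();
    parallel_queues_style();
}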
Does Vega do Async? lol