Am I the only one who does not like VR?
I think it'll stay a gimmick until they add more than just head tracking.
Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.
Then you've misunderstood the graphs... What they show is that NVIDIA has lower latency dealing with smaller sets, and only reaches the same high level as AMD when dealing with the maximum size AMD can handle... If you increased the set size again, AMD's latency would double, whereas NVIDIA's would make another small step up.
What it shows is that if you optimise your code for AMD's set size then NVIDIA is equal on latency, but if you optimise for NVIDIA then AMD will be behind.
What do you need? Smell, pain sensors?
There are too many different players in all of this to be excited at the moment. Too much segmentation. The market needs to settle down, the players need to kill each other off until only one or two major ones remain, and games can then be created not for 6,000 different VR headsets but for just one or two. Then it will become more serious to consider.
According to the latest quantum physics theories you don't smell anything, you hear it.
Damn it, that cat pooped itself again in that box!
From those graphs I know what I think looks best: a continuous, smooth, constant line, or a line that starts low and gradually gets worse?
That's all I'm going to say on this matter, because for me the real talking comes when games start dropping...
It may have peed itself but you won't know until you look.
If you do, then it means its twin has pooped itself.
I think I need a drink after thinking about this.
Maybe you needed a drink before you thought of this.
Quantum physics is fun, love it as a light read.
Don't forget those graphs are from a 7970 and a GTX 980 Ti.
At first the 980 Ti beats the 7970, but once a lot of threading is involved the performance of the 7970 surpasses the 980 Ti.
What they also show is that there is a constant latency of about 20 ms on the NVIDIA GPU when task switching, whereas the 7970 is completely parallel.
This is about the fifth time this has been explained and yet it still keeps cropping up...
That latency on NVIDIA is not good for performance in heavily threaded tasks, but it's also not good for VR, as it delays graphics rendering, which can give you motion sickness. Which is "catastrophic".
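For a rough sense of scale (my own back-of-envelope numbers, not from those graphs): a 90 Hz headset gives you about 1000 / 90 ≈ 11 ms per frame, so a fixed ~20 ms stall would already cost more than an entire frame's budget.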
Dozens? Going to be hundreds!
No seriously, where are those games?
EDIT: From Reddit:
The problem is: are those going to be DX11 games with a few DX12 features?
Yeah
And even then, that is the 980 Ti @ 31 simultaneous command lists vs the Fury X @ 128.
Some guy on the Beyond3D forums made a small DX12 benchmark. He wrote some simple code to fill up the graphics and compute queues to judge whether a GPU architecture could execute them asynchronously.
He generates 128 command queues and 128 command lists to send to the cards, and then executes 1-128 simultaneous command queues sequentially. If running increasing amounts of command queues causes a linear increase in time, this indicates the card doesn't process multiple queues simultaneously (doesn't support Async Shaders).
He then released an updated version with 2 command queues and 128 command lists, many users submitted their results.
On the Maxwell architecture, up to 31 simultaneous command lists (the limit of Maxwell in graphics/compute workload) run at nearly the exact same speed - indicating Async Shader capability. Every 32 lists added would cause increasing render times, indicating the scheduler was being overloaded.
On the GCN architecture, 128 simultaneous command lists ran roughly the same, with very minor increased speeds past 64 command lists (GCN's limit) - indicating Async Shader capability. This shows the strength of AMD's ACE architecture and their scheduler.
Interestingly enough, the GTX 960 ended up having higher compute capability in this homebrew benchmark than both the R9 390x and the Fury X - but only when it was under 31 simultaneous command lists. The 980 TI had double the compute performance of either, yet only below 31 command lists. It performed roughly equal to the Fury X at up to 128 command lists.
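To make the method concrete, here's a minimal C++ sketch of the timing loop that post describes (submit 1-128 batches and see whether the times stay flat or climb linearly). It is not the Beyond3D code: the D3D12 setup is omitted entirely and submitAndWait() is a hypothetical stub standing in for recording the command lists, calling ExecuteCommandLists and waiting on a fence.

```cpp
#include <chrono>
#include <cstdio>

// Hypothetical placeholder: in the real benchmark this would record `batches`
// compute command lists, execute them on the queue(s) via
// ID3D12CommandQueue::ExecuteCommandLists, then block on a fence until done.
static void submitAndWait(int batches)
{
    (void)batches;
}

int main()
{
    const int kMaxBatches = 128;

    for (int n = 1; n <= kMaxBatches; ++n)
    {
        auto start = std::chrono::high_resolution_clock::now();
        submitAndWait(n);
        auto stop = std::chrono::high_resolution_clock::now();

        double ms = std::chrono::duration<double, std::milli>(stop - start).count();

        // Roughly flat times as n grows => the work really ran concurrently (async).
        // Times climbing linearly with n => the driver is serialising the queues.
        std::printf("%3d simultaneous batches: %.3f ms\n", n, ms);
    }
    return 0;
}
```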
It will be interesting to see NVIDIA owners unhappy, since NVIDIA kinda lied about async shaders, and their software-based solution will be horrible and much slower than AMD's hardware solution, especially in latency.
You see what I did there? Next time, when you are trying to be a crystal ball, please use facts and not speculation.
Maxwell has 32 command lists, totalling 32 commands in parallel.
GCN 1.0 has 64 command lists in parallel across 2 ACE units, totalling 128 commands in parallel.
GCN 1.1 and 1.2 have 64 command lists in parallel across 8 ACE units, totalling 512 commands in parallel.
That's the difference.
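Just to spell out the arithmetic behind those figures (taking the per-architecture numbers above at face value, I haven't verified them, and the single engine for Maxwell is my reading of the post), a tiny C++ snippet that computes the totals:

```cpp
#include <cstdio>

int main()
{
    struct Arch { const char* name; int lists; int units; };
    const Arch gpus[] = {
        { "Maxwell",     32, 1 },  // 32 x 1 = 32
        { "GCN 1.0",     64, 2 },  // 64 x 2 = 128
        { "GCN 1.1/1.2", 64, 8 },  // 64 x 8 = 512
    };

    for (const Arch& g : gpus)
        std::printf("%-12s %2d lists x %d units = %3d commands in parallel\n",
                    g.name, g.lists, g.units, g.lists * g.units);
    return 0;
}
```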
If really pushed in render/compute, a 7970 is at least as fast as a GTX 980 Ti, where the latter will make up for its latency through brute force.
If you push a 290/390(X) or Fury(X) even harder, up to four times harder, it will stand its ground while all the rest grind to a halt.
If any DX12 game should ever use 300 or more command lists, a 290 will put a GTX 980 Ti to shame.
Well, Thief was the first Mantle title that used async compute to unleash the full power of the 290X with 352 commands (44 CU x 8 ACE), but after NVIDIA's DirectX 11 wonder driver the GTX 780 Ti outperformed the 290X easily. A 980 Ti has over twice the performance and puts the 290X to shame in Thief's Mantle async compute.
I don't think so
Game devs did the absolute minimum with Mantle and all they were concerned about was making the maximum profit.
If game devs are allowed to get away with it they will do the same with DX12. Welcome to the world of broken games.
It is not important to NVIDIA buyers, since NVIDIA trained them well to upgrade every year. NVIDIA has broken async shaders in Maxwell, and VR hardware is kinda weak? No problem, you just need to upgrade next year to next-gen cards which will have everything fixed for you.
Yeah, consumer friendly.
But yeah, regarding the DX12 adoption rate, what you are saying does make sense. All the previous DX versions were exclusive to new cards, hence the longer adoption rate.