The number of times he repeated/started to repeat himself caught my attention - something clearly not right but might be personal not business.
I thought the whole thing was shoddy and amateurish.
Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.
Someone on AT forums made this observation:
GCN and Pascal might be more alike than we think!!
FP32 isn't that bad - sure, the hardware units haven't gone up much, but the clock speeds are a fair jump - the actual GFLOP increase isn't to be sniffed at.
The CUDA core count per SM is an interesting change for those who've been keeping up.
Shoots down those people claiming vehemently that Pascal was just a shrunk Maxwell and would have "no" Async heh. Those changes definitely aren't by mistake.
Technically we should be comparing it to the Kepler-based GK110 and GK210 cards, not GM200, since Maxwell had most of its DP compute stripped out.
I think it indicates that Nvidia are quite worried about the Intel compute cards, which are slowly creeping into the supercomputer market.
Keep in mind that the GTX 580 was around 1.6 TF, the original Titan was 4.5 TF, the Titan X was about 6.1 TF, and GP100 appears to be 10.5 TF, with an increase in power to 300 W. So 40 nm to 28 nm was nearly a threefold increase in FP32 performance, while Titan X to GP100 is a dramatically smaller increase. If it had the same increase as the last process node change it would have about 18 TF of performance; instead it has 10.5 TF. I think it's fair to say that's pretty tame.
The clock speed jump only makes that worse: it's 300 W at very high clocks, so at 250 W with lower clocks it will be even less, and that listed performance is at full boost clocks, not base clocks.
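Rough arithmetic behind the comparison above, as a quick sketch. The TFLOP figures are the approximate ones quoted in the post (full boost clocks), not official specs:

```python
# Approximate FP32 throughput in TFLOPS, as quoted in the post above.
gtx_580 = 1.6    # 40 nm (Fermi)
titan   = 4.5    # 28 nm (Kepler)
titan_x = 6.1    # 28 nm (Maxwell)
gp100   = 10.5   # 16 nm (Pascal)

# Scaling across the 40 nm -> 28 nm node change.
node_40_to_28 = titan / gtx_580          # ~2.8x

# Scaling across the 28 nm -> 16 nm node change.
node_28_to_16 = gp100 / titan_x          # ~1.7x

# What GP100 would hit if the previous node's scaling had repeated.
projected = titan_x * node_40_to_28      # ~17 TF, close to the ~18 TF claimed

print(node_40_to_28, node_28_to_16, projected)
```

So by the post's own numbers the per-node FP32 scaling roughly halved, which is the basis for calling GP100's jump "tame".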
Because what I'm hearing is: Nvidia is worried about the full Polaris stack coming out for back-to-school and winning a lot of contracts over them, which I can easily see happening.
So did the Fiji GPUs.
For those who missed it live (like myself), you can watch it here. Jen-Hsun seems to like deep learning. It's like NV's equivalent of asynchronous compute.
I can't be bothered watching all that babbling, where he keeps repeating the same thing over and over again.
Is there a news website that has taken all the important things from the video and condensed them down yet?
Sure, I can do that:
Driver-less Cars, powerful Workstation Cards, Cars that drive themselves, Autonomous Cars, Compute Cards, Multi-Lightprobe VR, learning, the Cars of Jeremy Clarkson's nightmares, GPUs that go to School, and Cars that don't need You.
What the hell has that got to do with the topic being discussed? It is not even remotely a green vs red thing going on here so don't turn it into one.