Discussion in 'Graphics Cards' started by Kaapstad, May 11, 2017.
I thought I would start a news thread for big Volta, as it could get quite interesting if I regularly keep the OPs updated.
If people would like to discuss big Volta and post items that could be used in the OP that would be very useful.
Well, so much for the GTX 2080 rumours...
Hopefully consumer cards will be available for Christmas. Along with that high refresh 4K monitor.
Tensor cores for mat-mat multiplication? 120TF/s throughput?! If it's practical to extract even a small fraction of this potential, then there are going to be a lot of people in the scientific computing field buying a V100.
I'm working on dense linear algebra right now, so I'm going to need to get my hands on one ASAP. Just need to figure out where best to go begging for the funds.
I'm sure these won't make it to the GeForce range (minimal utility for gaming), but for a lot of HPC applications this could be a game-changer.
Wake me up when there's some GeForce news.
Good luck with raising $3Bn. It's gonna be a few months before they are generally available to industry (and at normal industry prices) unless you have deep pockets.
Probably. But it depends.
We had access to the new Xeon Phi (Knights Landing) a good 9 months before it was generally available. Big boss man here has pretty close ties with Intel, as some of the work we do is related to optimising the Intel Math Kernel Library. Not sure if he can get anything from Nvidia, but I've seen stranger things happen.
More realistically, in academia, you mentally add on about a 9 to 18 month delay between trying to get some funding, and actually having it to spend. So if I start searching now, I *might* be able to buy one before it's superseded by V110
Would you care to explain this for numpties like me?
For particular types of matrix multiplication and accumulation that can leverage mixed-precision floating point, the GV100 is stupidly fast: around 10x faster than Pascal. The main application will be in deep learning, where the GV100 essentially accelerates neural network updates and propagation. This is why Nvidia have added dedicated support for Google's TensorFlow library.
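Roughly speaking, the operation a tensor core performs is a fused multiply-accumulate, D = A×B + C, on small matrix tiles, with FP16 inputs and the accumulation carried out in FP32. A minimal NumPy sketch of that mixed-precision idea (the 4x4 tile size and random data here are just illustrative, not the hardware interface):

```python
import numpy as np

# Inputs stored in half precision (FP16), as tensor cores expect.
A = np.random.rand(4, 4).astype(np.float16)
B = np.random.rand(4, 4).astype(np.float16)
C = np.zeros((4, 4), dtype=np.float32)

# The multiply-accumulate itself runs in single precision (FP32),
# which limits the rounding error that a pure-FP16 product would
# accumulate over long dot products.
D = A.astype(np.float32) @ B.astype(np.float32) + C

print(D.dtype)  # float32 result from float16 inputs
```

The win is that FP16 operands halve the memory traffic while the FP32 accumulator keeps the result usable, which is why this maps so well onto neural network training.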
There is a reason companies will be paying $18K per single chip for this beast.
That's why I can't understand why all the gamers were getting excited about it. We're never going to see a GTX gaming card with anything close to this, are we?
No, but the tech advancement will likely bring other benefits that can be applied to GeForce. Likewise, once the compute-only hardware is stripped out, the potential specs leave room for a lot of performance.