3 Billion - Kaap will be getting 4. New Volta pricing confirmed!
They're just taking organs directly now to bypass the black market.
Tesla V100 is here! The sheer size of it! 815mm²!
First Volta Q3, then Q4...
Wonder how many will be cancelling their pre-ordered Vegas for this, when it arrives before they get their cards!
Biggest update since Fermi, really. The new tensor operation looks to do FP16 multiplies with FP32 accumulate at insane rates, giving the 120 TFLOPS for appropriate mixed-precision matrix multiplication, perfect for deep learning. Hence why it wasn't possible before 12FF and was really envisioned for 10nm. Architecturally it is very different to the P100.
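For anyone curious what that operation looks like from software, here's a minimal sketch using the WMMA API that CUDA 9 exposes for Volta's tensor cores: one warp computing a 16x16x16 tile as D = A*B + C, with FP16 inputs and an FP32 accumulator. The kernel and variable names are mine and the all-ones test data is just for illustration; compile with nvcc -arch=sm_70.

```cpp
#include <cstdio>
#include <vector>
#include <cuda_fp16.h>
#include <mma.h>
using namespace nvcuda;

// One warp computes a single 16x16x16 tile: D = A*B + C.
// A and B are FP16, the accumulator is FP32 -- the mixed-precision
// tensor op described above.
__global__ void tensor_tile_mma(const half *A, const half *B, float *C) {
    // Fragments live in registers and are owned collectively by the warp.
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

    wmma::fill_fragment(c_frag, 0.0f);               // FP32 accumulator = 0
    wmma::load_matrix_sync(a_frag, A, 16);           // FP16 16x16 tile of A
    wmma::load_matrix_sync(b_frag, B, 16);           // FP16 16x16 tile of B
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);  // FP16 mul, FP32 add
    wmma::store_matrix_sync(C, c_frag, 16, wmma::mem_row_major);
}

int main() {
    const int N = 16 * 16;
    std::vector<half> hA(N), hB(N);
    for (int i = 0; i < N; ++i) {                    // all-ones test matrices
        hA[i] = __float2half(1.0f);
        hB[i] = __float2half(1.0f);
    }
    half *dA, *dB; float *dC;
    cudaMalloc(&dA, N * sizeof(half));
    cudaMalloc(&dB, N * sizeof(half));
    cudaMalloc(&dC, N * sizeof(float));
    cudaMemcpy(dA, hA.data(), N * sizeof(half), cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB.data(), N * sizeof(half), cudaMemcpyHostToDevice);

    tensor_tile_mma<<<1, 32>>>(dA, dB, dC);          // one warp = 32 threads

    std::vector<float> hC(N);
    cudaMemcpy(hC.data(), dC, N * sizeof(float), cudaMemcpyDeviceToHost);
    printf("C[0] = %f (expect 16: dot product of sixteen ones)\n", hC[0]);

    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}
```

The headline 120 TFLOPS also works out from the published specs: 640 tensor cores, each doing a 4x4x4 matrix op (64 FMAs, i.e. 128 FLOPs) per clock, at ~1455 MHz boost gives 640 x 128 x 1.455 GHz ≈ 119 TFLOPS.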
Mostly because of that new Tensor TFLOPS tech they've added, purely for deep learning and AI.
Looking at FP32 at 15 TFLOPS, it's not the jump I was expecting compared to the 12.5 TFLOPS currently claimed for Vega 10 on the MI25 Instinct.
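Those headline figures roughly check out from the published shader counts and boost clocks, back-of-the-envelope at 2 FLOPs per FMA per core per clock:

GV100: 5120 FP32 cores x 2 x ~1.455 GHz ≈ 14.9 TFLOPS (the quoted ~15)
Vega 10: 4096 cores x 2 x ~1.5 GHz ≈ 12.3 TFLOPS (the quoted ~12.5)

So in raw FP32 it's only about a 20% gap.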
I can't imagine how expensive and difficult it must be to produce that behemoth die.
Also, it's due Q4, with early 2018 for the non-rack DGX-1.
GV100 isn't designed to push the boundaries of FP32 performance, though; it has to dedicate die area to FP64, which Vega lacks, and there are lots of other architectural changes such as the mixed-precision tensor ops. Apples-to-oranges comparison, really. As Doom112 says, a consumer gaming version of this architecture might be at 20 TFLOPS, for example.
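To put rough numbers on the die-area point (from the publicly quoted specs, so treat as approximate): GV100 runs FP64 at 1/2 its FP32 rate, around 7.5 TFLOPS, while Vega 10 is a 1/16-rate design at under 1 TFLOPS. A sizeable slice of GV100's 21 billion transistors is double-precision hardware a gaming derivative would simply drop.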
My Vega purchase has been killed... POOR VEGA