I was just wondering if anyone has used the new Tensor cores in the 2080 Ti (or other cards in the RTX 2000 series) to accelerate any machine learning tasks?
If so, what language did you do it in, and was there a noticeable speedup over running the same task without the Tensor cores?
I probably won't get one of the 2000 series cards, but when Nvidia launches the next series of cards, assuming they also have Tensor cores, I'll probably upgrade then.
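For reference, my understanding is that the Tensor cores are engaged through FP16/mixed-precision math (via cuDNN/cuBLAS under the hood), so something like the minimal PyTorch sketch below is the kind of workload I have in mind. PyTorch, the layer sizes, and the training loop here are just placeholders, not a claim about how anyone actually did it:

```python
import torch
import torch.nn as nn

# Toy model and batch on the GPU. Dimensions that are multiples of 8
# help the FP16 matrix multiplies map onto the Tensor cores.
model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 1024)).cuda()
data = torch.randn(256, 1024, device="cuda")
target = torch.randn(256, 1024, device="cuda")

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()  # scales the loss to avoid FP16 underflow

for _ in range(10):
    optimizer.zero_grad()
    # autocast runs eligible ops in FP16, which is where the Tensor
    # cores should kick in on Volta/Turing and later.
    with torch.cuda.amp.autocast():
        loss = nn.functional.mse_loss(model(data), target)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```

What I'd be curious about is timing a loop like this with and without autocast to see how much of a difference the Tensor cores make in practice.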