Has anyone used the Tensor cores in 2000 series Nvidia cards for AI programming?

Soldato
Joined
1 Nov 2007
Posts
5,618
Location
England
I was just wondering if anyone has used the new Tensor cores in the 2080Ti (or other cards in the 2000 series) to accelerate any machine learning tasks?

If so, what language did you do it in, and was there a noticeable speedup over running the same task without the Tensor cores?

I probably won't get one of the 2000 series cards, but when Nvidia launch the next series of cards, assuming they also have Tensor cores, I'll probably upgrade then.
 
Soldato
Joined
23 Feb 2009
Posts
4,978
Location
South Wirral
It's going to need the various libraries updated to take advantage of them, surely? I'm just learning in this area at the moment, mostly via Python, PyTorch and fast.ai. The cards are so new that I doubt anyone has really worked out how to take advantage of them yet.

I bought a 1070Ti a couple of weeks ago as it gave me the best price per CUDA core. That did precede the 2060 launch, but I'm not feeling any buyer's remorse as the 1070Ti has 8 GB of memory and the 2060s have 6 GB.
 
Soldato
OP
Joined
1 Nov 2007
Posts
5,618
Location
England
It's going to need the various libraries updated to take advantage of them, surely? I'm just learning in this area at the moment, mostly via Python, PyTorch and fast.ai. The cards are so new that I doubt anyone has really worked out how to take advantage of them yet.

I bought a 1070Ti a couple of weeks ago as it gave me the best price per CUDA core. That did precede the 2060 launch, but I'm not feeling any buyer's remorse as the 1070Ti has 8 GB of memory and the 2060s have 6 GB.

I would have thought that most libraries already support Tensor cores, since they have been included in Nvidia's Tesla line of GPGPU accelerators for some time.
 
Soldato
Joined
23 Feb 2009
Posts
4,978
Location
South Wirral
I would have thought that most libraries already support Tensor cores, since they have been included in Nvidia's Tesla line of GPGPU accelerators for some time.

You could be right; it wasn't hugely obvious when I looked at the PyTorch docs, and that is the main one I use. I was also hesitant to go with the 20xx cards, as lots of people on the forums are having to RMA them over reliability issues. "Old" but working is good enough for my needs.
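
From what I could piece together, "using the Tensor cores" in PyTorch mostly just means running in half precision: cast the model and inputs to FP16 and the underlying cuBLAS/cuDNN kernels can then use the Tensor cores on Volta/Turing. Very rough sketch only (layer sizes are arbitrary, and I haven't benchmarked this on a 20xx card myself):

```python
import torch
import torch.nn as nn

device = torch.device("cuda")

# Throwaway model just to show the idea: .half() casts the weights to FP16,
# and FP16 matmuls/convolutions are what the Tensor cores accelerate.
model = nn.Sequential(
    nn.Linear(1024, 1024),
    nn.ReLU(),
    nn.Linear(1024, 10),
).to(device).half()

x = torch.randn(64, 1024, device=device).half()

with torch.no_grad():
    out = model(x)

print(out.dtype)  # torch.float16
```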
 
Soldato
Joined
20 Dec 2004
Posts
15,845
The Tensor cores were designed for use with ML frameworks like TensorFlow (the clue is in the name).

Yes they work, yes they are very fast at those workloads.
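
If you want a quick sanity check of the speedup, timing a big matrix multiply in FP32 versus FP16 on the GPU will show it; on a card with Tensor cores the FP16 case should come out a lot faster. Rough PyTorch sketch, matrix size and iteration count picked arbitrarily:

```python
import torch

def time_matmul(dtype, n=4096, iters=20):
    # Two square matrices on the GPU in the requested precision
    a = torch.randn(n, n, device="cuda", dtype=dtype)
    b = torch.randn(n, n, device="cuda", dtype=dtype)
    for _ in range(3):  # warm-up so launch/autotune cost isn't counted
        a @ b
    torch.cuda.synchronize()
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(iters):
        a @ b
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / iters  # ms per matmul

print(f"FP32: {time_matmul(torch.float32):.2f} ms")
print(f"FP16: {time_matmul(torch.float16):.2f} ms")  # Tensor core eligible
```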
 
Soldato
OP
Joined
1 Nov 2007
Posts
5,618
Location
England
You could be right; it wasn't hugely obvious when I looked at the PyTorch docs, and that is the main one I use. I was also hesitant to go with the 20xx cards, as lots of people on the forums are having to RMA them over reliability issues. "Old" but working is good enough for my needs.

Yeah, I agree. I'm sticking with my 1080Ti for the time being, but when Nvidia release a new series of cards I'll probably upgrade so I get the speed boost of two generations in one.

The Tensor cores were designed for use with ML frameworks like TensorFlow (the clue is in the name).

Yes they work, yes they are very fast at those workloads.

Cool, thank you. I'll do some reading into it and see if it's worth upgrading my GPU to take advantage of it, although as I said above I'm tempted to just wait until next year when Nvidia release a 2180Ti.
 
Caporegime
Joined
18 Oct 2002
Posts
32,618
As long as you have TensorFlow or whatever set up to use CUDA/the GPU, then it is automatic; you don't have to do anything or write any special code.

I haven't tested Turing, but I have done stuff with Volta on AWS. Very fast, as expected.
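
For what it's worth, the TensorFlow side looks something like the sketch below (this is the newer TF 2.x Keras API from memory, so treat it as a sketch rather than gospel): if the GPU shows up in the device list, ops run on it automatically, and enabling the mixed-precision policy is what pushes the FP16 maths onto the Tensor cores.

```python
import tensorflow as tf

# If this prints an empty list, the CUDA/cuDNN setup is the problem,
# not the model code.
print(tf.config.list_physical_devices("GPU"))

# Mixed precision makes Keras layers compute in FP16 where it is safe,
# which is what lets cuBLAS/cuDNN use the Tensor cores on Volta/Turing.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(1024, activation="relu", input_shape=(1024,)),
    tf.keras.layers.Dense(10),
])
model.compile(optimizer="adam", loss="mse")
print(model.layers[0].compute_dtype)  # float16
```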
 
Soldato
OP
Joined
1 Nov 2007
Posts
5,618
Location
England
As long as you have TensorFlow or whatever set up to use CUDA/the GPU, then it is automatic; you don't have to do anything or write any special code.

I haven't tested Turing, but I have done stuff with Volta on AWS. Very fast, as expected.

I might have to play around with AWS then; that seems like an easy way to get started. Stupid question, but do you know if you can use the AWS Cloud9 IDE for machine learning projects you are going to run on GPU EC2 instances?
 
Caporegime
Joined
18 Oct 2002
Posts
32,618
I might have to play around with AWS then; that seems like an easy way to get started. Stupid question, but do you know if you can use the AWS Cloud9 IDE for machine learning projects you are going to run on GPU EC2 instances?

No idea, sorry. I don't go near IDEs.
 
Soldato
Joined
23 Feb 2009
Posts
4,978
Location
South Wirral
One thing I saw with AWS is that you don't seem to get GPU instances on the basic tier; you need to ask to be given access to them. I never bothered in the end, as my card arrived before I found out how to request access. I gather it takes a day or two for them to authorise it.
 
Soldato
OP
Joined
1 Nov 2007
Posts
5,618
Location
England
One thing I saw with AWS is that you don't seem to get GPU instances on the basic tier; you need to ask to be given access to them. I never bothered in the end, as my card arrived before I found out how to request access. I gather it takes a day or two for them to authorise it.

Thanks for the info. I'll have to look into it in more detail later this week. I'm quite excited to play around with this stuff; it's something I've been interested in for quite some time.

I'll still probably invest in the next-gen Nvidia card so I can do it all on my local machine, though.
 