Can data mining GPUs be used more widely for processing?

I'm aware that some data mining GPUs can be used for machine learning, but I'm curious whether they could be used more widely, e.g. rendering, movie production, etc.?
 
Something that is Nvidia and has CUDA cores should be usable for a few things, yes, at least regarding science. Problem is that they are not that powerful...
 

Reading elsewhere I got the impression that using GPUs in this way was essential. Interesting to hear they are not that powerful. Thanks.
 
I mean, they are not that powerful compared with the GPUs you would really like to use that are, at minimum, the top gaming GPUs of each generation and ideally Quadros or Teslas :).
 

ISWYM. May be worth a go if you want to dip your toe in the water of data science on the cheap, but you'll need £££s for a Quadro, say, if you get serious.
 
All GPUs these days can be used for general purpose calculations (GPGPU), but whether or not an application runs quickly on a GPU depends on how well it can be parallelized. GPUs have thousands of cores, so if you can split the workload across them efficiently then you can get very fast calculations out of them. Certain things can be done in parallel very fast and other things simply cannot, just due to the nature of what you're doing, so it's hit and miss what accelerates well on a GPU.
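To make the parallel case concrete, here's a rough sketch (not a definitive recipe) of an element-wise calculation running on the GPU. It assumes an NVIDIA card with a working CUDA driver and the CuPy package installed; the array size and the formula are just placeholders.

```python
# Rough sketch of an "embarrassingly parallel" workload on the GPU.
# Assumes an NVIDIA GPU with a working CUDA driver and CuPy installed.
import numpy as np
import cupy as cp

n = 10_000_000
a_cpu = np.random.rand(n).astype(np.float32)

# Copy the data into GPU memory, then run an element-wise calculation.
# Every element is independent, so the work spreads across the CUDA cores.
a_gpu = cp.asarray(a_cpu)
result_gpu = cp.sqrt(a_gpu) * 2.0 + 1.0

# Copy the result back to the CPU when you actually need it.
result_cpu = cp.asnumpy(result_gpu)
```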
 

You can definitely use them for learning purposes, yes, that should be no problem. In many apps you may run out of memory, though, it depends on the software. I think that a good and cheap starting point would be a 1080 Ti, which already has 11GB :).
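If you do go down that route, it's easy to check what card and how much memory you actually have before committing to anything big. A minimal sketch, assuming PyTorch is installed (any similar library will do):

```python
# Quick check of the GPU and how much memory it has, assuming PyTorch is installed.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}")
    print(f"Memory: {props.total_memory / 1024**3:.1f} GB")
else:
    print("No CUDA-capable GPU detected")
```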
 

So, depends very much on the task. Not unexpected I suppose.


I see Overclockers have a B grade 1080ti for £250, way cheaper than I expected (but still too rich for me at the mo). https://www.overclockers.co.uk/b-gr...4mb-gddr5x-pci-express-graphic-bg-01t-zt.html
 

Precisely. If you imagine a task where you have to iterate some values through a process, and the next step of the process needs the value of the previous one, then that task cannot be split up across many cores. It has to run on just one. An individual CUDA core is much slower than a single core in a CPU, so it would be much slower at that task.
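A made-up illustration of the difference (plain Python/NumPy, purely to show the shape of the problem; the formula and sizes are invented):

```python
import numpy as np

# Serial case: each step needs the previous step's result,
# so the chain cannot be split across cores.
def iterate_serial(x0, steps):
    x = x0
    for _ in range(steps):
        x = 0.5 * x + 1.0   # depends on the value before it
    return x

# Parallel case: the same formula applied to a million *independent*
# starting values; each element could go to its own GPU core.
starts = np.random.rand(1_000_000).astype(np.float32)
results = 0.5 * starts + 1.0   # no element depends on another
```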

One such example is video encoding. Most encoders today take a keyframe from the video (a single, full-frame snapshot), and each subsequent frame is then the previous frame plus whatever offset is required to produce it; it's the offsets that are stored to file, not entire frames, and so on for each subsequent frame until you reach another keyframe. You can't make frame 3 until you have the result of frame 2, because frame 3 is the offset you need to apply to frame 2. This is why GPU acceleration for video encoding isn't that helpful. But if you want to simulate something that has 10 million particles, and each particle needs a calculation that can be done on a separate CUDA core, then the GPU will rip right through that no problem.
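For the particle side of that comparison, here's a sketch of what the GPU-friendly case looks like, again assuming CuPy and an NVIDIA card; the particle count and the update rule are invented purely for illustration.

```python
import cupy as cp

n_particles = 10_000_000
pos = cp.random.rand(n_particles, 3).astype(cp.float32)          # x, y, z per particle
vel = (cp.random.rand(n_particles, 3) - 0.5).astype(cp.float32)  # made-up velocities

dt = 0.01
# Each particle's new position depends only on its own state,
# so all 10 million updates can run in parallel on the GPU.
pos += vel * dt
```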
 