Lightroom 6 Leak Shows GPU Use for Faster Editing

I don't use bad lenses.
f/1.4 vignetting makes exposure difficult to judge.
It's hard to straighten and crop when verticals and horizontals aren't fixed, etc. etc.
 

Sure, if you are going to correct vignetting anyway, then you will want the previews corrected to judge exposure. My point was that I don't feel the need to correct vignetting on a portrait photo most of the time, because if anything the vignetting is a nice effect. So I can judge exposure without correcting vignetting, because I won't be correcting it on export.

Shooting ultra-fast primes might make a big difference, but I don't see enough vignetting in my f/2.8 zooms to cause concern for portraits. For landscapes and architecture, yes, absolutely.


It's similar with distortion: most photos of people don't have enough architectural detail to worry about slight distortion. The obvious exceptions can just be dealt with on an individual basis rather than applying corrections as a preset on import. At least that is true in my experience.
 
^^^
I correct vignetting on every single picture regardless of f-stop, then I add my own vignetting. The idea is to produce a consistent vignette across a whole set of pictures.
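Something like this, as a rough NumPy sketch rather than my actual recipe (the falloff model, function names and strengths below are all made up): undo the lens falloff first, then bake the same stylistic vignette into every frame so the whole set matches.

```python
import numpy as np

def radial_falloff(h, w, strength):
    """Falloff mask: 1.0 at the centre, darker towards the corners (toy model)."""
    y, x = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    r = np.hypot((y - cy) / cy, (x - cx) / cx) / np.sqrt(2)  # 0 at centre, 1 in corners
    return 1.0 - strength * r ** 2

def consistent_vignette(img, lens_strength=0.3, style_strength=0.4):
    """img: float RGB in [0, 1]. Correct the lens vignette, then add the same stylistic one."""
    h, w = img.shape[:2]
    corrected = img / radial_falloff(h, w, lens_strength)[..., None]      # undo lens falloff
    styled = corrected * radial_falloff(h, w, style_strength)[..., None]  # re-add a chosen look
    return np.clip(styled, 0.0, 1.0)
```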
 
Fair enough. The only time I will correct vignetting and then add any back in is if I have to crop to improve composition.
 
I don't correct it; I shoot with it in mind, and that seems to work in my favour based on how I shoot and how I process. The end result is all that matters; how you get there is tit for tat really :p
 
Is this going to be like other GPU acceleration? Namely the encoding field, where the results obtained through a CPU encode are much better than results obtained through a GPU encode?
 
^^^
GPGPU encoding was snake oil in the early days of CUDA (9800 GTX, GTX 280, GTX 480, etc.), and a lot of people, including me, fell for it. I remember Nvidia quoting outlandish performance improvements over CPU encodes, but the end results of such renders were far inferior to CPU encodes. The GPU was cutting corners (doing less work) to achieve such fast times. I'm not sure whether it was early poor coding or what, but the end result sucked.
CPU performance clock-for-clock has basically stagnated since the i7 920. GPU performance has made bigger strides in comparison.

Having used Final Cut Pro X, I've seen that OpenCL GPU acceleration can make a huge difference when coded and optimised efficiently. Final Cut shows how it could/should be done.
 
A lot of tasks are serial, which makes parallelism impossible and the benefits slim in most cases.
Example: GPU 1 needs to finish task A before GPU 2 can start work on task B, by which time GPU 1 might as well just carry on with the next task itself. GPU 2 is simply redundant.
Where multi-GPU can be used is for tasks that can be worked on in parallel.
Example: GPU 1 works on the first 50% of a task, while GPU 2 works on the second 50%.
In the case of Lightroom, that task would be rendering; however, in LR6 you would already have multiple CPU cores as well as a GPU at work, so I doubt Adobe would deem it worth delaying LR6 to add a feature few users could even use, let alone benefit from. If LR6 already feels instantaneous, then there is no point anyway.
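To illustrate the point, here's a toy Python sketch (obviously not Adobe's code; the step() function just sleeps to fake work and the workers stand in for the two GPUs): a dependent chain gains nothing from a second worker, while two independent halves split cleanly.

```python
import time
from concurrent.futures import ProcessPoolExecutor

def step(chunk, seconds=0.5):
    """Pretend to do `seconds` worth of work on `chunk`."""
    time.sleep(seconds)
    return chunk

if __name__ == "__main__":
    # Serial case: task B needs the result of task A, so a second worker would just idle.
    start = time.perf_counter()
    a = step("A")
    b = step(a + ":B")  # cannot start until `a` exists
    print("dependent chain:", round(time.perf_counter() - start, 2), "s")  # ~1.0 s

    # Parallel case: two independent halves split cleanly across two workers.
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=2) as pool:
        halves = list(pool.map(step, ["first half", "second half"]))
    print("independent halves:", round(time.perf_counter() - start, 2), "s")  # ~0.5 s plus pool overhead
```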
 


GPU compute only works for distinctly parallel tasks anyway, so supporting multi-GPU is trivial; in fact, depending on the API, it is totally transparent to the developer.
The reason the GPU can be used is that image processing is distinctly parallel. E.g. if you want to change the exposure or white balance, the image can be split into small tiles and each tile processed in parallel by the different cores of the CPU, different streams of the GPU, or indeed different GPUs.
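Roughly what that looks like in code, as a hedged Python sketch rather than anything Lightroom actually does (adjust_exposure and process_image are made-up names): cut the frame into strips and apply an exposure change to each strip independently, so the strips can be farmed out to however many cores, GPU streams, or GPUs you have.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def adjust_exposure(tile, stops):
    """Each tile is independent: +1 stop doubles the linear values."""
    return np.clip(tile * (2.0 ** stops), 0.0, 1.0)

def process_image(img, stops=1.0, n_tiles=8, workers=4):
    """Split a linear RGB image into horizontal strips and process them in parallel."""
    strips = np.array_split(img, n_tiles, axis=0)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        done = pool.map(lambda s: adjust_exposure(s, stops), strips)
    return np.vstack(list(done))

# Example: a fake 6 "megapixel" frame of random linear values.
frame = np.random.rand(2000, 3000, 3).astype(np.float32)
brighter = process_image(frame, stops=0.5)
```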
 
Take a look at Capture One's OpenCL performance:
a 5K iMac with a single mobile GPU, thrashing a top-of-the-range dual-GPU Mac Pro.

http://macperformanceguide.com/iMac5K_2014-CaptureOnePro-raw-to-JPEG.html
I just hope this is indicative of the performance I'll see with Lightroom.




Edit.
With synthetic OpenCL benchmarks, even a pair of D300s beats the M295X, so a pair of D700s should have no problem doing it, but that's not what actually happens in an app like Capture One.

On a side note, I'm pleasantly surprised at the M295X's OpenCL performance; it performs like a desktop GTX 980. Guess Apple made the right GPU choice.
 
^^^
I'm very surprised. I thought the GPU was underpowered, as I kept reading complaints from gamers about Apple not using a 980M. I didn't realise it was strong at OpenCL.
I'm glad I went with the upgraded GPU now, as I almost didn't bother due to the lack of app support, and I don't game on it.
Hopefully it will perform well in LR, as 5K previews are definitely slower to render on screen.
 
My GTX 970 gives me 2704 in LuxMark 2 with the default Sala scene; not sure how good/bad/average that is for compute performance, as I've never really used it that way before.
 