Lightroom 6 Leak Shows GPU Use for Faster Editing

I must admit I don't use Photoshop that much; maybe 15 images in 300 end up there. Even if you like the SOOC JPEG look, though, there are still plenty of things, like retouching, that Photoshop does far better.
 
I have noticed that the longer the Develop module is used, the more sluggish LR gets until you close and re-open it. The bizarre thing is that it isn't even over-using resources, and I'm not the only one to have noticed this.

Yep, this happens to me too.
 
I have lens corrections applied on import; I like to see what the image looks like before and while I'm editing. However, yes, I agree, it kills performance at times.
 
I don't use bad lenses.
f/1.4 vignetting makes exposure difficult to judge.
It's hard to straighten and crop when verticals and horizontals aren't fixed, etc.
 
^^^
I correct vignetting on every single picture regardless of f-stop, then I add my own vignetting. The idea is to produce a consistent vignette across a whole set of pictures.
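If anyone wants to see roughly what that amounts to outside Lightroom, here's a minimal NumPy sketch; the quadratic falloff model and the strength values are invented for illustration and aren't Lightroom's actual lens-profile maths:

```python
# Minimal sketch: undo an assumed lens falloff, then apply one fixed
# artistic vignette so every frame in a set gets the same look.
# The falloff model and strengths are made up for illustration.
import numpy as np

def radial_gain(h, w, strength):
    """Gain map that darkens toward the corners by `strength`."""
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.hypot(yy - cy, xx - cx) / np.hypot(cy, cx)  # 0 at centre, 1 in corners
    return 1.0 - strength * r ** 2

def normalise_vignette(img, lens_strength=0.3, look_strength=0.15):
    h, w = img.shape[:2]
    corrected = img / radial_gain(h, w, lens_strength)[..., None]     # remove lens vignette
    styled = corrected * radial_gain(h, w, look_strength)[..., None]  # add a consistent one
    return np.clip(styled, 0.0, 1.0)

# img: float RGB array in [0, 1]
img = np.random.rand(600, 900, 3)
out = normalise_vignette(img)
```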
 
^^^
GPGPU encoding was snake oil in the early days of CUDA (9800 GTX, GTX 280, GTX 480, etc.) and a lot of people, including me, fell for it. I remember Nvidia quoting outlandish performance improvements over CPU encodes, but the end result of those renders was far inferior to CPU encodes. The GPU was cutting corners (doing less work) to achieve such fast times. I'm not sure whether it was early poor coding or what, but the end result sucked.
Since then, CPU performance clock for clock has basically stagnated since the i7 920, while GPU performance has made much bigger strides in comparison.

Having used Final Cut Pro X, it seems OpenCL GPU acceleration can make a huge difference when coded and optimised efficiently. Final Cut shows how it could and should be done.
 
A lot of tasks are serial, which makes parallelism impossible and the benefits slim in most cases.
Example: GPU 1 needs to finish task A before GPU 2 can start work on task B, by which time GPU 1 might as well just carry on with the next task, and GPU 2 is simply redundant.
Where multiple GPUs can be used is for tasks that can be worked on in parallel.
Example: GPU 1 works on the first 50% of a task, while GPU 2 works on the other 50% (see the sketch below).
In the case of Lightroom, that task would be rendering. However, in LR6 you would already have multiple CPU cores as well as a GPU at work, so I doubt Adobe would deem it worth delaying LR6 to add a feature few users could even use, let alone benefit from. And if LR6's performance is already instantaneous, there is no point anyway.
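To make that serial-vs-parallel distinction concrete, here is a toy Python sketch using two worker processes to stand in for two GPUs; the task names and the 50/50 split are invented for illustration, not anything Adobe actually does:

```python
# Toy illustration of the serial-vs-parallel point above. Two worker
# processes stand in for two GPUs; the tasks themselves are made up.
from concurrent.futures import ProcessPoolExecutor

def render_tile(rows):
    # Stand-in for rendering one slice of an image. Each slice is
    # independent, so two workers can genuinely run at the same time.
    return sum(r * r for r in rows)

def serial_chain(value):
    # Stand-in for a dependent chain: step B needs step A's output,
    # so a second worker would sit idle until A finishes.
    a = value + 1
    b = a * 2
    return b

if __name__ == "__main__":
    rows = list(range(200_000))
    halves = [rows[: len(rows) // 2], rows[len(rows) // 2 :]]

    with ProcessPoolExecutor(max_workers=2) as pool:
        # Parallel-friendly case: each worker renders its own half.
        tiles = list(pool.map(render_tile, halves))

    print(sum(tiles), serial_chain(10))
```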
 
Take a look at Capture One's OpenCL performance:
a 5K iMac with a single mobile GPU thrashing a top-of-the-range dual-GPU Mac Pro.

http://macperformanceguide.com/iMac5K_2014-CaptureOnePro-raw-to-JPEG.html
I just hope this is indicative of the performance I'll see with Lightroom.

Edit.
With synthetic OpenCL benchmarks, even a pair of D300s beats the M295X, so a pair of D700s should have no problem doing it, but that's not what actually happens in apps like Capture One.

On a side note, I'm pleasantly surprised at the M295X's OpenCL performance; it performs like a desktop GTX 980. I guess Apple made the right GPU choice.
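For anyone who wants to poke at the raw numbers themselves, pyopencl (assuming it's installed and an OpenCL driver is present) can list what devices and specs a machine exposes; as above, these raw figures don't predict what an app like Capture One will actually do with them:

```python
# Quick dump of the OpenCL devices a machine exposes (requires pyopencl).
# Raw specs like compute units and memory size are only a rough guide;
# real app performance depends on how well the code path uses them.
import pyopencl as cl

for platform in cl.get_platforms():
    for dev in platform.get_devices():
        print(f"{platform.name} / {dev.name}: "
              f"{dev.max_compute_units} compute units, "
              f"{dev.global_mem_size // (1024 ** 2)} MB global memory")
```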
 
^^^
I'm very surprised. I thought the GPU was underpowered, as I kept reading complaints from gamers about Apple not using a 980M; I didn't realise it was strong at OpenCL.
I'm glad I went with the upgraded GPU now, as I almost didn't bother due to the lack of app support, and I don't game on it.
Hopefully it should perform well in LR, as 5K is definitely slower at rendering previews on screen.
 