Photoshop Unblur

Here's another example of what was possible, even in 1999*, which I just found in a newsgroup thread about the Adobe software.

http://www.maxent.co.uk/example_1.htm

This isn't a new idea by any means. What's new is that desktop processors can contemplate doing the sums... albeit slower than we might prefer.

*The poster says it's apparently much older, from about 1980. Not sure what kind of processes would have been involved back then. It probably involved an Apple II and about six months sitting around waiting. Or a man with a slide rule.
 
Amazing what they can do.

I wonder how long it will take the Media to run all their blurry stock photographs of Politicians wandering around with documents tucked under their arms through this process :)
 
Here's another example of what was possible, even in 1999*, which I just found in a newsgroup thread about the Adobe software.

http://www.maxent.co.uk/example_1.htm

This isn't a new idea by any means. What's new is that desktop processors can contemplate doing the sums... albeit slower than we might prefer.

*The poster says it's apparently much older, from about 1980. Not sure what kind of processes would have been involved back then. It probably involved an Apple II and about six months sitting around waiting. Or a man with a slide rule.

The maths and theory behind this have been around since at least the early 70s.

In fact, a lot of the basic technology in modern digital photography has its roots in the early 70s: the first spy satellites, etc.
 
Let's think about it in more detail. A blurred picture is simply the same picture superimposed with multiple copies of itself, each moved a little bit.

A simple 1D example. Let's suppose we consider just a few pixels in a row.

The perfectly sharp series should be (say) 0 a b c 0

Let's say it's shifted by one pixel; the result (not normalised) will be 0 a b+a c+b 0+c 0

Can we get the original back?

If we know it has shifted by one pixel, then we just need to shift the blurred sequence by one and subtract - so yes.

The question is: how do we know it has shifted by just one pixel? I'm guessing that this is the clever bit - deriving the camera movement while the shutter was open, with no other information than the blurred image itself.

I'm guessing that they look for shifts that increase edge contrast; in essence they try to 'focus' the image by successive approximation of the shift.

Clever stuff - I'm really keen to see how it works in practice....
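The 1D example above can be sketched in a couple of Python functions (the values for a, b and c are made up for illustration):

```python
def blur_shift1(sharp):
    """Superimpose the signal with a copy of itself shifted one pixel
    to the right (unnormalised): 0 a b c 0 -> 0 a b+a c+b 0+c 0."""
    return [(sharp[i] if i < len(sharp) else 0) +
            (sharp[i - 1] if i > 0 else 0)
            for i in range(len(sharp) + 1)]

def deblur_shift1(blurred):
    """Recover the sharp signal: working from the left, each recovered
    pixel is the blurred pixel minus the previously recovered one."""
    out, prev = [], 0
    for y in blurred:
        prev = y - prev
        out.append(prev)
    return out[:-1]  # drop the trailing padding pixel

sharp = [0, 2, 5, 3, 0]       # 0 a b c 0 with a=2, b=5, c=3
blurred = blur_shift1(sharp)  # [0, 2, 7, 8, 3, 0]
```

Running `deblur_shift1(blurred)` gives back `[0, 2, 5, 3, 0]`, which is exactly the shift-and-subtract trick described above.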
 
So surely this would require new cameras to record extra data, to incredible precision, about the orientation and movement of the camera at the point of exposure? And it wouldn't work for objects moving within the frame. There's also the fact that many people consider a photo that is simply out of focus to be 'blurred'...
 
I'm not sure new hardware is needed - one of the examples was a mobile phone capture. Your brain can in most cases evaluate and compute the shift in an image; generally everything will be moving in one flowing direction. Obviously computers don't have the computing power of a brain, but I can't see why it would be hard for one to do the analysis - it's the pixel-deep correction of every pixel that will slow the PC down. As shown, it seemed to analyse the motion path with ease; it was the correction that he said might take time.

What I'm trying to say is: if the software recognises a fixed point that has shifted over a fixed distance during the time the shutter was open, it can determine the direction and velocity of the movement, then reverse the shift on a pixel-by-pixel basis, keeping the fixed point at its identified original location. You could get a better-looking image, though I'm sure the final image will have fewer pixels than the original (basically a crop).

I don't suppose it can resolve the image shift in a panning shot (not that you'd want to), or fix out-of-focus problems either.
 
So surely this would require new cameras to record extra data, to incredible precision, about the orientation and movement of the camera at the point of exposure? And it wouldn't work for objects moving within the frame. There's also the fact that many people consider a photo that is simply out of focus to be 'blurred'...

No, that is not really correct.
One of the key points in this software demo is that it automatically extracts the motion trajectory, technically known as the Point Spread Function (PSF). This can be done using a Fourier Transform as the basis, with some more complicated maths on top. Think about the simple motion blur caused by panning a camera left to right: what you see in the blur is a set of lines that quite clearly indicate the direction of movement. The maths for working out more complex motions is just a generalised version of this example.
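As a toy illustration of the Fourier route (this is a generic Wiener-style deconvolution with a known PSF in NumPy, not Adobe's actual algorithm - the clever part they demo is estimating the PSF blind, from the image alone):

```python
import numpy as np

# Blurring is convolution with the PSF; in the frequency domain that is a
# simple multiplication, so dividing it back out (with regularisation to
# avoid blowing up near-zero frequencies) undoes the blur.
sharp = np.array([0., 2., 5., 3., 0., 0., 0., 0.])
psf = np.array([0.5, 0.5, 0., 0., 0., 0., 0., 0.])  # 2-pixel motion blur

H = np.fft.fft(psf)
blurred = np.real(np.fft.ifft(np.fft.fft(sharp) * H))

# Wiener-style inverse filter: H* / (|H|^2 + eps)
G = np.fft.fft(blurred)
recovered = np.real(np.fft.ifft(G * np.conj(H) / (np.abs(H) ** 2 + 1e-6)))
```

Here `recovered` comes back essentially equal to `sharp`. With an unknown PSF the division step stays the same; all the difficulty moves into estimating `psf` first.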

You wouldn't necessarily make subject-movement blur go away, but this approach will still work to remove the camera motion blur from a moving subject. In theory I believe it would be possible to apply the same technique to de-blur subject motion; I really don't see why it couldn't. The difficulty is localising the analysis and deconvolution to the subject area, which could trivially be done by hand. There may also be a non-linear interaction between camera blur and subject motion blur that makes things harder.


You can also correct for focus error with similar techniques: look here
http://www.maxent.co.uk/example_2.htm
 
I'm not sure new hardware is needed - one of the examples was a mobile phone capture. Your brain can in most cases evaluate and compute the shift in an image; generally everything will be moving in one flowing direction. Obviously computers don't have the computing power of a brain, but I can't see why it would be hard for one to do the analysis - it's the pixel-deep correction of every pixel that will slow the PC down. As shown, it seemed to analyse the motion path with ease; it was the correction that he said might take time.

What I'm trying to say is: if the software recognises a fixed point that has shifted over a fixed distance during the time the shutter was open, it can determine the direction and velocity of the movement, then reverse the shift on a pixel-by-pixel basis, keeping the fixed point at its identified original location. You could get a better-looking image, though I'm sure the final image will have fewer pixels than the original (basically a crop).

I don't suppose it can resolve the image shift in a panning shot (not that you'd want to), or fix out-of-focus problems either.


"its the pixel deep correction of every pixel that will slow the pc down"

There is no need to do the analysis across all pixels. Essentially every pixel will be affected by more or less the same motion blur once subjects are beyond a certain distance; there will be a parallax effect, but most of the difference will be caused by noise or subject motion. So you could sample the frame sparsely: collect a set of N boxes (N = 20, 50, 100, etc.) of, say, 25x25 pixels (bigger or smaller may help), analyse the boxes individually, and average the results to reduce noise. You might then sample only 10-20% of the total pixels but still obtain a robust estimate, with measurements taken evenly across the frame.
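A rough sketch of that sparse-sampling idea (my own illustration, not the product's code; the per-patch estimate here is a crude structure-tensor direction, on the assumption that motion blur leaves image gradients weakest along the blur path):

```python
import numpy as np

def sample_patches(img, n=20, size=25, seed=0):
    """Grab n boxes spread over the frame -- a sparse sample of the pixels."""
    h, w = img.shape
    rng = np.random.default_rng(seed)
    return [img[y:y + size, x:x + size]
            for y, x in zip(rng.integers(0, h - size, n),
                            rng.integers(0, w - size, n))]

def patch_blur_angle(patch):
    """Motion blur smears detail along its path, so gradient energy is
    lowest in the blur direction: take the structure tensor's minor axis."""
    gy, gx = np.gradient(patch.astype(float))
    J = np.array([[(gx * gx).sum(), (gx * gy).sum()],
                  [(gx * gy).sum(), (gy * gy).sum()]])
    _, vecs = np.linalg.eigh(J)          # eigenvalues in ascending order
    return np.arctan2(vecs[1, 0], vecs[0, 0])

def estimate_blur_angle(img, n=20):
    """Average the per-patch estimates; the angle-doubling trick handles
    the fact that a blur direction is only defined modulo 180 degrees."""
    angles = [patch_blur_angle(p) for p in sample_patches(img, n)]
    return 0.5 * np.angle(np.mean(np.exp(2j * np.array(angles))))

# Synthetic sanity check: horizontally smear a noise image.
rng = np.random.default_rng(1)
img = rng.random((120, 120))
blurred = sum(np.roll(img, k, axis=1) for k in range(9)) / 9
theta = estimate_blur_angle(blurred)  # should land near 0 rad (horizontal)
```

Averaging over patches is exactly the noise-reduction step described above; a real deblurring tool would of course estimate the full blur path, not just its dominant direction.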
 
Saying all of this, it still won't change the fact that you are far better off taking the photo with a proper tripod head, mirror lock-up and a remote release.
Failing that, make sure the shutter speed is amply fast and use good hand-holding technique.

Such software will only be useful for point-and-shoot people who want to try to rescue failed photos. Perhaps some photojournalists will have occasional use for it. Anyone serious about photos will still have to do the right thing at the time of capture if they care about image quality!
 
Apparently the portrait image was artificially blurred, though the blurring was made using camera-shake data from another image.
The other two, the crowd shot and the poster shot, were not tampered with beforehand.

D.P. is right though - it's schoolboy errors that cause this kind of blur, wrong settings etc., and it shouldn't be a problem for someone with experience. However, we are all human and make mistakes, so it could be useful for pros as well.
 
The pro, being a pro, would probably have taken more than one picture and have a similar non-shakey shot to consider, surely?
 
The pro, being a pro, would probably have taken more than one picture and have a similar non-shakey shot to consider, surely?

True - it's the way I work when I have the chance, but some moments happen too fast to rattle off 2 or 3 shots!
 