Python camera movement detection...

Hm.. I'm thinking of starting a project in Python a little something like this: Sentry Gun

It will take input from a camera (possibly thresholded), grab the frame after (thresholded again), then compare the two frames (the thresholding is to make the comparison easier). If there is a major difference, i.e. someone is now in front of the camera, it will do something.

Not too sure how I'm going to do this.... Any idea?
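
For reference, a minimal sketch of that frame-differencing idea using PIL (the filenames and the thresholds are made-up placeholders, not anything final):

Code:
from PIL import Image, ImageChops, ImageOps

# Load two consecutive frames and convert to grayscale for easier comparison
frame1 = ImageOps.grayscale(Image.open('frame1.jpg'))
frame2 = ImageOps.grayscale(Image.open('frame2.jpg'))

# Per-pixel absolute difference between the two frames
diff = ImageChops.difference(frame1, frame2)

# Count pixels that changed by more than a threshold
PIXEL_THRESHOLD = 30
changed = sum(1 for p in diff.getdata() if p > PIXEL_THRESHOLD)

# If a decent fraction of the image changed, something is in front of the camera
if changed > 0.02 * frame1.size[0] * frame1.size[1]:
    print("Major difference - do something")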
 
Without meaning to sound like a ****, learn how to do your own research, specifically - how to use Google:

http://negativeacknowledge.com/2010/06/automated-nerf-turret/

It's all very well asking specific questions on forums if you're having a problem and can't find any material about it, but questions like "Not too sure how I'm going to do this.... Any idea?" just strike me as lazy.

Here endeth the lecture :p
 
I've started doing a load this morning. Got the camera to take images and save them, just need to find a way to do a comparison on both. But thanks anyway, I was mostly asking if people had any ideas that would make this easier or whatever.
 
At uni we did a group project with motion detection. We ended up writing a very simple algorithm that split the image up into a grid, e.g. 10x10 (the finer the grid, the more accurate it was). The "motion detection" checked the pixel colour difference between each grid point on the live frame and the previous frame; if it was over a particular threshold, that point was flagged as motion, and if more than some number of grid points were flagged it would start recording.

Worked fairly well actually.
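
Something along those lines might look roughly like this in Python with PIL (a sketch only; the grid size, thresholds and filenames are made-up values to tune):

Code:
from PIL import Image, ImageOps

GRID = 10              # 10x10 grid of sample points
PIXEL_THRESHOLD = 25   # how much a sample point must change to count as motion
POINTS_NEEDED = 5      # how many changed points trigger recording

def sample_points(img, grid):
    """Grayscale value at one sample point per grid cell."""
    w, h = img.size
    return [img.getpixel((int((x + 0.5) * w / grid), int((y + 0.5) * h / grid)))
            for y in range(grid) for x in range(grid)]

prev = ImageOps.grayscale(Image.open('previous.jpg'))   # hypothetical filenames
curr = ImageOps.grayscale(Image.open('current.jpg'))

changed = sum(1 for a, b in zip(sample_points(prev, GRID), sample_points(curr, GRID))
              if abs(a - b) > PIXEL_THRESHOLD)

if changed >= POINTS_NEEDED:
    print("Motion detected - start recording")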
 
Ok, I just set up a script that will check the two histograms and look for a large difference.
mvoement.png
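
Roughly what that check looks like, as a simplified sketch with PIL (the filenames and the difference threshold are placeholders, not my actual values):

Code:
from PIL import Image, ImageOps

prev = ImageOps.grayscale(Image.open('previous.jpg'))   # hypothetical filenames
curr = ImageOps.grayscale(Image.open('current.jpg'))

# histogram() gives a list of 256 counts for a grayscale image
h1 = prev.histogram()
h2 = curr.histogram()

# Sum of absolute differences between the two histograms
diff = sum(abs(a - b) for a, b in zip(h1, h2))

if diff > 20000:   # made-up threshold, needs tuning against real frames
    print("Large difference - probable movement")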


Thanks for all the tips, I'll probably have the image split up into 4 and check each one to be more specific on where the movement is coming from.
 
Going a bit further, why not make the sensitivity adjustable? If you increase the sensitivity, the grid gets finer: maybe 4x4 at the low end, all the way up to say 1000x1000. Then you can also adjust the percentage of grid cells required to show changes.

I think by being rigid with your grid sizes you may come up against problems with subtle changes.

Further, you may want to, say, only look for changes in certain areas of the image - only at a pathway, not the trees and shrubbery around it - so you may want the ability to flag only certain areas of the image.
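
A sketch of both suggestions together - an adjustable grid size plus a set of "active" cells so only certain areas (the pathway, not the shrubbery) are checked. All the names and numbers here are made up:

Code:
GRID = 16                 # raise this to make detection finer / more sensitive
PERCENT_NEEDED = 5        # % of active cells that must change to trigger

# Only watch these (row, column) cells - everything else is ignored
ACTIVE_CELLS = {(r, c) for r in range(8, 16) for c in range(4, 12)}

def motion_cells(prev, curr, grid, pixel_threshold=25):
    """Grid cells (of two grayscale PIL frames) whose sample point changed a lot."""
    w, h = prev.size
    cells = set()
    for r in range(grid):
        for c in range(grid):
            x, y = int((c + 0.5) * w / grid), int((r + 0.5) * h / grid)
            if abs(prev.getpixel((x, y)) - curr.getpixel((x, y))) > pixel_threshold:
                cells.add((r, c))
    return cells

def triggered(prev, curr):
    hits = motion_cells(prev, curr, GRID) & ACTIVE_CELLS
    return len(hits) * 100 >= PERCENT_NEEDED * len(ACTIVE_CELLS)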
 
Hm.. well there are only 76,800 pixels (320x240), but if I can split the image up 4x4 I could easily make it larger and more precise. I've also looked at some servo controllers and some other things, which should be very easy to use once this part is finished.
 
I was actually going to write something similar in Java, just haven't had time to do it this week.

I studied a bit of computer vision while I was at uni a couple of years back and plan to make use of an algorithm described then.

Two problems with differencing the current frame against the previous frame are that it won't detect someone who has suddenly stopped in the frame (they effectively become invisible), and that a moving object with a colour similar to the background won't be detected (depending on your thresholds, of course).

Also, as someone mentioned, you want to be able to avoid flagging unwanted background movement as motion.

To get round the unwanted motion, and objects disappearing when they suddenly stop, you might want to incorporate a learning phase each time your application starts up. The program learns how much each pixel's value varies over several frames and derives a standard deviation from that, i.e. how much change actually constitutes motion. I should also mention that it might make things easier to work in grayscale rather than full colour :) This helps stop regular movement, such as swaying bushes, from being flagged as motion.
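
A rough sketch of that learning-phase idea, assuming grayscale PIL frames (the 2.5 standard deviation rule and the noise floor are arbitrary choices):

Code:
import math

def learn_background(frames):
    """Per-pixel mean and standard deviation over a list of grayscale PIL frames."""
    n = len(frames)
    data = [list(f.getdata()) for f in frames]
    means = [sum(col) / float(n) for col in zip(*data)]
    stds = [math.sqrt(sum((v - m) ** 2 for v in col) / n)
            for col, m in zip(zip(*data), means)]
    return means, stds

def motion_mask(frame, means, stds, k=2.5):
    """A pixel counts as motion if it is more than k standard deviations away
    from its learned mean (a small floor stops flat areas triggering on noise)."""
    return [abs(v - m) > k * max(s, 2.0)
            for v, m, s in zip(frame.getdata(), means, stds)]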

That is only one approach; you can build on it further (I can't remember the details right now!), and there is a similar technique that doesn't involve a learning phase.

If only I still had my notes from university :(
 
Ok, I switched it to greyscale and set it up to split into a 4x4 grid (can easily be made larger, but for now it's 4x4). When there is motion in one of the squares I want it to be filled red, so when the grid is, say, 32x32 it will show whatever is moving covered in red pretty well. E.g.:
image1pco.jpg

Anyone know the best way to add a colour (red) onto a greyscale image with Python? I've had a look around but can't find anything that works.

Also, once I figure out how to add the red it will be pretty simple; it will just take me a little while to put it all together and it should work quite well.
 
You can process in grayscale, but you don't necessarily need to display in grayscale as well. Just take a copy of the captured frame, work with that, and use the result of the processing to alter the colour image that gets displayed.

How have you converted the image to grayscale, and what sort of object is the grayscale object? i.e. is it still an RGB image or is it an actual grayscale image object?
 
Using PIL (the Python Imaging Library).

Code:
from PIL import Image, ImageOps

img = Image.open('image1.jpg')     # load the captured frame
img = ImageOps.grayscale(img)      # convert to an 'L' (grayscale) mode image
img.save('image1.jpg', 'JPEG')     # save back over the original file

That opens the image, converts it to grayscale, then saves it. So I'm guessing it is actual grayscale, not RGB.
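
(For what it's worth, you can confirm that by checking the image mode, which PIL reports as 'L' for grayscale:)

Code:
print(img.mode)   # 'L' for a grayscale image, 'RGB' for colour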


Quick Update:

It can now detect movement in the top-left corner. Top right, bottom left and bottom right to follow. Then I can hopefully divide it up some more.


73344782.png
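
Roughly how the quadrant split can be done with PIL's crop, comparing each quarter separately (a sketch; the histogram threshold is a placeholder, not my real value):

Code:
def quadrants(img):
    """Crop a PIL image into (top left, top right, bottom left, bottom right)."""
    w, h = img.size
    return [img.crop((0, 0, w // 2, h // 2)),      # top left
            img.crop((w // 2, 0, w, h // 2)),      # top right
            img.crop((0, h // 2, w // 2, h)),      # bottom left
            img.crop((w // 2, h // 2, w, h))]      # bottom right

NAMES = ['top left', 'top right', 'bottom left', 'bottom right']

def changed_quadrants(prev, curr, threshold=20000):
    """Names of quadrants whose histograms differ a lot between two grayscale frames."""
    hits = []
    for name, a, b in zip(NAMES, quadrants(prev), quadrants(curr)):
        diff = sum(abs(x - y) for x, y in zip(a.histogram(), b.histogram()))
        if diff > threshold:
            hits.append(name)
    return hits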
 
Just installed Python to give this image conversion stuff a bash :)

Once you've got your grayscale image, you can convert it back into RGB colourspace (it will still look gray) with:
Code:
rgb_img = img.convert("RGB")

If you display a pixel value of that RGB image, you will get a tuple of three values which are all the same, indicating it is gray:
Code:
rgb_img.getpixel((1, 1))
(81, 81, 81)
Now if you want to shade an area of the image red, all you do is take the value of the pixel and set it to the same value in all channels but with the red channel set to something like 200.

Alternatively, you could modify a second image of the same dimensions, add your red regions to it, then blend the two images together using the blend method, which might give a nicer blended red effect with the background.
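
Putting those two suggestions together, a sketch of highlighting one region in red (the region coordinates and the 0.3 blend alpha are just example values):

Code:
from PIL import Image

gray = Image.open('frame.jpg').convert('L')     # hypothetical frame
rgb = gray.convert('RGB')

# Approach 1: bump the red channel of every pixel inside the motion region
left, top, right, bottom = 0, 0, 160, 120       # e.g. top-left quadrant of 320x240
pixels = rgb.load()
for y in range(top, bottom):
    for x in range(left, right):
        r, g, b = pixels[x, y]
        pixels[x, y] = (max(r, 200), g, b)

# Approach 2: blend the whole frame with a solid red image
overlay = Image.new('RGB', rgb.size, (255, 0, 0))
blended = Image.blend(gray.convert('RGB'), overlay, 0.3)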
 
Thanks, I got a red image and blended it, which then turned it blue, so I changed the red image to blue and set the alpha to about 3. Got this:

blendf.jpg


Seems to work, now just a hell of a lot of testing and I'll see if I can get this working!
 