This chapter does not appear in the book.
This chapter explains how to use a webcam to detect change or movement in a scene. Since this is such a common requirement of video-based systems, I'll describe three different approaches. The first is based on the image differencing of consecutive video frames, the second utilizes background/foreground segmentation, and the third employs optical flow.
The image-difference detector highlights any movement between frames with a pair of crosshairs at the center-of-gravity (COG) of the motion. The application, called MotionDetector, is shown in the picture above.
I'll develop the application in two stages. First I'll focus on the detection problem by implementing code using a JavaCV-based test-rig. In the second step, I'll integrate the resulting detection class with the JavaCV grabber code from Chapter VBI-2, which uses a JFrame and a threaded JPanel. I'll also add the crosshairs graphic, and code to smooth the movement of the crosshairs as they follow the user.
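The core of the image-differencing approach can be sketched without any JavaCV machinery: subtract consecutive grayscale frames pixel-by-pixel, threshold the absolute differences, and average the coordinates of the changed pixels to get the motion's COG. The class name and threshold value below are my own illustrative choices, not taken from the MotionDetector application.

```java
// A minimal sketch of frame differencing on plain grayscale pixel arrays.
// The real MotionDetector works on JavaCV images; this version only shows
// the idea. THRESHOLD is an illustrative value, not the book's.
public class DiffSketch {
  static final int THRESHOLD = 30;   // min. gray-level change counted as motion

  // Return the COG {x, y} of the pixels that changed between two frames,
  // or null if nothing moved. Frames are row-major grayscale arrays.
  public static int[] motionCOG(int[] prev, int[] curr, int width) {
    long sumX = 0, sumY = 0, count = 0;
    for (int i = 0; i < curr.length; i++) {
      if (Math.abs(curr[i] - prev[i]) > THRESHOLD) {  // differencing + threshold
        sumX += i % width;                            // column of pixel i
        sumY += i / width;                            // row of pixel i
        count++;
      }
    }
    if (count == 0)
      return null;
    return new int[] { (int)(sumX / count), (int)(sumY / count) };
  }
}
```

In the full application, the crosshairs would be drawn at the returned COG; returning null when no pixels changed lets the caller leave the crosshairs where they were.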
The second technique, background subtraction, uses a learning algorithm that models the background and foreground of a scene as clusters of Gaussian distributions. It's a more sophisticated detector of movement than image differencing, and is available in JavaCV as the class BackgroundSubtractorMOG2. The screenshot below shows the application in action, with the detected foreground shown in white in the right-hand panel.
The large blue circle drawn on the grabbed image in the left-hand panel represents the center-of-gravity (COG) of the foreground region. I'll explain the code in section 4.
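To give a feel for what BackgroundSubtractorMOG2 is doing, here is a much-simplified sketch that keeps a single Gaussian (a running mean and variance) per pixel rather than MOG2's mixture of Gaussians. A pixel far from its model is labeled foreground; every frame also nudges the model toward the current values. The class name, learning rate, and threshold are my own illustrative choices.

```java
// Simplified single-Gaussian-per-pixel background model (MOG2 keeps a
// *mixture* per pixel; this sketch keeps one). LEARN_RATE and K are
// illustrative values, not MOG2's defaults.
public class BGModelSketch {
  private final double[] mean, var;                 // per-pixel Gaussian
  private static final double LEARN_RATE = 0.05;    // background adaptation speed
  private static final double K = 2.5;              // std. devs for "foreground"

  public BGModelSketch(int numPixels) {
    mean = new double[numPixels];
    var  = new double[numPixels];
    java.util.Arrays.fill(var, 100.0);              // initial variance guess
  }

  // Classify each pixel as foreground (true) or background (false),
  // then fold the frame into the running model.
  public boolean[] apply(int[] frame) {
    boolean[] fg = new boolean[frame.length];
    for (int i = 0; i < frame.length; i++) {
      double diff = frame[i] - mean[i];
      fg[i] = (diff * diff) > K * K * var[i];       // outside K std. devs?
      // update the Gaussian with exponential moving averages
      mean[i] += LEARN_RATE * diff;
      var[i]  += LEARN_RATE * (diff * diff - var[i]);
    }
    return fg;
  }
}
```

After enough frames of a static scene, the per-pixel variances shrink, so even small changes stand out as foreground; MOG2's mixture additionally copes with multi-modal backgrounds such as swaying branches.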
The third movement-processing technique I'll look at is (sparse) optical flow, which matches up 'features' (pixels or small groups of pixels) that occur in consecutive video frames. My OpticalFlowMove application is illustrated in the screenshot below. Each feature movement is shown as a blue arrow, and the overall COG of these direction vectors is denoted by a large red circle.
This optical flow code will be examined in section 5.
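The summarizing step behind the arrows and red circle can be sketched on its own: given feature positions matched across two frames (which, in the real application, JavaCV's Lucas-Kanade optical flow supplies), average the feature positions to get the COG and average the per-feature displacements to get an overall direction. The class and method names below are illustrative, not from OpticalFlowMove.

```java
// Summarize a set of matched features as a COG plus a mean movement
// vector. prevPts and currPts are parallel arrays of {x, y} positions
// for the same features in two consecutive frames.
public class FlowSummary {
  // Returns {cogX, cogY, meanDx, meanDy}.
  public static double[] summarize(double[][] prevPts, double[][] currPts) {
    double cogX = 0, cogY = 0, dx = 0, dy = 0;
    int n = prevPts.length;
    for (int i = 0; i < n; i++) {
      cogX += prevPts[i][0];
      cogY += prevPts[i][1];
      dx += currPts[i][0] - prevPts[i][0];   // per-feature movement vector
      dy += currPts[i][1] - prevPts[i][1];
    }
    return new double[] { cogX / n, cogY / n, dx / n, dy / n };
  }
}
```

In the application, each (prev, curr) pair would be drawn as a blue arrow, and the returned COG as the large red circle.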