This chapter does NOT appear in the book.
In the previous two chapters I looked at how to distinguish the user from the background of a Kinect camera image. Chapter 2.3 explained the coding technique (which combines the image, depth, and user generator nodes). The essential idea is to convert each camera frame into a BufferedImage with the background pixels made transparent; a new 'virtual' background is then drawn behind the image. Chapter 2.4 implemented user/scene interaction using the fact that the non-user pixels in the image are invisible.
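As a reminder of the Chapter 2.3 approach, the following sketch copies a camera frame into an ARGB BufferedImage and zeroes the alpha of every pixel that the user generator has labelled as background. The array names (cameraPixels, userLabels) and the method name are placeholders of my own, not the book's actual code.

    import java.awt.image.BufferedImage;

    public class UserExtractor
    {
      // convert a camera frame into an image whose background pixels are transparent;
      // userLabels holds one user ID per pixel, with 0 meaning "not part of any user"
      public static BufferedImage toTransparentImage(
                        int[] cameraPixels, int[] userLabels, int width, int height)
      {
        BufferedImage im = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
        for (int i = 0; i < width*height; i++) {
          int rgb = cameraPixels[i] & 0x00FFFFFF;       // keep only the RGB bits
          int alpha = (userLabels[i] != 0) ? 0xFF : 0;  // opaque user, transparent background
          im.setRGB(i % width, i / width, (alpha << 24) | rgb);
        }
        return im;
      }
    }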
This chapter focuses on how the user image can be changed without affecting the virtual background. This is surprisingly easy because standard Java image processing techniques, such as blurring and pixel color effects, can be utilized. There are several imaging libraries that offer such effects, and I've used Jerry Huxtable's JH Labs image filters for the examples here.
Library methods usually apply their effects to all the pixels in an image, but since the background pixels in the Kinect image are transparent, changes to those parts will typically remain invisible.
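For example, applying a JH Labs filter is just a matter of calling its filter() method (the filters implement Java's standard BufferedImageOp interface) and then drawing the result over the virtual background. The sketch below uses the library's ChromeFilter; the image variables (backgroundIm, userIm) and the painting method are my own, not KTransformer's actual code.

    import java.awt.Graphics2D;
    import java.awt.image.BufferedImage;
    import com.jhlabs.image.ChromeFilter;

    public class EffectPainter
    {
      private ChromeFilter chrome = new ChromeFilter();

      public void paintFrame(Graphics2D g2d,
                             BufferedImage backgroundIm, BufferedImage userIm)
      {
        BufferedImage filteredIm = chrome.filter(userIm, null);  // filter the user image
        g2d.drawImage(backgroundIm, 0, 0, null);   // draw the virtual background first
        g2d.drawImage(filteredIm, 0, 0, null);     // then the filtered user on top
      }
    }

Since the JH Labs effects share the BufferedImageOp interface, the ChromeFilter could be swapped for any other filter without changing the painting code.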
The top of the page shows two screenshots of the KTransformer program, with the "Chrome" effect selected from the menu. The effect is applied to consecutive frames coming from the Kinect camera, so the user remains chrome-plated as he walks around.
KTransformer also allows a filter's parameters to be changed at run time. For example, the screenshots below show the "Dissolve" filter in action: its "density" parameter cycles from 0 to 1 and back again over the course of a few seconds, making the user repeatedly fade away and reappear.
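The following is a minimal sketch of that kind of parameter cycling, assuming JH Labs' DissolveFilter and its setDensity() method; the step size and the clamping are my own choices rather than KTransformer's code.

    import java.awt.image.BufferedImage;
    import com.jhlabs.image.DissolveFilter;

    public class DissolveCycler
    {
      private DissolveFilter dissolve = new DissolveFilter();
      private float density = 0.0f;
      private float step = 0.02f;     // tune so a full cycle takes a few seconds

      // called once per Kinect frame with the transparent-background user image
      public BufferedImage nextFrame(BufferedImage userIm)
      {
        density += step;
        if (density >= 1.0f || density <= 0.0f)
          step = -step;               // reverse direction at the ends of the range
        dissolve.setDensity(Math.max(0.0f, Math.min(1.0f, density)));
        return dissolve.filter(userIm, null);
      }
    }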