This chapter does NOT appear in the book.
In this chapter, I get a chance to go outside, without leaving the safety of my cubicle.
I use the Kinect to identify the background of my work area, and replace it with a picture of a park. This might remind you of blue-screen compositing, a standard element of TV weather forecasting, but I'm not using that color-based approach. Instead, I utilize the Kinect's ability to identify body shapes to decide which pixels to make transparent. Bodies are drawn unchanged, while all the other pixels in the image are made transparent, allowing a static background image (e.g. of a park) to be seen behind them.
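To make the masking idea concrete, here's a minimal sketch (not the chapter's actual code). It assumes the camera frame is stored as ARGB pixels in an int array, with a same-sized user map of user IDs alongside it; the names BodyMasker, argbPixels, and userLabels are illustrative.

public class BodyMasker
{
  // make every background pixel transparent, leaving user pixels untouched
  public static void maskBackground(int[] argbPixels, short[] userLabels)
  {
    for (int i = 0; i < argbPixels.length; i++) {
      if (userLabels[i] == 0)          // 0 means background...
        argbPixels[i] &= 0x00FFFFFF;   // ...so zero its alpha channel
      // user pixels (labels 1, 2, ...) keep their original alpha
    }
  }
}

When the masked frame is drawn over the park picture (e.g. via Graphics2D onto a TYPE_INT_ARGB BufferedImage), the transparent background pixels let the park show through, while the bodies stay opaque.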
The process involves three OpenNI generator nodes: an image generator for the camera image, a depth generator for the depth map, and a user generator. The user generator returns a labeled user map, in which each pixel holds a user ID (1, 2, etc.), or 0 to mark it as part of the background. My code uses the 0-labeled positions in the user map to set the corresponding pixels in the image map to transparent. The main steps, summarized in the diagram below, are explained in more detail in the rest of the chapter.
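The sketch below shows how the three generators might be created and the labeled user map retrieved, based on OpenNI's Java wrapper (org.openni). It's a simplified outline rather than the chapter's code: error handling is trimmed, the viewpoint-alignment call is my assumption about how to line the maps up, and exact calls may differ between OpenNI versions.

import java.nio.ShortBuffer;
import org.openni.*;

public class BodyMaskSetup
{
  private Context context;
  private ImageGenerator imageGen;   // camera (RGB) image
  private DepthGenerator depthGen;   // depth map
  private UserGenerator userGen;     // labeled user map

  public BodyMaskSetup() throws GeneralException
  {
    context = new Context();
    imageGen = ImageGenerator.create(context);
    depthGen = DepthGenerator.create(context);
    userGen = UserGenerator.create(context);

    // align the depth (and user) maps with the camera image, so pixel
    // (x,y) refers to the same scene point in all three maps
    depthGen.getAlternativeViewpointCapability().setViewpoint(imageGen);

    context.startGeneratingAll();
  }

  // return the labeled user map: each element is a user ID (1, 2, etc.)
  // or 0 for a background pixel
  public ShortBuffer getUserMap() throws GeneralException
  {
    context.waitAnyUpdateAll();
    SceneMetaData smd = userGen.getUserPixels(0);   // 0 == all users
    return smd.getData().createShortBuffer();
  }
}

Each call to getUserMap() would feed a masking step like the one sketched earlier, with the ShortBuffer's elements supplying the user labels for the corresponding camera pixels.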