This chapter does NOT appear in the book.
In the previous chapter I showed how it was possible to use the OpenNI generator nodes for imaging, depth data, and user IDs to replace the background of the camera picture. The end result looks a bit like the blue screening used on TV, but the quality is poorer, with jagged edges around the user's outline.
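To make the rest of this chapter easier to follow, here is a minimal sketch of that masking idea, assuming the camera frame, the replacement background picture, and the user-ID map have already been obtained and are the same size. The class and method names are my own invention for illustration, not code from the previous chapter:

import java.awt.image.BufferedImage;

public class BackgroundReplacer
{
  // userMap[i] holds the user ID for pixel i (0 means "no user")
  public static BufferedImage replaceBackground(
              BufferedImage cameraIm, BufferedImage backIm, int[] userMap)
  {
    int width = cameraIm.getWidth();
    int height = cameraIm.getHeight();
    BufferedImage result =
          new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);

    for (int y = 0; y < height; y++) {
      for (int x = 0; x < width; x++) {
        int i = y*width + x;
        if (userMap[i] != 0)    // pixel belongs to a user: keep the camera pixel
          result.setRGB(x, y, cameraIm.getRGB(x, y));
        else                    // background pixel: use the replacement picture
          result.setRGB(x, y, backIm.getRGB(x, y));
      }
    }
    return result;
  }  // end of replaceBackground()
}  // end of BackgroundReplacer class

The jagged edges mentioned above come from this per-pixel decision: the user-ID map is lower quality than a blue-screen matte, so the boundary between "user" and "background" pixels is rough.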
If changing the background were all this Kinect technique offered, then there would be little point in preferring it over blue screening, which can be implemented without depth and user ID processing (although you do need a colored backcloth). However, the extra information supplied by the Kinect lets me augment the visuals in interesting ways, which I'll discuss in this chapter and the next (chapter 2.5).
This chapter looks at having the user interact with the 'virtual' scene. Often this requires skeletal information from the Kinect (e.g. the position of the user's hands or head). I'll start utilizing skeletons in chapter 4, but there are plenty of forms of interaction that don't need that sort of detail.
The KinectSnow application places the user on a country road in a snowstorm (shown in the screenshot at the top of this page).
The falling snow gradually piles up on top of the person, until he moves. As the screenshots below show, the heaped snow briefly retains the outline of the user's old position, and then starts dropping again.
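One way to get this behavior (a sketch of the general idea, not the actual KinectSnow code) is to treat each snowflake as a particle that keeps falling until the user-map pixel just below it belongs to a user or is the bottom of the map. Since the user map is refreshed every frame, a flake resting on the user's outline loses its support as soon as the user moves, and starts dropping again. The Flake class below, and its field names, are hypothetical:

public class Flake
{
  private double x, y;          // position in user-map coordinates
  private double fallSpeed;     // pixels moved down per update

  public Flake(double x, double y, double fallSpeed)
  {  this.x = x;  this.y = y;  this.fallSpeed = fallSpeed;  }

  /* Move the flake down unless the pixel just below it belongs to a user
     (or the flake has reached the bottom of the map). A flake that was
     resting on the user's outline resumes falling once that pixel is freed. */
  public void update(int[] userMap, int width, int height)
  {
    int col = (int) x;
    int rowBelow = (int) (y + fallSpeed);

    boolean onGround = (rowBelow >= height-1);
    boolean onUser = !onGround && (userMap[rowBelow*width + col] != 0);

    if (!onGround && !onUser)
      y += fallSpeed;    // nothing underneath: keep falling
    // else: rest on the user's outline (or the bottom) until the support goes
  }  // end of update()
}  // end of Flake class

Calling update() on every flake each time a new user map arrives is enough to produce the pile-up and collapse seen in the screenshots.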