JavaArt Chapter 5.  Program Sonification

[Figure: the SoundTrace system]

 

This chapter does not appear in the book.

 

Program sonification (also called auralization) is the transformation of an executing program into auditory information. I'm not talking about an application playing a sound clip, but the entire program becoming the clip or piece of music. The motivation for this unusual transformation is the same as for program visualization: as a way of better understanding what's happening inside code, and as an aid to its debugging and modification.

Music is inherently structured, hierarchical, and time-based, which suggests that it should be a good representation for structured and hierarchical code, whose execution is of course also time-based. Music offers many benefits as a notation, being both memorable and familiar. Even the simplest melody utilizes numerous attributes, such as sound location, loudness, pitch, sound quality (timbre), duration, rate of change, and ordering. These attributes can be variously matched to code attributes, such as data assignment, iteration and selection, and method calls and returns. Moving beyond a melody into more complex musical forms lets us match recurring themes, orchestration, and multiple voices to programming ideas such as recursion, code reuse, and concurrency.

A drawback of music is the difficulty of representing quantitative information (e.g. that the integer x has the value 2), although qualitative statements are relatively easy to express (e.g. that the x value is increasing). One solution is lyrics: spoken (or sung) words to convey concrete details.

I'll be implementing program sonification using the tracer ideas discussed in the last two chapters (i.e. employing the Java Platform Debugger Architecture (JPDA), specifically its Java Debug Interface (JDI) API). The resulting system is shown in the diagram at the top of this page.

When a method is called in the monitored application, a word is spoken (an abbreviation of the method's name); when Java keywords are encountered in the code, musical notes are played; and sound clips are heard when the program starts or ends, and when a method returns.

Sound generation is managed by the SoundGen thread, which reads messages from a queue filled by the tracer. The generator utilizes three sound APIs: java.applet.AudioClip for playing clips, the MIDI classes in the Java Sound API for playing notes, and the FreeTTS speech synthesis system, which is a partial implementation of the Java Speech API 1.0 (JSAPI).

Two short examples of the output from SoundTrace are included in the Downloads section below.

I'll start this chapter by explaining the three sound subsystems for playing clips, notes, and speaking. These are of general use outside of sonification; for example, the Speaker class (implemented with FreeTTS) can be used to pronounce any string, using a range of different voices.

 

Downloads


Dr. Andrew Davison
E-mail: ad@coe.psu.ac.th