Cambridge Arts Festival 2013: The Well

Recently Lush had the opportunity to develop a series of interactive exhibits for the Cambridge Arts Festival in Cambridge, Ontario. Alongside local arts and crafts and a big music and dance lineup, the event has taken on a DIY and “Maker” bent in recent years, and we jumped at the chance to take part.

One of our exhibits was a piece we called the Well. Written in Processing 2, the Well uses a Microsoft Kinect motion sensor to capture a 3D image of each visitor and display it on screen, letting them push and pull a virtual surface simply by moving around in front of the screen. The visuals are accompanied by synthesized sound under MIDI control.

The exhibit grew out of a 3D model of a vibrating drumhead, itself an extension of an earlier simulation we did of string motion in musical instruments. (Check out an HTML Canvas version of the drumhead sim.) The nodes in the Well model are laid out on a square grid; nodes that fall outside the diameter of the disc are simply ignored.
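The disc-on-a-square-grid idea is simple enough to sketch in plain Java (Processing is Java underneath; the class and method names here are our own, not from the exhibit's source):

```java
// Minimal sketch of the Well's grid mask: nodes live on an n-by-n square
// grid, and a node takes part in the simulation only if it falls inside
// the disc inscribed in that grid.
public class DiscMask {
    // True if grid node (x, y) lies within the disc inscribed in an n-by-n grid.
    public static boolean inDisc(int x, int y, int n) {
        double c = (n - 1) / 2.0;            // grid centre
        double dx = x - c, dy = y - c;
        return dx * dx + dy * dy <= c * c;   // radius is half the grid width
    }
}
```

With a mask like this, roughly π/4 of the grid's nodes end up active and the rest are skipped.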

Interaction

Once we chose the Kinect as a method of interaction, the biggest challenge was to integrate its depth data into the simulation.
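One way to picture that integration (a hedged sketch, not the exhibit's actual code; the near/far window and parameter names are hypothetical): each node samples the Kinect depth image at its screen position, and raw millimetre readings inside the chosen window become a target displacement for the surface.

```java
public class DepthMap {
    // Convert a raw Kinect depth reading (millimetres) into a target
    // displacement for a membrane node. Readings outside the near/far
    // window mean "nobody there", so the node relaxes back to rest.
    public static float depthToTarget(int depthMm, int nearMm, int farMm, float maxDisp) {
        if (depthMm < nearMm || depthMm > farMm) return 0f;
        float t = (farMm - depthMm) / (float) (farMm - nearMm); // 1 = closest
        return t * maxDisp;                                     // closer pushes harder
    }
}
```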

Next, we had to tame the input to prevent the often-drastic changes in depth readings from throwing the surface into chaos. To do this, we damped the motion of the nodes, put an upper bound on their velocity, and added a factor that makes the surface “adhere” to the user.
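Those three tricks can be sketched as a per-node update step (a simplified sketch under our own names and constants; the real model also couples each node to its neighbours):

```java
public class NodeStep {
    // One velocity/position update for a single node. `target` is where
    // the depth data wants the node to be.
    public static float[] step(float pos, float vel, float target,
                               float spring, float damping,
                               float adhesion, float vMax) {
        float force = spring * (target - pos);      // restoring pull
        vel = (vel + force) * damping;              // damp the motion
        vel += adhesion * (target - pos);           // "adhere" to the user
        vel = Math.max(-vMax, Math.min(vMax, vel)); // upper bound on velocity
        return new float[] { pos + vel, vel };
    }
}
```

The velocity cap is what keeps a sudden jump in the depth reading from flinging the surface; the adhesion term keeps it tracking the user once they are in range.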

Sound

To further enliven the experience we wanted to add generated sound. Synthesizing audio on the fly in Processing proved too slow, and simple playback of pre-generated files wasn’t suited to creating the constant, continuously evolving sound we were after. Instead, we investigated MIDI control, allowing Processing to control external synths.
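As an illustration of the idea (not the exhibit's actual code), Java's built-in javax.sound.midi package can build the control-change messages Processing would send out; here a normalised 0–1 “surface energy” value is mapped onto a 7-bit controller value. The energy-to-CC mapping and names are our own.

```java
import javax.sound.midi.InvalidMidiDataException;
import javax.sound.midi.ShortMessage;

public class SurfaceCC {
    // Map a normalised 0..1 activity level onto a MIDI control-change
    // message (7-bit value, 0..127) on the given channel/controller.
    public static ShortMessage energyToCC(double energy, int channel, int controller) {
        int value = (int) Math.round(Math.max(0.0, Math.min(1.0, energy)) * 127);
        try {
            return new ShortMessage(ShortMessage.CONTROL_CHANGE, channel, controller, value);
        } catch (InvalidMidiDataException e) {
            throw new IllegalStateException(e); // channel/controller out of range
        }
    }
}
```

A message like this would then go to the DAW through a virtual MIDI port via a `Receiver`'s `send()` method.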

We chose the REAPER digital audio workstation due to its low cost and low processor overhead, running a freely available analog modelling synth plugin called TyrellN6, and created custom patches in conjunction with reverb and comb filtering to generate a “wind tunnel” sound.

Finally, to animate the exhibit during idle periods, we also introduced a subtle “water drip” effect that perturbs a random point on the surface from time to time. An echoing drip sound, created using filtered noise and reverb, helps to sell the visual.
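The drip itself amounts to an occasional random impulse (again a sketch with hypothetical names; `vel` holds the node velocities and the grid is square):

```java
import java.util.Random;

public class Drip {
    // With a small per-frame probability, kick a random in-disc node
    // downward, like a water drop striking the surface. Returns the
    // struck node as {x, y}, or null if no drip happened this frame.
    public static int[] maybeDrip(float[][] vel, double chance,
                                  float impulse, Random rng) {
        if (rng.nextDouble() >= chance) return null;
        int n = vel.length;
        double c = (n - 1) / 2.0;
        while (true) {
            int x = rng.nextInt(n), y = rng.nextInt(n);
            double dx = x - c, dy = y - c;
            if (dx * dx + dy * dy <= c * c) { // accept only in-disc nodes
                vel[y][x] -= impulse;
                return new int[] { x, y };
            }
        }
    }
}
```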

How it went

Kids gravitated to the Well immediately — it helped that we positioned it close to the floor so even our youngest visitors could take a turn. It was popular enough that later in the day we added a second version of the installation at a more comfortable height for adults. Despite this station’s lack of speakers it was also well received, though we heard a few shouts of, “Check this one out, it’s got sound!” as kids raced on to the floor-mounted version.

Challenges

The inefficiency of Processing code turned out to be a major bottleneck. As a result, the simulation had to be run at a lower resolution than we had hoped, both in terms of screen size (720p) and the detail of the membrane. This version of the Well is 85 nodes in diameter, using only a fraction of the data available from the Kinect. To improve the efficiency of the membrane model, we tried an old trick used in programming cellular automata, using a one-dimensional array rather than the more straightforward 2D array. This required rewriting a good deal of the model’s code, but unfortunately didn’t up our frame rates by much.
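For the curious, the trick looks like this (a sketch, not the Well's actual source): storing the grid in one flat array means a node's neighbours sit at fixed offsets from it, so the inner loop avoids a multiply and a second array dereference on every access.

```java
public class FlatGrid {
    // Flatten 2D coordinates on an n-wide grid to a single index.
    public static int idx(int x, int y, int n) { return y * n + x; }

    // Sum of the four neighbours of interior node i -- the quantity a
    // membrane model needs at every node, every frame. Neighbours are
    // simply i +/- 1 (left/right) and i +/- n (up/down).
    public static float neighbourSum(float[] pos, int i, int n) {
        return pos[i - 1] + pos[i + 1] + pos[i - n] + pos[i + n];
    }
}
```

On the JVM the win is modest, since the JIT already optimises 2D access reasonably well, which matches our experience with the frame rates.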

For our next project of this type we’re planning to go native, most likely using C++ with a library such as Cinder or openFrameworks.

We’ll have more about our installation projects in the weeks to come. Many thanks to the Cambridge Arts Festival volunteers who helped us with setup and teardown, and to CAF’s organizer, the indefatigable Gareth Carr, for inviting us to take part.