Data and music

22 March 2013 - Kev Kirkland

Saw an interesting talk at Pervasive Media Labs about real-time sensors. X-io have designed a board which broadcasts data for the ‘music gloves’ used by Imogen Heap (amongst other things, see http://www.x-io.co.uk/).

A chat with Verity afterwards opened up the discussion of non-visual representations of data. In the same way that we can animate data changing over time (e.g. see GapMinder), we can animate sounds over time. Given that a lot of people prefer taking in information through audio, it’s worth looking into. I couldn’t think of many ways that data has been expressed with sound. I guess the classic example would be a Geiger counter, where the intensity of radiation is mapped to the rate of clicks - highly informative (but not destined to become a best seller).

This might work for detecting correlation. For example, you could map the house price in an area to a sound whose pitch rises with the price, and map another sound to the average life expectancy (perhaps with a third ‘mean’ sound as a reference point). Each area would have a different ‘song’. It would be an interesting experiment to see whether listeners pick up different patterns from those seen in a visualisation - there’s a rough sketch of the idea below. When looking at a picture we have the freedom to jump to any point which looks interesting, but do we miss more subtle patterns because of this? Would a linear form like sound/music give us an insight into other patterns?
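As a rough sketch of how such a mapping might work (using only Python’s standard library; the area figures and the A3-A5 pitch range here are made up purely for illustration):

```python
import math
import struct
import wave

SAMPLE_RATE = 44100

def tone(freq, duration=0.8, volume=0.3):
    """Pure sine tone as a list of float samples in [-1, 1]."""
    n = int(SAMPLE_RATE * duration)
    return [volume * math.sin(2 * math.pi * freq * i / SAMPLE_RATE) for i in range(n)]

def value_to_freq(value, lo, hi, f_lo=220.0, f_hi=880.0):
    """Linearly map a data value onto a two-octave pitch range (A3..A5)."""
    return f_lo + (value - lo) / (hi - lo) * (f_hi - f_lo)

# Made-up per-area data: (mean house price in GBP, life expectancy in years)
areas = [(150000, 78.2), (320000, 82.5), (95000, 74.9), (450000, 83.1)]
prices = [p for p, _ in areas]
lives = [l for _, l in areas]

samples = []
for price, life in areas:
    a = tone(value_to_freq(price, min(prices), max(prices)))
    b = tone(value_to_freq(life, min(lives), max(lives)))
    # Sound both mapped pitches together, so each area becomes one 'chord'
    samples.extend(x + y for x, y in zip(a, b))

with wave.open("areas.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)  # 16-bit
    w.setframerate(SAMPLE_RATE)
    w.writeframes(b"".join(
        struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767)) for s in samples))
```

Playing the result back, areas where both values are high sound high-pitched all round, and a mismatch between the two tones hints at the two variables diverging.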

One danger is that dissonance (e.g. when the interval between mapped data points happens to be a flattened 5th) might make the relationship sound ‘bad’ when it could indicate a positive relationship in the data. Sound already carries a ‘meaning’ beyond the value we’ve assigned to it in the mapping. To mitigate this, it would probably make more sense to map the data points to some kind of filter applied to a constant motif instead - sketched below.
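A minimal sketch of that alternative, again with made-up data: the motif’s notes stay fixed, and each data value only sets the cutoff of a simple low-pass filter, i.e. how bright the motif sounds, so no new intervals (and no accidental dissonance) are ever introduced:

```python
import math
import struct
import wave

SAMPLE_RATE = 44100
MOTIF = [261.63, 329.63, 392.00, 329.63]  # fixed C-E-G-E figure, same for every area

def motif_samples(note_len=0.25, volume=0.4):
    """The constant motif, rendered as a bright sawtooth so filtering is audible."""
    out = []
    for freq in MOTIF:
        n = int(SAMPLE_RATE * note_len)
        out += [volume * (2.0 * ((freq * i / SAMPLE_RATE) % 1.0) - 1.0) for i in range(n)]
    return out

def lowpass(samples, cutoff_hz):
    """One-pole low-pass filter; a higher cutoff lets more brightness through."""
    dt = 1.0 / SAMPLE_RATE
    alpha = dt / (1.0 / (2 * math.pi * cutoff_hz) + dt)
    out, prev = [], 0.0
    for s in samples:
        prev += alpha * (s - prev)
        out.append(prev)
    return out

# Made-up data, normalised to [0, 1], one value per area
values = [0.1, 0.8, 0.4, 1.0]

samples = []
for v in values:
    # The data only drives timbre (200 Hz dull .. 5 kHz bright); the pitches
    # and intervals of the motif never change
    samples.extend(lowpass(motif_samples(), 200.0 + v * 4800.0))

with wave.open("motif.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)  # 16-bit
    w.setframerate(SAMPLE_RATE)
    w.writeframes(b"".join(
        struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767)) for s in samples))
```

The trade-off is that brightness is probably a coarser channel than pitch, so fine differences in the data may be harder to hear - but at least the listener’s musical intuitions aren’t fighting the mapping.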