Movement and sound are intrinsically linked in music creation, a connection often lost in digital sound generation. Wearable music seeks to restore it by merging digital creation with human movement. Jacques Attali's book "Noise: The Political Economy of Music" predicted an era of democratized music creation, now possible with real-time signal processing and interpreted programming environments. Yet the language of electronic music often implies control rather than creative expression. Wearable music instead encourages playful interaction and a connection to the body and to movement, and it allows for collectively playable instruments that respond to group dynamics. It can be adapted to a broad spectrum of embodied experiences and sensorimotor diversity, challenging the screen-based interaction paradigm: even though the sound is produced electronically, it is expressed through the body, integrating dance and motion into the music.
In the late 1970s, the French economist and social theorist Jacques Attali wrote a fascinating book called “Noise: The Political Economy of Music.” This book announced an era of “composition” that democratizes musical creation. In this era, people no longer merely consume music but create it for their immediate enjoyment. They start using old instruments in new ways, or making new, handmade ones themselves without predefined uses. Attali says that this amounts to “a reappearance of very ancient forms of production.”
Now if we fast-forward to our era of real-time signal processing and interpreted programming environments, this statement about “ancient forms of production” becomes even more interesting, because it is the computer that makes you into this ancient luthier, this magician who can bend and flex the wooden body of the violin using code.
But Attali’s sentiment about the democratization of music still has a long way to go before it becomes reality. For one thing, we still conceptualize electronic music through very combative metaphors and gendered tropes. Think about the words “command” and “trigger”: they are the bread-and-butter vocabulary of computer music, and new musical interfaces or instruments are typically called “controllers,” because you’re controlling the computer in a new way. That vocabulary implies a lot about how we think about, and dream about, making music with machines. There’s a book by Tara Rodgers called “Pink Noises” that advocates for dismantling discourses of control and putting a poetics of wonder and awe in their place.
I’ve been working on a project for several years that I call “wearable music,” which is a nice pun that suggests playful interactions rather than mastery or control as well as a relationship to the body and to movement. You can do amazing things with wearable music, like turn the whole body into an instrument, or build collectively playable instruments that respond to the movement dynamics of groups of people and things. When you do this you’re really sidestepping the controller paradigm, because you’re short-circuiting the intentions of the individuals. You’re making more ecological instruments and less egocentric, and potentially less anthropocentric ones.
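To make the idea of an “ecological” rather than egocentric instrument concrete, here is a minimal sketch (not the project’s actual implementation) of how one might drive sound parameters from the collective motion of several wearers instead of any one person’s gesture. The function name, the sensor format, and the specific mappings are all hypothetical choices for illustration: the group’s average motion energy sets loudness, and the divergence between movers bends the pitch, so no single wearer fully controls the result.

```python
import math

def group_motion_to_sound(accel_streams):
    """Map the collective motion of several wearers to sound parameters.

    accel_streams: list of (x, y, z) acceleration tuples, one per wearer.
    Returns (frequency_hz, amplitude), driven by the group's combined
    movement rather than by any individual's intention.
    """
    # Motion energy per wearer: magnitude of the acceleration vector.
    energies = [math.sqrt(x * x + y * y + z * z) for (x, y, z) in accel_streams]

    group_energy = sum(energies) / len(energies)  # the group's average energy
    spread = max(energies) - min(energies)        # how unalike the movers are

    # Hypothetical mapping: collective energy sets loudness (clamped to 1.0),
    # and divergence within the group raises the pitch above a 220 Hz base.
    amplitude = min(1.0, group_energy / 20.0)
    frequency_hz = 220.0 * (1.0 + spread / 10.0)
    return frequency_hz, amplitude

# Three wearers moving gently and similarly -> a unified, moderate tone
# near the 220 Hz base pitch.
freq, amp = group_motion_to_sound(
    [(0.1, 0.2, 9.8), (0.0, 0.3, 9.8), (0.2, 0.1, 9.8)]
)
```

The point of the sketch is the design stance, not the numbers: because every parameter is a function of the ensemble, the mapping short-circuits individual intention by construction.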
A genuinely expressive instrument must be capable of sounding bad, but there’s also a place for instruments that turn the most ordinary and basic gestures into rich and beautiful music right off the bat. I’m part of a project supported by the National Science Foundation in the United States that’s exploring wearable music in the context of remote learning and of disability and neurodiversity, in particular autism. With wearable music, we can convey that the body is the producer of a melody and a voice, and the design plasticity of wearables lets you adapt them to a broader spectrum of embodied experiences and sensorimotor diversity. In this way, you start to work your way out of the cognitivist, isolationist, and linear assumptions that are a big part of the screen-based interaction paradigm.