ASU Learning Sparks
The Future of Music: Human-Computer Interaction in Digital Music
Traditional musicians are deeply immersed in their craft, often playing without conscious thought. This raises the question: can digital music creation ever achieve a similar level of embodiment? The field of human-computer interaction (HCI) has long been dominated by a cognitivist perspective, but research on new musical instruments is pushing the field towards an enactive approach. Interaction with a computer is often limited and normative, unlike the rich, expressive interaction with a musical instrument. By observing acoustic instruments, we can gain insights for HCI design. Acoustic instruments are parametrically dense, offering rich and complex expressive potential. The dream of eliminating the interface overlooks the fact that good musical instruments resist you, offering physical and conceptual pushback. By considering the computer as an instrument, we can develop more expressive and tangible ways of interacting with it. We still have a long way to go, but one day we may make digital music with the same level of physical expression as traditional music.
What can acoustic instruments teach us about how to create digital instruments, and what can that process teach us about the interaction between bodies and computers?
Now, I say bodies, and not brains, because for a long time research into human-computer interaction, or HCI, was dominated by a cognitivist perspective, the idea that making sense of the world is a calculative, representational, linear procedure, rather than a holistic, relational, and embodied experience.
But research in new musical instruments has helped to push scholars in cognition and perception towards a better understanding of the continuous feedback loops between action and perception. This has been called the “enactive” approach to cognition.
When using a computer, you’re basically an eyeball and a couple of fingers. The mouse plus keyboard plus screen paradigm is very constricted, and it’s also very normative: it makes a lot of assumptions about an idealized, neurotypical user.
Think about the difference between interacting with a computer and playing a musical instrument. Even if you’ve never heard the term, you’re probably familiar with the graphical interface paradigm called WIMP, or “window, icon, menu, pointer.” You’re confronted with choices you have to make, and you click your way through the options on the screen. Compare this to a violinist’s virtuosic performance, which represents an outstanding degree of gestural skill, nuance, and expression. That kind of performance is a very rich substrate for the study and design of HCI, because it’s all about moment-to-moment auditory and tangible feedback and the simultaneous control of many parameters.
When we observe someone playing an acoustic instrument, we can learn a lot that has implications for HCI design. Acoustic instruments are parametrically dense, meaning that if you change one thing—say the pressure of the bow on the strings—you get multiple degrees of variation in the response. Things aren’t just mapped one-to-one—”I do X, then Y happens.” It’s much harder to articulate than that, and precisely that richness and complexity are marks of an instrument with great expressive potential. Of course, this also implies that an expressive instrument needs to be capable of sounding bad.
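To make that contrast concrete, here is a minimal sketch, in Python, of the difference between a one-to-one control mapping and a parametrically dense, one-to-many mapping in a digital instrument. The function names, parameters, and formulas are illustrative assumptions, not anything drawn from the talk itself.

```python
# Minimal sketch: one-to-one versus one-to-many control mappings.
# All names and formulas here are hypothetical, for illustration only.

def one_to_one(pressure: float) -> dict:
    # "I do X, then Y happens": one control drives exactly one parameter.
    return {"amplitude": pressure}

def one_to_many(pressure: float, bow_speed: float) -> dict:
    # A single change in bow pressure ripples through several coupled
    # synthesis parameters, and its effect also depends on bow speed.
    return {
        "amplitude": pressure * bow_speed,
        "brightness": 0.3 + 0.7 * pressure,           # more pressure, harsher tone
        "noisiness": max(0.0, pressure - bow_speed),  # excess pressure starts to scratch
    }

print(one_to_one(0.8))
print(one_to_many(0.8, 0.5))
```

Expressive instruments tend to live in the second regime, which is also why they can sound bad when the controls are pushed the wrong way.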
The violin doesn’t make decisions about your gestures; it doesn’t classify them; it just responds in a nuanced and sensitive way. Acoustic instruments invite us to philosophize about the implications for the cybernetic scenario we often imagine, in which a computer couples directly to your brain and funnels out a piece of music right as you think it, as if the material world were somehow an obstacle to creativity rather than its bedrock.
This dream of eliminating the interface misses the phenomenological observation that good musical instruments resist you at every turn: they physically push back against you—the violin strings and bow hair are taut—but they are also conceptually recalcitrant: it is difficult or impossible to codify all of the things that happen in response to fine changes in pressure and contact. If we start to think of the computer as an instrument, then we can come up with much more expressive and tangible ways of relating to it.