Gretchen Jude*

Technology does not meet a body. Instead, the matters, flows, forces and intensities of the corporeal link and connect with other flows and the forces and materials of the technology and different bodily and technological multiplicities are elaborated.
(Currier 2003: 331)

The development of sensor technology for performance is by nature broadly multi-disciplinary and experimental (as well as inherently collaborative, and thus ontologically complex). Yet the realm of digital sound provides the earliest examples of artists developing such technology for live performance settings. Musicians continue to lead the way in creative uses of electronics onstage, while, at the same time, blurring the lines between performance genres.

Video 1

RSC actors integrating wearable sensor technology in preparation for 2016 production of The Tempest

In the following, I give an overview of interfaces that have been used in live performance since the 1980s, focusing on experimental uses of digital audio controllers. How do these interfaces imply a move beyond instrumentalization—as something other than a device manipulated by a body to change perceptible parameters? I then contrast two ways of analyzing such technology as it is used in stage performance. What are the drawbacks of Donna Haraway’s cyborg trope—the radical apotheosis of the prosthetic model—and how might a Deleuzean assemblage model provide a productive alternative? Finally, engaging a feminist notion of assemblage, I examine a work by Japanese composer Tomomi Adachi. As digital technology promises (or threatens, depending on one’s standpoint) to become increasingly ubiquitous—exemplified by the Royal Shakespeare Company’s 2016 mixed reality production of The Tempest, which incorporated cutting-edge motion capture and imaging technology[1]—the necessity for analytic lenses to understand such technology also intensifies, for performing artists and critics alike.

Video 2

“Voice and Infrared Sensor Shirt—ADACHI Tomomi.” Video of live performance with digital controller, 2009

On April 3, 2010, I attended a show at 21 Grand, a now-defunct experimental music venue in Oakland, California, the East Bay of the San Francisco area. Japanese experimental vocalist, composer, sound poet and instrument builder, Tomomi Adachi, was visiting as part of Thingamajigs’ Pacific Exchange Series and performed a riveting improvised set with his Infrared Sensor Shirt. While the show at 21 Grand was not documented, Adachi’s set was similar to (though considerably longer than) a performance video from the same period found on his website.

Adachi enters the space singing eccentrically, as if to himself, and proceeds to put on a headset mic and a pink shirt strung with wires and cables. His voice, now amplified, becomes increasingly strange, as it is processed through the laptop nearby. As he and the shirt move, it becomes clear that the permutations and deformations of Adachi’s voice are not prerecorded but occur in real-time, manipulated live as he wriggles and tweaks. The sounds shift with each gesture, cascading in complex patterns that are nonetheless somehow related to Adachi’s movements within the shirt. The effect is mesmerizing—at turns hilarious, disturbing and visceral.

The Infrared Sensor Shirt, which runs on Max/MSP[2] software, was designed, developed and constructed by Adachi himself, most intensively between 2004 and 2009.[3] Sensors and switches attached to the shirt are assigned different functions and variables within the digital audio software environment. Adachi’s voice is input through the microphone and digitally processed by Max/MSP, which is controlled by the performer’s manipulation of the hardware in the shirt—although, as Adachi himself points out, the “control” is far from complete.[4]
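Max/MSP is a visual patching environment, so Adachi’s actual mapping exists as a graphical patch rather than as code. Purely as an illustrative sketch—in Python, with hypothetical sensor names and parameter ranges, not Adachi’s real assignments—the underlying logic of routing sensor readings to voice-processing parameters might look like this:

```python
def scale(value, in_min, in_max, out_min, out_max):
    """Linearly map a raw sensor reading onto a parameter range."""
    value = max(in_min, min(in_max, value))  # clamp to the sensor's range
    ratio = (value - in_min) / (in_max - in_min)
    return out_min + ratio * (out_max - out_min)

def map_sensors_to_dsp(readings):
    """Hypothetical mapping: infrared sensor readings (0-1023)
    to effect parameters applied to the live voice."""
    return {
        "pitch_shift_semitones": scale(readings["left_sleeve"], 0, 1023, -12.0, 12.0),
        "delay_time_ms":         scale(readings["right_sleeve"], 0, 1023, 10.0, 500.0),
        "ring_mod_freq_hz":      scale(readings["collar"], 0, 1023, 0.0, 200.0),
    }

# One snapshot of the shirt's state yields one set of effect parameters:
params = map_sensors_to_dsp({"left_sleeve": 512, "right_sleeve": 0, "collar": 1023})
```

The point of the sketch is simply that each gesture continuously re-weights several processing parameters at once, which is why the resulting sound is correlated with, but never fully “controlled” by, the performer’s movement.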

Since the early 2000s, wearable technology has become an increasingly familiar concept, one recent example being the popular Fitbit fitness tracking device. However, Adachi’s sensor shirt sprang not from the desire to create a wearable interface, but, rather, from a search for more organic relations with his (software) tools, with the aim to “connect a sound processing techniques (sic) with a physical gesture that is accompanied with utterance.”[5] While Adachi’s sensor shirt is, to my knowledge, unique in its form and functionality, there are many glove-like digital controller/instruments currently in use, each tailored to the aesthetic of the musician who developed it for their use in performance: Laetitia Sonami’s Lady’s Glove (which she first prototyped in 1991 and flamboyantly retired from use in 2014)[6], as well as more recent examples, such as Pamela Z’s SensorPlay, Franziska Baumann’s Sensorglove[7], and pop musician Imogen Heap’s[8] glove team.

Pamela Z with SensorPlay controller. From the artist’s website; photo: Goran Vejvoda

Whether due to music’s reliance on performance instruments or due to sound being digitized earlier than video (in the late 1970s as opposed to the mid-1980s, in the case of recording technology), many of the earliest examples of digital performance entail sound-making. Facilitated by affordable integrated circuits, as well as the MIDI (Musical Instrument Digital Interface) protocol, which was standardized by the music industry in 1983, experimental artists had, by the middle of the same decade, independently created rudimentary versions of performance devices and systems which, over thirty years later, still look and function much like those in use today. In what follows, I examine two early works which typify not only sensor-based digital performance technology but also continuing trends in the branch of computer science known as human-computer interaction (HCI): The Hands (version 1, 1984; version 2, 1990; version 3, 2000)[9], created by composer Michel Waisvisz at Amsterdam’s Studio for Electro-Instrumental Music (STEIM)—the studio which subsequently supported the development of Sonami’s and Baumann’s instruments—and Canadian artist David Rokeby’s Very Nervous System interactive installation (developed 1982-1990).

Michel Waisvisz with The Hands, 1984. From the artist’s website

Waisvisz started work on The Hands immediately after the release of MIDI. Developed by a coalition of synthesizer manufacturers from the USA and Japan, MIDI facilitated the smooth and easy exchange of information between computers and electronic instruments such as synthesizers. For Waisvisz, as for many other artists, MIDI was nothing short of revolutionary: “MIDI suddenly allowed me to think of mini-keyboards, fitted to my hands and littered with various movement sensors to translate hand, arm and finger movements immediately into sounds.”[10] The Hands, accordingly, has characteristics of both an acoustic musical instrument played by the action of the player’s fingers, and an algorithmic composition executed by software. From the beginning, digital sensor technology blurred and complicated the ontological bounds of traditional Western musical categories of instrument, composition and performance.
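At the byte level, MIDI is a simple protocol: each message consists of a status byte (message type plus channel number) followed by one or two data bytes. A minimal sketch in Python—not specific to The Hands, whose internal mappings were Waisvisz’s own—shows how a finger press or a tilt-sensor reading could become a standard MIDI message:

```python
def note_on(channel, pitch, velocity):
    """Build a 3-byte MIDI Note On message:
    status byte 0x90 | channel, then pitch and velocity (each 0-127)."""
    return bytes([0x90 | (channel & 0x0F), pitch & 0x7F, velocity & 0x7F])

def control_change(channel, controller, value):
    """Build a 3-byte MIDI Control Change message (status 0xB0 | channel)."""
    return bytes([0xB0 | (channel & 0x0F), controller & 0x7F, value & 0x7F])

# A key press on a hand-mounted mini-keyboard might send a discrete note:
msg = note_on(0, 60, 100)       # middle C on channel 1
# while a movement sensor might stream continuous controller data:
cc = control_change(0, 1, 64)   # modulation controller, mid-range value
```

Because any device that emits such bytes can drive any MIDI-compliant synthesizer, the protocol decoupled the physical interface from the sound source—the decoupling that made instruments like The Hands conceivable.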

David Rokeby with Very Nervous System, 1993. From the artist’s website

Trained as an experimental visual artist, David Rokeby nonetheless also used MIDI as a core element of his Very Nervous System. However, rather than following the template of musical instrument building[11] like Waisvisz, Rokeby instead conceived of an interactive installation that repurposed video cameras as rudimentary motion trackers, with a computer translating detected movement into MIDI messages controlling synthesizers—a leap of imagination that preceded Microsoft’s Kinect motion-sensing game controller by more than two decades. Thus, unlike The Hands, which functioned as an instrument-like device to be “played” by a performer in a concert setting, Rokeby’s Very Nervous System engaged any passerby who entered the installation space. Audiences were invited to cause sound themselves with their bodily movements. For Rokeby, this interaction offered nothing short of a mystical experience:

The active ingredient of the work is its interface. The interface is unusual because it is invisible and very diffuse, occupying a large volume of space, whereas most interfaces are focused and definite. Though diffuse, the interface is vital and strongly textured through time and space. The interface becomes a zone of experience, of multi-dimensional encounter. The language of encounter is initially unclear, but evolves as one explores and experiences. The installation is a complex but quick feedback loop. The feedback is not simply “negative” or “positive,” inhibitory or reinforcing; the loop is subject to constant transformation as the elements, human and computer, change in response to each other. The two interpenetrate, until the notion of control is lost and the relationship becomes encounter and involvement. The diffuse, parallel nature of the interaction and the intensity of the interactive feedback loop can produce a state that is almost shamanistic. The self expands (and loses itself) to fill the installation environment, and by implication the world.[12]

The committee of the Prix Ars Electronica concurred about the merits of Very Nervous System, awarding Rokeby the Award of Distinction for Interactive Art in 1991.
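Rokeby’s system worked by comparing successive video frames to detect movement in the space. His actual implementation is not public in detail, so the following is only a toy reconstruction of the principle—frame differencing, in Python, with frames reduced to short lists of grayscale pixel values and an assumed mapping onto MIDI velocity:

```python
def motion_intensity(prev_frame, curr_frame):
    """Sum of absolute pixel differences between two grayscale frames (values 0-255)."""
    return sum(abs(a - b) for a, b in zip(prev_frame, curr_frame))

def intensity_to_velocity(intensity, max_intensity):
    """Map motion intensity onto the 0-127 MIDI velocity range."""
    if max_intensity == 0:
        return 0
    return min(127, (intensity * 127) // max_intensity)

# Two tiny 4-pixel 'frames': a body sweeping through the middle of the image.
still  = [100, 100, 100, 100]
moving = [100, 150, 200, 100]
vel = intensity_to_velocity(motion_intensity(still, moving),
                            max_intensity=4 * 255)
```

The feedback loop Rokeby describes arises because the sound triggered by one movement provokes the next movement, which the camera registers in the following frame—the human and the system continually re-shaping each other’s input.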

Video 3

Very Nervous System (1987 version). Video of dancer Leslie-Ann Coles improvising with David Rokeby’s interactive sound installation

As computing increases in speed and sophistication, becoming central to more and more fields of human activity, computer scientists work to replace the desktop model with more sophisticated models of human-computer interaction, such as Tangible User Interfaces (TUIs). Since the mid-1990s, research on TUIs has aimed to develop HCI beyond the so-called WIMP (windows, icons, menus, pointer) interface (Shaer and Hornecker 2009: 7-8). Even though innovation in digital audio control and interactivity preceded the establishment of HCI as a field, HCI research on the design and application of interfaces makes it possible to recognize patterns beyond the instrumental model inherited from the field of music. Reflecting HCI research, I adopt the term TUI to denote devices and systems that “rely on embodied interaction, tangible manipulation, physical representation of data, and embeddedness in real space” (Hornecker and Buur 2006: 437). Having common terminology may bridge practice across genres and media, and encourage critical discourse to keep pace with the speed of innovation in the field.

The two early sensor-based works I described above highlight two key aspects of TUIs: tangible manipulation and spatial interaction. Seen in this way, devices such as The Hands, which users physically manipulate (usually but not always with the hands), function in an inverse fashion to setups such as Very Nervous System, which provide sensor-embedded spaces that users activate by moving bodily through them. Whereas the former function according to the logic of musical instruments, the latter are now being developed most actively for use in dance performance (both with and without music, as I will touch on at the end of this paper).

Kim and Seifert point out that conceptions of the role of the body in performance lag behind engineering innovations of performance technologies. The notion of the extended instrument, which “augment[s] the control dimensions of a musical instrument by means of interactive computer systems that translate and manipulate input data to output data,” is illustrative of the general discursive construction of digital control devices (2006: 140). Understanding technology as an extension of the body is a hallmark of prosthetic logic.

According to Mills (2012), throughout its development, communications technology has been widely described as prosthetics which augment human capacities; a wearable device, for example, is understood to be extending the power of the user—whether by enhancing their senses (for example, Facebook’s Oculus Rift headset, which purports to immerse wearers in a virtual reality of sight and sound) or by amplifying the power and range of expressive choices in live performance (exemplified by Waisvisz). No sooner does a new device become available than concern about its social ramifications arises—even erupting into violent controversy, such as assaults on Google Glass headset wearers in San Francisco in 2014. I propose that, while every technological innovation inevitably spawns unforeseeable ripple effects, critiques of the pros and cons of particular devices leave underlying discursive formations unexamined. Below, I present discourse-level analytic frameworks and examine their limitations and uses.

Currier takes aim at the view of technology as prosthesis, asserting that “whatever permutations arise from a prosthetic encounter between bodies and technologies, they remain bound within the logic of identity or sameness that structures all binary oppositions” (2002: 529). Such a view limits analysis of technology’s effects, since the prosthetic equation relies upon “a self-identical and unified self” as its assumed starting point, to which is added a non-self or “‘non-body’ force or entity” (530). Thus, prosthetic logic stymies understanding of anything beyond detachable (technological) modules added to a stable (human) self.

Currier also critiques the influential figure of the feminist cyborg famously articulated by Donna Haraway in 1985, linking the cyborg to the prosthetic model. Currier claims that, although the cyborg is intended to provide an alternative to masculinist and colonialist models of subjectivity, it cannot fully leave behind the logic of identity—precisely because it relies on the prosthetic model (2003: 323). The feminist cyborg, as a hybrid body “at once organic and inorganic, machine and flesh” appears to destabilize and undermine binary logic (322). Yet, the cyborg’s radical edge is blunted by the logical flaw underlying its use as analytic tool: “in order to fabricate the hybrid and intermingled cyborg one must first begin with the discrete component entities which are precisely those elaborated within the logic of identity” (323). In other words, Haraway’s cyborg ultimately reifies the categories of “body” and “technology” as ontologically distinct and stable, since the not-yet-technologized body logically precedes the cyborg’s creation. Thus, the hybrid nature of the cyborg not only ultimately maintains the distinction between human and machine; it also implies a binary opposition between human and cyborg—precisely because, by definition, a hybrid is created from entities that are not itself (323).

Because the discourse of prosthesis holds that technology extends and reshapes the human subject, it cannot escape the logic of identity—and, accordingly, the cyborg ultimately “negates any possibility of autonomy and difference of technologies” (Currier 2002: 529). Currier proposes the notion of assemblage as an alternative way to analyze and understand humans’ interaction with technology beyond the prosthetic model of “connection by addition” (531). Unlike the binary categories that comprise both the prosthetic and cyborg models, the elements that make up an assemblage cannot be understood “as unified, stable, or self-identical entities or objects” (531). Thus, elements considered as part of an assemblage resist binarism.

Utilizing assemblage theory in performance analysis entails a shift in focus, from “what things are” to “what things do” (Currier 2002: 534). Furthermore, since spatio-temporal conditions of varying scales are also part of an assemblage, analysis requires context-specific modes of thinking (2003: 326). “Becoming,” as opposed to “being,” also challenges the notion of a fixed or essential self, promoting process-based understandings (333-34). Assemblage theory also necessitates the release of “certain notions of revolution, utopia and progress” (336)—a relinquishment of teleology.

As Puar insists in her recuperation of intersectionality, social constructionist modes of political engagement can still operate in tandem with assemblage theory, just as we can retain our identities based on experiences of social belonging even as we engage with/in assemblage theory as a mode of analysis. Consonant with Currier’s shift toward becoming, Puar insists, “identification is a process; identity is an encounter, an event, an accident . . . multicausal, multidirectional, liminal” (2012: 59). If we keep in mind the fact that “the construct of the subject is itself already normative” (63), digital performance events come into focus as charged with Massumian significance as “sense, value, force” (60)—as affective intensification, as bodies, devices, spaces, energies, institutions and constitutions converge and spark into the weird liveliness of assemblage.

Adachi with Infrared Sensor Shirt, 2009. From the artist’s website

Conclusion: Reading Performance through Assemblage Theory

The energy and impulses of bodies and electronic circuitry combine and find new forms, and they are traversed by flows of light, information, signs, sociality, sexuality, conversation, and contact that give rise to differing meanings, experiences, and configurations of bodies and technologies. (Currier 2002: 535)

The air is charged with possibility. (Jasbir Puar 2012: 61)

Remembering my experience of Adachi’s 2010 live performance in Oakland, California, as I write this now in the summer heat of 2017 Japan, my understanding of his work shimmers mirage-like, warping through the multiple lenses of my many viewings of videos of his similar (but not the same) performances of the piece. As a critical viewer/listener, how can I account for the myriad complications of experience afforded by technology—as Adachi and his sensors interface with me and my laptop systems?

Tomomi Adachi’s voice, body and electronics blend. Visible and aural gestures connect to sounds that seem to originate from the mouth—but then there are those that are more clearly linked to the twitch of a hand, a hip. Why is the voice so central? Visible bodily movement joins sonic gesture in a perplexing causality. We all see that sound when the pianist pounds the keys, but this player’s digital manipulation of “his own” voice raises the stakes exponentially: Adachi’s very vocalizing underlines the presence of his moving human body as the source of that sound. Yet when that voice is deformed, what is happening to the body that I hear? What is the shape of that throat now, what movement of the tongue produced this glitch, that stutter? The computer seems to breathe, the twinned speakers exhale endless, electric. When visible gesture and audible body are combined, the sound-affect intensifies—and for this very reason, for all the ways that an electronic/vocal sound can intrude so rudely, viscerally, into my listening body. A voice processed live resists my immediate analysis. As logic retreats blinking, I fall back on tropes of power and control, the familiar human/prosthesis emerges (my cousin the cyborg), as I flounder to regain stability of my own organic self. The foreignness of the body I see/hear bears some relation to my sonic experience, even as I remember this Japanese figure as a resident of Berlin, performing in a venue that no longer exists. There are/were no words accented to configure our Otherness in comfortable lines. Years later, I am sitting in Japan and writing, watching, remembering and it feels like we are lost together, drifting in lightning-charged bursts. Of synapse flashes in time with analog applause.

My performative writing attempts to realize an expansion of critical language that can account for the augmentations of performance technology. As artists and audiences become increasingly enrapt by TUIs and other digital technologies, we would do well to ask: how can such devices/networks further our engagement in the critical processes of creation? How are communications technologies even now pivotal to our understanding of ourselves as part of/distinct from/in relation to them? (How can it be that even some hands holding iPhones can act like a rock band?[13])

Video 4

“Mubu Funk Scot Share—Atau Tanaka at TedxPantheonSorbonne.” Video demo of Tanaka’s iPhone instrument

Work that links sound and bodies ripples outward, into dance both with and without sound. For instance, a piece by composers Adrian Freed and John MacCallum, in collaboration with choreographer Teoma Naccarato, which I saw in progress in 2013 at UC Berkeley’s CNMAT, places the dancer within a Kinect-bounded space; the artists choose movements that shift certain digital sound parameters (or sounds that match certain bodily gestures). The usual creative roles become blurred: is the dancer “playing” the sound through her movement? Is the composer/programmer creating an architectural space for the dancer to sound? In contrast, choreographer Yoko Ando drops sound and works directly with digital visualizations of movement, using the wearable sensor system Reactor for Awareness in Motion at Japan’s YCAM InterLab. One can only imagine this as the start of ripples that have the potential to affect all genres of performance in unforeseeable futures.

Video 5

“X_Duet for dancer and kinect”: Teoma Naccarato rehearsing with interactive sound system developed in collaboration with
Adrian Freed and John MacCallum at Center for New Music and Audio Technologies (CNMAT), University of California, Berkeley (USA)
Video 6

“Reactor for Awareness in Motion (RAM) Yoko Ando with Scene ‘Line’”: Yoko Ando Joint Research and Development Project
at Yamaguchi Center for Arts and Media (YCAM) InterLab, Yamaguchi (Japan)

In conclusion, innovations in sound performance presage on-stage performances utilizing sensor technology. The combination of “voice” (as a stand-in for a human body) and “device” (digitally manipulated in real-time by the vocalist) creates something beyond the Harawayian cyborg equation of “human + technology.” Instead, the sounding assemblage implicates a far more complex set of elements and relations, not only:

the performer/composer/singer,

the player/hacker/mover/dancer,

the listener/viewer/interactive-installation-user,

but also:

the programmers who developed the software,

the engineers and factory workers who created the hardware,

all the way down to:

the electrical grid and the humming current that animate the whole system.

How critics and audiences refocus their understandings of themselves as part of such assemblages will be crucial in years to come, as tangibles, wearables and sensor-laden “mixed reality” spaces become more and more normal(ized). Similarly, artists wrestling with the implications of the more obscure(d) elements of such assemblages will be better equipped to create work that is more than an R&D demonstration of devices as (saleable) prosthetic commodities. As the pace of innovation continues to increase, the question of whether and how to extricate ourselves from these quickly-morphing assemblages will fade in importance, as we imagine and enact strategies for facing our own post-cyborg existence.


[1] An expensive production which received mixed reviews; for example, J. Wakefield’s analysis at:
[2] The most recent update of this visual programming language is known as Max 7. For more details, visit:
[3] Although in an email interview (October 28, 2016), Adachi told me that a recent improvement to the work allows the shirt to be washed.
[4] Email to the author, October 28, 2016.
[5] Full statement by artist may be found at:
[9] For more on the development of this work, see Torre, Andersen and Baldè (2016).
[10] Full statement by artist may be found at:
[11] Cf. Tanaka (2000).
[12] Full statement by artist may be found at:
[13] For more information, visit:

Works Cited

Currier, Dianne. “Assembling Bodies in Cyberspace: Technologies, Bodies and Sexual Difference.” Reload: Rethinking Women + Cyberculture. Ed. Mary Flanagan and Austin Booth. Cambridge, MA: MIT Press, 2002. 519-38.

—. “Feminist Technological Futures: Deleuze and Body/Technology Assemblages.” Feminist Theory 4.3 (2003): 321-38.

Hornecker, Eva, and Jacob Buur. “Getting a Grip on Tangible Interaction: A Framework on Physical Space and Social Interaction.” Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 2006. 437-46.

Kim, Jin Hyun, and Uwe Seifert. “Embodiment: The Body in Algorithmic Sound Generation.” Contemporary Music Review 25.1-2 (2006): 139-49.

Mills, Mara. “Media and Prosthesis: The Vocoder, the Artificial Larynx, and the History of Signal Processing.” Qui Parle: Critical Humanities and Social Sciences 21.1 (2012): 107-49.

Puar, Jasbir. “‘I Would Rather Be a Cyborg Than a Goddess’: Becoming Intersectional in Assemblage Theory.” philoSOPHIA 2.1 (2012): 49-66.

Shaer, Orit, and Eva Hornecker. “Tangible User Interfaces: Past, Present, and Future Directions.” Foundations and Trends in Human-Computer Interaction 3.1-2 (2009): 1-137.

Tanaka, Atau. “Music Performance Practice on Sensor-based Instruments.” Trends in Gestural Control of Music. Ed. Marcelo M. Wanderley and Marc Battier. Paris: Ircam–Centre Pompidou, 2000. 389-406.

Torre, Giuseppe, Kristina Andersen and Frank Baldè. “The Hands: The Making of a Digital Musical Instrument.” Computer Music Journal 40.2 (2016): 22-34.

*Gretchen Jude is a scholar and performing artist, and a Ph.D. candidate in Performance Studies at the University of California Davis. Her dissertation engages with intersections of voice and audio technology in Japanese experimental and popular music. She also holds an M.F.A. in Electronic Music and Recording Media from Mills College, as well as koto certification from the Sawai Koto Institute in Tokyo. Gretchen’s M.F.A. thesis was a live performance with a gestural controller for use with Japanese koto zither that she designed, built and played. In both her academic work and in her performance research, Gretchen aims to synthesize and harmonize personal, embodied experience with the rapid changes in culture and machinery that both empower and impinge upon us.

Copyright © 2017 Gretchen Jude
Critical Stages/Scènes critiques e-ISSN: 2409-7411

This work is licensed under the
Creative Commons Attribution International License CC BY-NC-ND 4.0.

Tangible User Interfaces in Vocal Performance: The Sounding Body as Digital Assemblage