Artistic Collaboration in an Interactive Dance and Music Performance Environment: Seine hohle Form, a project report

Abstract

Unique and largely unexplored problems face composers and choreographers as they collaborate on interactive performance works, not the least of which is settling on schemes for mapping the various parameters of human movement to those possible in the world of sound. The authors' collaborative piece, "Seine hohle Form", is used as a case study in the development of effective mapping strategies, focusing on the mapping of dance gesture to real-time music synthesis. The perceptual correlation of these mapping strategies is stressed, albeit through varying levels of abstraction.

How to Cite

Weschler, R., Weiss, F. and Rovan, J.B., 2001. Artistic Collaboration in an Interactive Dance and Music Performance Environment: Seine hohle Form, a project report. Body, Space & Technology, 2(1). DOI: http://doi.org/10.16995/bst.255


Project URL: www.palindrome.de (see in particular the short video excerpt of "Seine hohle Form" under the link "pictures, videos and sounds").

1. Introduction

The use of choreographic gesture as a control component in music composition/performance for dance has been a concern of choreographers and musicians for almost half a century. As electronic instrument builders of the 20th century struggled to devise effective interfaces for their unique instruments, choreographers such as Merce Cunningham offered the surprising option of extending the concept of gestural control to the world of dance. The Cage/Cunningham experiments of the 1960s using Theremin technology to sense body motion are only one example of this line of experimentation, which continues today.

When musical control was relinquished to dance gesture, the union of open-air (non-contact) gesture to sound raised many intriguing questions. Even though the technology has progressed to the point where current dance systems rely on sophisticated video tracking instead of the antennae of a Theremin, the cause-and-effect relationship between sound and gesture has remained an elusive problem. To this day, most interactive dance/music systems have relied on fairly simple relationships between gesture and sound, such as the basic presence or absence of sound, volume control, and possibly pitch control.

The lack of progress has been complicated by the tenuous threads of communication between the computer music and dance fields. Indeed, although much work has been done recently in the world of computer music by composers/performers developing and composing for gestural controllers, the world of dance has remained largely isolated from these developments.

Today's tools, however, provide the possibility of rich relationships between dance and music in interactive systems. Real-time software for music synthesis and digital signal processing (e.g., MAX/MSP, developed by Miller Puckette and David Zicarelli, and jMAX, developed at IRCAM in Paris) is readily available and runs on standard desktop and laptop computers (Macintosh and Linux PCs). Likewise, comparable developments in video image tracking/processing as a source of gestural information (e.g., Palindrome's EyeCon system) have given composers and choreographers powerful tools with which to harness the expressive gestures of dance. Still, the remarkable lack of communication between the two fields, and the often-limited concept of interaction in this context, have constrained, in the authors' opinions, the expressive possibilities of such collaborative work.

Working alternately in Nürnberg, Germany, and Denton, Texas, Palindrome Inter-media Performance Group and the Center for Experimental Music and Intermedia (CEMI) have explored these issues in their ongoing work together. A body of interactive dance/computer music works is emerging, as well as a dance-specific vocabulary of gesture mappings between movement-recognition and real-time digital sound synthesis.

2. Mapping

In an interactive system, sensors are responsible for "translating" one form of energy into another. Specifically, the physical gestures of dance are translated via sensors, analog/digital converters, and so on into a signal representation inside the computer. Once the gesture is available as an abstract value expressed as computer data, however, an important question arises: what do we do with it?

"Mapping" is the process of connecting one data port to another, somewhat like the early telephone operator patch bays. In our case mapping has a very specific connotation—it means the applying of a given gestural data, obtained via a sensor system, to the control of a given sound synthesis parameter. The dramatic effectiveness of a dance, however, invariably depends on myriad factors—movement dynamics of body parts and torso, movement in space, location on stage, direction of focus, use of weight, muscle tension, and so on. And although sensors may be available to detect all of these parameters, the question remains: which ones to apply in a given setting, and then to which of the equally many musical parameters to assign it.

Herein lies the basic quandary. Making these mapping choices, it turns out, is anything but trivial. Indeed, designing an interactive system is something of a paradox. The system should have components (dance input, musical output) that are clearly autonomous, but which, at the same time, show a degree of cause and effect that creates a perceptual interaction. Unless the mapping choices are made with considerable care, the musical composition and choreography can easily end up being slaves to the system. In some cases, interaction might not occur at all: not in a technical sense (the movement will indeed control the music), but in the sense that no one, except perhaps the performers, will notice that anything special is going on!

Some have argued that it is largely irrelevant whether or not an audience is aware that interaction is taking place (through technological means). Even if the artist is completely alone in experiencing the interactivity, for some it may be enough that the system of interaction "privately" affects the performer's expression within the piece. The audience is thus vicariously part of the interactive experience.

Palindrome Inter-media Performance Group has pursued a different approach. We have attempted instead to design a degree of transparency into our collaborative works. This pursuit logically raises two possibilities:

One is for the choreographer and composer to create their work especially for a given technological system. Not, of course, that every dance gesture needs to trigger every musical event; there is actually considerable leeway in this regard. Palindrome's performing experience has shown that, generally speaking, when only part of a piece is really clear and convincing in its interactive relationships, audiences tend to accept additional, more complex relationships. They become "attuned", as it were, to the functionality of the piece.

The second possibility, which does not exclude the first, entails developing deliberate and targeted mapping strategies. This is a more complicated, but rewarding approach, since it means that the technical system is born out of a need to serve the artistic vision, instead of the other way around. Herein lies the central focus of our work.

Mapping strategies should focus and harness the decisive qualities or parameters of the movement and sound, while acknowledging the perceptual dimensions of dance and music. The perception of human movement or sound can, after all, differ vastly from the visual or acoustic information actually present. That is, the video-camera and computer (or other sensor system) "sees" dance differently than we do.

While this distinction may seem somewhat arcane, it lies in fact at the heart of our quest. The first step in assigning mappings is to identify these "decisive parameters" within the dance (or perhaps a small scene thereof). The possibilities which EyeCon makes available are outlined below.

From this point, the work may go in two directions. On one side, the tendency for the choreographer is to seek out parallels between these chosen movement artifacts and those available within the music control system (in our case, within the MAX programming environment). On the other side, there are compositional concerns as well; hence, the choreography may be designed or redesigned to achieve musical phrases according to the demands of the composition.

While the amount of give-and-take in such a collaboration varies (not to mention the direction thereof: who is "giving" and who "taking"), some letting go of habituated methods of working and collaborating is inevitable. Either or both collaborating artists generally need to modify their artistic priorities.

Still, in the best case, such a collaborative endeavor stands to generate a vocabulary, even a semiotic structure, for dance-music communication with enormous expressive potential.

3. Gestural Coherence

Just as is true of the sound world, we do not perceive the human body in motion in a very objective or scientific way. What we perceive in dance is highly filtered and often illusory; the choreographer and dancer work hard to achieve this effect. A given movement quality, such as "flow", may dominate our perception of a phrase so thoroughly that the individual shapes of the body go unnoticed. At another moment, geometrical shapes may override our perception of how the body is moving through space. And of course sound, particularly musical sound, has a powerful effect on how we perceive dance.

Our projects in Palindrome have explored these issues of perception and movement. In particular, we have concentrated on the notion of "gestural coherence"; that is, the perceptual coherence between sound and the movement that generates it. Within the context of this search, we make the following postulations:

An emergent integrity arises when the relationship between the dance and music systems is "believable".

Believability depends upon gestural coherence.

Gestural coherence is achieved through a system of mapping that mediates the two parallel structural systems (musical and choreographic).

Musical structure emerges from dance gesture through a schema that provides for a mixture of the following gesture-to-synthesis parameter mapping strategies (sketched in code after the list):

- one-to-one, or "direct" mapping

- one-to-many, or "divergent" mapping

- many-to-one, or "convergent" mapping
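To make the three strategies concrete, here is a minimal Python sketch of our own devising; the gesture features and synthesis parameters are invented placeholders, not the actual assignments used in the piece.

    # Hypothetical sketch of the three mapping strategies listed above; the
    # gesture features and synthesis parameters are invented placeholders.

    def direct(gesture):
        """One-to-one: a single gesture feature drives a single parameter."""
        return {"amplitude": gesture["height"]}

    def divergent(gesture):
        """One-to-many: one feature fans out to several parameters."""
        h = gesture["height"]
        return {"amplitude": h,
                "grain_density": 5 + 95 * h,
                "cutoff_hz": 200 + 7800 * h}

    def convergent(gesture):
        """Many-to-one: several features combine into a single parameter."""
        blend = (0.5 * gesture["speed"]
                 + 0.3 * gesture["expansion"]
                 + 0.2 * gesture["symmetry"])
        return {"amplitude": blend}

    gesture = {"height": 0.8, "speed": 0.4, "expansion": 0.6, "symmetry": 0.9}
    for strategy in (direct, divergent, convergent):
        print(strategy.__name__, strategy(gesture))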

4. Application: "Seine hohle Form"

The words "seine hohle Form" are a fragment from the poem "Gesichter" by Rainer Maria Rilke, roughly translating to "its hollow form." As a starting point for this interactive work, premiered at CEMI in November 2000, the title words serve as an emblem for the interesting challenge of creating a musical work that only exists when a dancer moves, and a dance in which movement must be approached as both functional, music-creating gesture as well as expressive or decorative elements. The collaboration between music and dance on this piece was complete; that is, the movement and sound were not designed separately, but interactively.

The choreography is affected by the live generation of sound through the use of sensors and real-time synthesis, and the resulting music is in turn shaped by these movements. There are no musical cues for the dancers, since without their movements the music is either nonexistent or, at other times, missing key elements. This method of working not only forced an inherent degree of improvisation upon the group, but also prompted a sharing of artistic roles in the working process: dancer became musician, composer became choreographer...

"Seine hohle Form" is not the first interactive computer-controlled dance. As mentioned earlier, interactive dance has a long history. Recent important contributions include the work of David Rokeby, Richard Powall, Troika Ranch, Antonio Camurri, among others. Our work may be unique, however, in the extent to which multi-dimensional mapping strategies are applied within a framework of gestural coherence.

4.1 Technique

In about half of Palindrome's works, the dancers' gestures are tracked using the EyeCon video-tracking system, designed by Frieder Weiß of Palindrome Inter-media Performance Group. EyeCon is based on frame-grabbing technology, that is, the capturing of video images in the computer's memory. By frame-grabbing and processing a dancer's movements, it is possible to convert their gestures into computer data that can then be mapped onto the control of music or other media. For "Seine hohle Form", three small video cameras were set up above and diagonally in front of the stage (see Figure 1).

4.1.1 Movement Tracking and Analysis (EyeCon)

A look at the EyeCon user interface (Figure 2) reveals five open control windows, labeled Elements, Control, Sequencer, Midi Monitor, and licht.Cfg (the name of the currently loaded EyeCon file). The "licht.Cfg" window contains the current video image, in which the current positions of the two dancers can be seen. The green lines around the female dancer are "touchlines". These are the position-sensitive components of EyeCon; when they are touched, sound (or video) events are triggered. The green box around the male dancer in the background of the image is a "dynamic field". This is the simplest form of EyeCon's movement and shape sensing apparatus.

Thus, multiple, fundamentally different parameters of dance can be applied independently and simultaneously.
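EyeCon's internal algorithms are not detailed here, but the two sensing elements just described can be approximated in a few lines. In this hypothetical NumPy sketch, a touchline fires when the dancer's silhouette overlaps a line's pixel mask, and a dynamic field reports the mean inter-frame difference within a rectangular region; all thresholds and data layouts are our assumptions.

    import numpy as np

    def touchline_triggered(silhouette, line_mask, min_pixels=10):
        """Trigger when the silhouette overlaps a touchline.

        silhouette and line_mask are boolean arrays of the frame's shape;
        min_pixels is an invented sensitivity threshold."""
        return int(np.logical_and(silhouette, line_mask).sum()) > min_pixels

    def dynamic_field(prev_frame, frame, region):
        """Amount of movement in a region: mean absolute inter-frame
        difference of 8-bit grayscale frames, normalized to 0..1."""
        y0, y1, x0, x1 = region
        a = frame[y0:y1, x0:x1].astype(float)
        b = prev_frame[y0:y1, x0:x1].astype(float)
        return float(np.abs(a - b).mean() / 255.0)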

The analysis features of the EyeCon video-tracking system include the following six movement parameters (several are sketched in code after the list):

1. Changes in the presence or absence of a body part at a given position in space.

2. Movement dynamics, or amount of movement occurring within a defined field.

3. Position of the center of the body (or topmost, bottommost, left or rightmost part of the body) in horizontal or vertical space.

4. Relative positions (closeness of one dancer to another, etc.) of multiple dancers (using costume color-recognition).

5. Degree of right-left symmetry in the body -- how similar in shape the two sides of body are.

6. Degree of expansion or contraction in the body.
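Several of these measures (parameters 3, 5, and 6 in the list) can be illustrated from a binary silhouette image. The sketch below is an approximation under our own assumptions, not EyeCon's actual analysis code.

    import numpy as np

    def body_measures(silhouette):
        """Approximate parameters 3, 5 and 6 from a boolean silhouette."""
        ys, xs = np.nonzero(silhouette)
        if xs.size == 0:
            return None
        center = (float(xs.mean()), float(ys.mean()))       # parameter 3
        # Parameter 5: overlap of the silhouette with its horizontal mirror
        # (a real system would mirror about the body's axis, not the frame's).
        mirrored = silhouette[:, ::-1]
        symmetry = np.logical_and(silhouette, mirrored).sum() / silhouette.sum()
        # Parameter 6: bounding-box area relative to the frame size, a crude
        # expansion/contraction measure.
        expansion = ((xs.max() - xs.min() + 1) * (ys.max() - ys.min() + 1)
                     / silhouette.size)
        return {"center": center,
                "symmetry": float(symmetry),
                "expansion": float(expansion)}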

4.1.2 Digital Signal Processing (DSP) and Mapping (written within MAX/MSP)

The real-time sound synthesis environment was designed in MAX/MSP by Butch Rovan. A PC running EyeCon is linked to a Macintosh PowerBook running MAX/MSP, sending the gestural data gathered by EyeCon to the real-time sound synthesis parameters.
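The precise transport between the two machines is not specified beyond EyeCon's MIDI monitor window; assuming the gesture data travel as MIDI continuous controllers, the receiving side might look like the following sketch (using the third-party mido library; the controller numbers and stream names are invented).

    import mido  # third-party MIDI library, used here purely for illustration

    # Invented controller-number assignments for the gesture streams.
    CC_TO_GESTURE = {20: "touchline", 21: "dynamic_field", 22: "height"}

    with mido.open_input() as port:            # default system MIDI input
        for msg in port:
            if msg.type == "control_change" and msg.control in CC_TO_GESTURE:
                name = CC_TO_GESTURE[msg.control]
                value = msg.value / 127.0      # normalize 7-bit MIDI to 0..1
                print(f"{name}: {value:.2f}")  # forward to synthesis here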

The MAX/MSP program for "Seine hohle Form" is a musical synthesis environment that provides many control parameters, addressing a number of custom-built DSP modules that include granular sampling/synthesis, additive synthesis, and spectral filtering (see Figure 3).

All mapping is accomplished within the MAX/MSP environment, and changes throughout the work.

Control of the musical score to "Seine hohle Form" is accomplished through a cue list that centrally enables and disables the various EyeCon movement-analysis parameters, mappings, and DSP modules. Both the EyeCon and MAX/MSP software components are organized as a series of "scenes", each describing a unique configuration of video tracking, mapping, and DSP. Scene changes for both computers are synchronized and can be initiated by a single keystroke from either station.
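The cue-list idea can be sketched as a simple data structure in which each scene bundles a tracking configuration, a mapping, and the DSP modules it enables. The field names below are illustrative assumptions; the scene contents loosely follow the examples in section 4.2 below.

    from dataclasses import dataclass

    @dataclass
    class Scene:
        number: int
        tracking: list      # active EyeCon analysis parameters
        dsp_modules: list   # enabled synthesis modules
        mapping: dict       # gesture feature -> synthesis parameter

    cue_list = [
        Scene(1, ["touchlines"], ["additive"],
              {"limb_extension": "tone_pitch_timbre"}),
        Scene(5, ["dynamic_field"], ["granular"],
              {"torso_movement": "grain_parameters"}),
        Scene(8, ["height"], ["spectral_filter"],
              {"height": "filter_shape"}),
    ]

    def advance(index):
        """One keystroke advances both machines to the next scene in sync."""
        return min(index + 1, len(cue_list) - 1)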

4.2 Examples from "Seine hohle Form"

The following description of excerpts from "Seine hohle Form" is certainly not complete. Even within the described scenes there is a good deal more going on than reported here. Nevertheless, it may offer an introduction to our working methods. (NOTE: a RealPlayer movie excerpt of the piece is available at www.palindrome.de.)

The 12-minute piece is divided into 23 scenes (some coincide with clear changes in the choreography, such as the end of the female solo and the beginning of the male solo, and noticeably alter the music, while others are extremely subtle). In the opening scene, the first dancer (female) controls nine relatively clear and isolated additive synthesis tones with the extension of her limbs into the space around her (an example of one-to-one mapping). An algorithm in MAX/MSP modifies the pitch and timbre slightly with each extension. Meanwhile, the second dancer (male), standing with his back to the audience, uses small, whole-body movements to cut off quieter, whiter sounds, which build continuously as long as he is not moving.
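A rough sketch of this scene's one-to-one idea: each limb extension excites one additive-synthesis tone, with the extension value slightly modifying pitch and spectral tilt. The base frequencies, partial counts, and the exact modification algorithm are not published, so every constant here is an assumption.

    import numpy as np

    SAMPLE_RATE = 44100

    def additive_tone(base_hz, extension, dur=1.0, partials=6):
        """One of the nine tones; extension (0..1) slightly shifts the pitch
        and tilts the spectrum. All constants are invented."""
        t = np.arange(int(SAMPLE_RATE * dur)) / SAMPLE_RATE
        hz = base_hz * (1.0 + 0.02 * extension)        # slight pitch change
        tone = sum(np.sin(2 * np.pi * hz * (k + 1) * t)
                   / (k + 1) ** (1.0 + extension)      # timbre: spectral tilt
                   for k in range(partials))
        return tone / partials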

In scene 5 the male dancer manipulates a stream of loud, aggressive sound fragments derived through granular sampling. He activates the sounds through equally aggressive side-to-side torso movements. The speed and dynamics of his movements continuously shape the parameters of the granular sampling engine, with many incoming gesture parameters interacting (an example of convergent mapping).
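In code, the convergent idea might combine several (assumed) torso-movement features into single granular-engine controls, for example:

    def granular_params(speed, sway):
        """Convergent sketch: two assumed torso features jointly set each
        grain parameter; the real feature names and ranges are not published."""
        energy = 0.6 * speed + 0.4 * sway       # several inputs, one control
        return {
            "grain_rate_hz": 5 + 95 * energy,   # denser grains as motion grows
            "grain_dur_ms": 200 - 150 * energy, # shorter, harsher grains
            "position_jitter": 0.1 + 0.9 * energy,
        }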

In scene 8, the male dancer finally rises from his low stance and approaches the audience. Here, his height (the highest body part from the floor) controls the parameters of a real-time spectral filter, producing a thinner and more continuous musical texture the higher he rises. The effect is much subtler and less direct than what has come before, and it lends a sense of disorientation to his part, softening his role after the opening solo and thus opening the way for the female dancer to begin her own solo.
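As a sketch of the height-to-spectrum idea, a spectral filter can be approximated by keeping only the strongest FFT bins of each audio frame, with the dancer's height thinning the spectrum; the piece's actual filter design is not documented here.

    import numpy as np

    def spectral_thin(frame, height):
        """Keep only the strongest FFT bins; height (0..1) thins the spectrum
        as the dancer rises. A stand-in for the actual filter design."""
        spectrum = np.fft.rfft(frame)
        mags = np.abs(spectrum)
        keep = max(1, int(mags.size * (1.0 - 0.9 * height)))
        floor = np.sort(mags)[-keep]           # magnitude of weakest kept bin
        spectrum[mags < floor] = 0.0
        return np.fft.irfft(spectrum, n=frame.size)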

5. Conclusions and Future Work

The basic technical system described in this paper has been operational for almost a year (and has been tested in performances in Munich, Dresden, Nürnberg, and Buenos Aires, at the 2001 conference of the Society for Electro-Acoustic Music in the United States in Baton Rouge, Louisiana, and, most recently, at the International Computer Music Conference (ICMC) in Havana, Cuba). It has, however, become increasingly clear to us that our current process for gestural mapping could be improved by creating a clearer hierarchy among the parameters that govern the relationship between the video-tracking system (EyeCon) and the sound synthesis software (MAX/MSP). In particular, we are working to more clearly segregate the tasks that are assigned to each component of the system.

Of course, making use of the inexhaustible mappings between movement and sound requires an understanding of the different, and potentially conflicting, goals that drive composers and choreographers. In the past, traditional models of collaboration between composers and choreographers have subjugated either dance or music, or sidestepped the question altogether by removing all correlation between movement and sound. In a collaborative work such as "Seine hohle Form", a new opportunity exists, one that results neither in subjugation nor in conceptual abstraction. Rather, this "conflict" in artistic goals is seen in the light of heightened interactivity (in the traditional inter-personal sense), making the work of choreographer and composer inter-dependent rather than contingent; fused instead of segregated.

Acknowledgments

The authors would like to thank CEMI (Center for Experimental Music and Intermedia) and 01plus Institute for Art, Design and New Media, in Nuremberg, Germany, for their assistance and support. Thanks also to Helena Zwiauer and Laura Warren (both of whom danced and contributed to the choreography) and Jon Nelson.

Note

Material from this paper has previously appeared in: COSIGN2001, Conference on Computational Semiotics, Amsterdam, September 2001 (http://www.kinonet.com/cosign2001); ISEA2000, 10th International Symposium on Electronic Art, Paris, France, December 2000 (http://www.isea.qc.ca); the 9th New York Digital Salon (Leonardo Magazine and http://www.sva.edu/salon/); the Body/Machine Conference at York University, October 2001; and Cast01, Conference on Communication of Art, Science and Technology, September 2001, GMD, Schloss Birlinghoven, Sankt Augustin/Bonn (www.netzspannung.org/cast01).

Figures


Figure 1. Camera setup for "Seine hohle Form".


Figure 2. The EyeCon user interface.


Figure 3. Custom-built DSP modules in the MAX/MSP environment.


Authors

Robert Weschler (PALINDROME Inter-media Performance Group, Nürnberg, Germany)
Frieder Weiss (PALINDROME Inter-media Performance Group, Nürnberg, Germany)
Joseph Butch Rovan (CEMI—Center for Experimental Music and Intermedia, University of North Texas, U.S.A.)
