Optik, Performance and Processing

How to Cite

Jarlett, B., 2007. Optik, Performance and Processing. Body, Space & Technology, 6(2). DOI: http://doi.org/10.16995/bst.166
This article describes the collision between live site-specific performance and live digital sound and video processing in the work of Optik. It is an account of how I, as a sound and video designer, have worked with the director Barry Edwards on making performance.

Background

Optik is a London-based performance group directed by Barry Edwards that began touring in 1981. In the work that I have been involved with since the company’s Brazil tour [2] in 2000, Optik use very few or no spoken words, but instead improvise with key human actions and impulse. [3] Curator-analyst Tracey Warr describes the technique as follows:

      Optik explore moving. And
      walking
      running
      colliding
      rocking
      falling
      rolling
      laying
      standing
      sitting
      seeing
      looking
      listening
      feeling
      focusing
      waiting
      deciding
      being
      stopping (Warr, 2003). 

      Walking. Taking a line for a walk. The three Optik performers walk and run in straight lines in dance studios in Sao Paulo and Campinhas. Their lines are moving sculpture in space. They make fleeting connections and collaborations. They fill a space, a void, gaps, with their moving. They fall into entrainment – walking or running together. They mirror each other. They lay down on the spot where someone else has just stood up. They walk to an internal rhythm – a body clock. They invade or do not invade invisible territories – body space, in your face. [4]

Optik performers have to respond to their own impulses, their own urge to act at any given moment. Either act, or remain silent. There need be no reason why they make that decision; there is only a decision to act – and when to do it. They often respond to the actions of other performers by joining in, pushing against them, or operating alone. This process exposes the decisions that a performer makes. As they repeat a movement there is a commitment to the repeat, but a repeat is never quite the same; the differences slowly grow.

In Brazil Optik experimented with telepresencing, transmitting sound and video live in performance between London and Sao Paulo, the percussionist and performers split by the Atlantic but connected by a 54kbps modem. This exerted a digital aesthetic on the group that worked. [5] For the next performance, in Belgrade, Edwards asked me for looping digital sound. [6] Optik had used looped electronic sound before (‘Second Spectacle’, Optik 1982): a tape piece made of a repeating short sound and a repeating long sound, which accompanied the performance as a Kraftwerk-influenced score. I responded to this request by using a live sampling and granular synthesis process.

Granular synthesis is a process first suggested by Iannis Xenakis (1971) [7] and Curtis Roads (1978) [8] whereby sound is considered as a stream of many small ‘grains’ of sound produced between several hundred and several thousand times a second. The idea originates from Dennis Gabor’s (1947) [9] theory of sonic quanta, indivisible units of sound from a psychoacoustic point of view, which can be reversed without perceptual change in quality. At its simplest this process separates the control of a sound's duration (time) from its pitch.
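
The principle can be sketched in a few lines of code. The fragment below is illustrative only (the performances used Granulab and later MaxMSP, not this script); it assumes a mono recording held in a NumPy array and shows how overlapping windowed grains let the read position through a buffer (time) move independently of the playback rate of each grain (pitch).

    import numpy as np

    def granulate(source, sr, duration, grain_ms=40.0, grains_per_sec=200,
                  pitch=1.0, time_stretch=1.0):
        """Minimal granular resynthesis sketch (illustrative, not Granulab).

        duration       -- length of the output in seconds
        grain_ms       -- grain length in milliseconds
        grains_per_sec -- how often a new grain is started
        pitch          -- playback rate of each grain (2.0 = up an octave)
        time_stretch   -- how fast the read position moves through the source,
                          independently of pitch (which is the whole point)
        """
        out = np.zeros(int(duration * sr))
        grain_len = int(sr * grain_ms / 1000.0)
        window = np.hanning(grain_len)              # smooth grain envelope
        hop = max(1, int(sr / grains_per_sec))      # spacing between grain onsets

        for onset in range(0, len(out) - grain_len, hop):
            # the read head advances with time_stretch, not with pitch
            read = int(onset * time_stretch) % max(1, len(source) - grain_len)
            # resample the grain: change its pitch without changing its time slot
            idx = np.clip(read + np.arange(grain_len) * pitch,
                          0, len(source) - 1).astype(int)
            out[onset:onset + grain_len] += source[idx] * window
        return out / max(1.0, np.max(np.abs(out)))  # simple normalisation

    # e.g. an eight-second texture, an octave down, crawling through the buffer:
    # texture = granulate(captured_audio, 44100, 8.0, pitch=0.5, time_stretch=0.1)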

Sound performance

It was my aim to mirror the performers’ technique with this process. When the performance starts the performers often reside in stillness within the space. This would equate to my beginning with nothing in my buffers: until there was a sound I would have nothing to make a sound with. As the performers begin to move, or external sounds present themselves in the space (audience, mobile phones, passing traffic), I would absorb them into memory (using boundary microphones placed in the space) and begin to process them. As a performer might react to another performer’s movement (follow, mirror, resist, ignore), I would do so too. This is important because Edwards did not wish sound to be an accompaniment to the performers, but rather one creative agent among them.

My process used a combination of programs: Steinberg’s Wavelab for capture (any wave editor would have done; this was just the one I was most familiar with at the time), and Rasmus Ekman’s Granulab for processing. [10] Granulab was where my improvisation would take place. I would operate it by mapping its on-screen controls to a slider-rich hardware MIDI controller rather than the mouse, for expressive control.
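
As an illustration of that mapping, and only an illustration (the CC numbers and parameter names below are invented, and the real assignments were simply whatever the controller’s sliders offered), a small sketch using the mido MIDI library might look like this:

    import mido  # third-party Python MIDI library, assumed to be installed

    # Hypothetical slider-to-parameter assignments; the actual Granulab
    # mappings were chosen by hand for each performance.
    CC_MAP = {1: "pitch", 2: "grain_rate", 3: "loop_start", 4: "loop_end"}
    params = {name: 0.0 for name in CC_MAP.values()}

    def scale(value, lo=0.0, hi=1.0):
        """Map a 7-bit MIDI value (0-127) onto a parameter range."""
        return lo + (value / 127.0) * (hi - lo)

    with mido.open_input() as port:                # default MIDI input port
        for msg in port:                           # blocks, yielding messages
            if msg.type == "control_change" and msg.control in CC_MAP:
                params[CC_MAP[msg.control]] = scale(msg.value)
                # a real patch would push the new value into the granular
                # engine here rather than just storing it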

When a sound is played in Granulab I have separate control over pitch and time. Time refers to how fast a buffer is cycled through (looped), how often a grain is produced, and the beginning and end loop points. Pitch refers to how fast individual grains are played back (forwards or in reverse). Other controls affect the grain envelope, stereophonic placement and random variation of parameters. This level of control offers great expressivity: sudden changes, slowly evolving textures (loops that change gradually over time).
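
Summarised as a parameter record (again illustrative, not a description of Granulab’s internals), the control set looks something like this:

    import random
    from dataclasses import dataclass

    @dataclass
    class GrainSettings:
        """Illustrative echo of the controls described above."""
        loop_start: float = 0.0    # seconds into the captured buffer
        loop_end: float = 2.0      # seconds; loop_start..loop_end is cycled
        loop_rate: float = 1.0     # how fast the read head cycles (time)
        grain_rate: float = 200.0  # grains produced per second (time)
        pitch: float = 1.0         # per-grain playback speed; negative = reverse
        pan: float = 0.5           # stereophonic placement, 0 = left, 1 = right
        pitch_jitter: float = 0.0  # random spread applied to each grain
        pan_jitter: float = 0.0

    def per_grain(s: GrainSettings):
        """Draw the randomised pitch and pan for one grain."""
        pitch = s.pitch + random.uniform(-1, 1) * s.pitch_jitter
        pan = min(1.0, max(0.0, s.pan + random.uniform(-1, 1) * s.pan_jitter))
        return pitch, pan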

Deciding to capture a sound equated to making a new action. In repeating this sound it would evolve as a performer’s action would. I could change from stillness to walking, to running, to chaos (or the sonic equivalent) as a performer could. I had to choose, as a performer does, what to do and when. My actions were felt by the performers, and became another influence in their process.

I later transferred my processing to Cycling 74’s MaxMSP, [11] at first utilising the Granular Toolkit [12] written by Nathan Wolek, and later using my own synthesis engine. Using Max I gained the ability to create bespoke interfaces, ones aimed specifically at the Optik process. Integrating capture and playback into one application significantly reduced the time between deciding to do something and doing it. I soon found that being able to capture and play back simultaneously offered another rewarding possibility: it would produce a long feedback delay effect, allowing me to build loops within the space, using the space.
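
That simultaneous capture-and-playback idea can be sketched as a single circular buffer that is written to and read from at the same time. This is a sketch of the principle, not the MaxMSP patch itself:

    import numpy as np

    class FeedbackLooper:
        """Write live input into a circular buffer while reading it back with
        a long delay and feedback, so the room gradually loops over itself."""

        def __init__(self, sr, delay_seconds=4.0, feedback=0.7):
            self.buffer = np.zeros(int(sr * delay_seconds))
            self.pos = 0
            self.feedback = feedback

        def process(self, block):
            """Process one block of incoming audio samples."""
            out = np.empty_like(block)
            for i, x in enumerate(block):
                delayed = self.buffer[self.pos]   # what happened one delay ago
                out[i] = delayed
                # capture the live input plus a decaying copy of the past
                self.buffer[self.pos] = x + delayed * self.feedback
                self.pos = (self.pos + 1) % len(self.buffer)
            return out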

Video Sampling

Video sampling was introduced into the performance with the addition of Howie Bailey to the group. He would use VJing software (Resolume [13] and Visual Jockey [14]) to capture video from a camera, process it through various real-time plug-ins and project it into the performance space. [15] This process is similar to mine, in that nothing is brought to the space, and great textural dynamics are possible through digital processing. The difference was that it moved closer to the role of the audience: video became the first observer of the actions. As the audience you could now not only observe the performers but also observe what Bailey was observing of them, which was often the close-up details of their expressions and movements. This was true in some way of the sound too, but it became more apparent, and perhaps more important, with the visual. The performers themselves would not have the same awareness of the video as the audience, and would not interact with it as freely as they did with the sound. Video took the role, as it often does, of holding and sustaining moments in the performance, extracting the details that the observer might otherwise miss.

‘Xtasis’, Optik plus guests Temenos, video processing by H Bailey, Black Watch Drill Hall, Montreal, Canada. Photo by Alain Décarie

Dust

Over the past year Edwards has taken the work of Optik in a new direction. In the initial experiments (which resulted in the creation of the short film Dust) he began to introduce physical objects into the performance, and with these objects came a prescribed shape. In particular, the objects took the form of one ton of house bricks. The shape prescribed by Edwards was for the five actors to set about moving the bricks from their initial state (a disorganised pile) to the other side of the space, where they would create a structure from them.

The experiment continued to expose the choices and actions of the performers to the audience. I created a new system using MaxMSP, Jitter [16] and Auvi [17] plug-ins. With this system I extended the role that Bailey began: I was now using this technology to extend moments of movement and sound.

I still operated as a performer, working with this material, making choices, responding to the other performers. The performers now had a purpose, but I still had a connection with them. I was able to build textures over time that increased tension, made them work harder, move faster or slower, or sustain a dynamic.

The video processing I used was based around frame differencing: subtracting the previous frame from the current frame so that only the change remains. This meant I was sampling movement rather than the image itself. This simplification in processing still retained the important elements that Bailey was working with. I found that my focus would shift from sound to video and back again. I had technically integrated the two processes, but they remained side by side.
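
The technique itself is simple enough to sketch with OpenCV (the performance system was built in Jitter with Auvi plug-ins, not in Python; this only shows the principle):

    import cv2  # OpenCV, assumed to be installed

    cap = cv2.VideoCapture(0)                      # live camera feed
    ok, prev = cap.read()
    if not ok:
        raise RuntimeError("no camera available")
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # subtract the previous frame from the current one: anything static
        # cancels out, and only the movement survives
        diff = cv2.absdiff(gray, prev)
        prev = gray
        cv2.imshow("movement", diff)
        if cv2.waitKey(1) & 0xFF == 27:            # Esc to stop
            break

    cap.release()
    cv2.destroyAllWindows()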

Stills from ‘Dust’. Editor: Nada Stevanovic

Shiver

The next development in this experiment was a collaboration between Edwards and the poet Andrea Brady. [18] Brady was asked to choose 20 words as a starting point and inspiration for making performance. The subsequent performance was described as ‘the human story that weaves its way around these words’. [19] The bricks became props. The words were supplied to the audience as a booklet: an invitation to read the words together with Andrea’s commentary on their meanings and associations through history.

A guitarist, Luke Edwards, collaborated with me to create the soundtrack. He composed melody and chord progressions which, like the words and the bricks, had a set place within the performance. I processed and repeated his sound, using delays, granular effects, comb filters and modulation.
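
Of those effects, the comb filter is the easiest to show in miniature. The sketch below uses invented settings, not the values from the performance:

    import numpy as np

    def comb_filter(signal, sr, delay_ms=12.0, feedback=0.8):
        """Feedback comb filter: each output sample is the input plus an
        attenuated copy of the output from delay_ms earlier, giving a
        resonant, 'tuned' colouring of the source material."""
        delay = max(1, int(sr * delay_ms / 1000.0))
        out = np.zeros(len(signal))
        for n in range(len(signal)):
            fed_back = out[n - delay] if n >= delay else 0.0
            out[n] = signal[n] + feedback * fed_back
        return out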

This work had a set structure and narrative; it was not improvised. The timing of the performance, however, was not set; the work still drew a great deal from previous work, especially in relation to human action and impulse. Emotion and action initiated by the 20 words were explored until their resolution. The words themselves were never said, but were present. The sound score was triggered by transitions between sections that could only be felt, not known. The performers too had to work from dynamic transitions in the sound that were left undefined in pitch, timbre or tempo; they were changes, or builds, that existed as composition but were never the same twice. There was no written score to follow.

A film version of ‘16.Shiver’ is currently being planned, incorporating footage from the live performance with new text from Andrea Brady written for a solo performer. 
 

References & Bibliography

[1] Originally presented as a paper at Collision 2006 – Symposium on Inter-arts Research and Practice, University of Victoria, Canada, September 2006.

[2] In the presence of people, 19/10/00, Optik, performed simultaneously (internet tele-presence performance) at Kompanhia Teatro Multimedia, Sao Paulo, Brazil and Brunel University, Uxbridge, UK.

[3] For a more detailed account of Optik’s use of sound and performer practice, including the earlier cycle of work from 1981 onwards, see Edwards, B. and Jarlett, B., ‘Body Waves Sound Waves: Optik Live Sound and Performance’, in Broadhurst, S. and Machon, J. (eds), Performance and Technology, Palgrave Macmillan, 2006.

[4] Warr, T., ‘A Moving Meditation on a Dead Line’, Performance Research Journal, Routledge, December 2003.

[5] See Edwards, B., ‘A Tele-Presence Experiment: Optik in Sao Paulo and London’, BST Journal, Vol 1 No 2 [http://people.brunel.ac.uk/bst/vol0102/index.html].

[6] Taking Breath, 20-21/10/01, Optik, Studentski Kulturni Centar (SKC), Belgrade, Serbia.

[7] Xenakis, I., Formalized Music: Thought and Mathematics in Composition, Bloomington: Indiana University Press, 1971.

[8] Roads, C., ‘Granular Synthesis of Sound’, Computer Music Journal, 2(2): 61-62, 1978.

[9] Gabor, D., ‘Acoustical quanta and the theory of hearing’, Nature, 159(4044), 591-594, 1947.

[10] Granulab can be found at http://hem.passagen.se/rasmuse/Granny.htm

[11] Cycling 74’s MaxMSP can be found at http://www.cycling74.com

[12] Nathan Wolek’s Granular Toolkit can be found at http://www.nathanwolek.com/

[13] Resolume can be found at http://www.resolume.com

[14] Visual Jockey can be found at http://www.visualjockey.com

[15] See Smart, J., ‘Meditation on Space: Talking to Optik’, BST Journal, Vol 4.

[16] Cycling 74’s Jitter can be found at http://www.cycling74.com

[17] Auvi plug-ins, written by Kurt Ralske, can be found at http://auv-i.com/

[18] See Andrea Brady’s published work at www.barquepress.com

[19] Programme notes to ‘16.Shiver’, 2006. ‘16.Shiver’ was performed on 11 June 2006 at Theatre Technis, London, an Optik production in partnership with piLab, supported by PI Network.


Ben Jarlett

Ben Jarlett MSc BEng is a research assistant at Brunel University and is undertaking PhD research at Bath Spa University into programming and interface design for computer-based improvisation and live performance using Max, MSP and Jitter. He has been performing with Optik since 2000.
