Most of the processing of sensory information in our brains is thought to take place in the cerebral cortex, which contains billions of neurons, each making thousands of synaptic connections with its neighbours. A single neuron can be thought of as a cell which sends a travelling spike signal to its connected neighbours when the voltage on its membrane exceeds a certain threshold, a process called ‘firing’. A neuron receiving several spikes simultaneously (or within a very small time window) is likely to have its voltage pushed beyond the threshold and will therefore, in turn, send spike signals to its own connected neighbours. Furthermore, connections (synapses) which contribute to firing tend to become potentiated, while those which do not tend to become depressed, a phenomenon known as ‘synaptic plasticity’. There are also many varieties of spiking behaviour, including ‘regular spiking’ and ‘bursting’ (short bursts containing many spikes very close together in time). The dynamics of millions of such adaptive, interconnected neurons thus provide extremely rich behaviour, especially at a collective level of description [for a more comprehensive introduction see, for example, Gerstner and Kistler 2002], and patterns of firing regularly occur in large groups of neurons. The following diagram, taken from a mathematical simulation of a group of 1000 coupled neurons [Izhikevich et al. 2004], shows an example of such collective firing behaviour.
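The kind of spiking dynamics described above can be sketched with a minimal leaky integrate-and-fire network. This is a simplified illustration with hypothetical parameters, not the Izhikevich model used in the cited simulations:

```python
import random

# Minimal leaky integrate-and-fire network: a sketch of collective
# spiking dynamics, with hypothetical parameters (not the Izhikevich
# model used in the simulations cited above).
def simulate(n_neurons=50, steps=1000, threshold=1.0, leak=0.95,
             coupling=0.2, drive=0.12, seed=1):
    random.seed(seed)
    # sparse random excitatory connections between the neurons
    weights = [[coupling if random.random() < 0.1 else 0.0
                for _ in range(n_neurons)] for _ in range(n_neurons)]
    v = [0.0] * n_neurons        # membrane voltages
    events = []                  # (time_step, neuron_index) firing events
    for t in range(steps):
        fired = [i for i in range(n_neurons) if v[i] >= threshold]
        for i in fired:
            v[i] = 0.0           # reset the voltage after a spike
            events.append((t, i))
        for j in range(n_neurons):
            # leaky integration of a random input drive, plus spikes
            # arriving from the neurons that have just fired
            v[j] = v[j] * leak + random.uniform(0, drive)
            for i in fired:
                v[j] += weights[i][j]
    return events

events = simulate()   # each event is one 'dot' on a raster plot
```

Plotting each `(time_step, neuron_index)` pair as a dot gives a raster diagram of the kind shown below.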
The neurons are numbered on the y-axis (with neuron number 1 at the bottom and neuron number 1000 at the top), and time, which runs from zero to 1000 milliseconds (one second), is on the x-axis. The diagram therefore shows one second’s activity of this group of 1000 artificial neurons. Every time a neuron fires, a blue dot is placed on the graph at the appropriate time on a horizontal line drawn from that particular neuron, so the dots on the graph can be regarded as firing ‘events’. In the particular graph shown, because of the plasticity of the neural connections, many of the events are centred in four bands, which appear as a pulse or ‘wave’ of spiking events in real time.
The spiking events are indeterminate (not predictable in advance) but are certainly not random and, as in the scenario above, can be highly correlated. A rhythmic pattern such as the one pictured above is very likely to be connected with the ‘polychronous’ firing of a particular group of neurons [Izhikevich et al. 2004], in which the firing of one neuron generates a sequence of events stimulating a large, closed group of neurons, with the connections between them reinforced through the repeated firing of the first neuron (the group fires not in synchrony but with polychrony). It is the musicality of these sequences of firing events, or rhythms (‘cortical songs’ [Ikegaya et al. 2004]), which has become of great interest to a group of interdisciplinary researchers and practitioners at the University of Plymouth’s interdisciplinary Centre for Computer Music Research (including Eduardo Miranda, John Matthias, Jane Grant, Tim Hodgson, Andrew Prior and Kevin McCracken). We have begun to develop a methodology in which the firing times of the neurons trigger either sonic events or instructions for performers to perform sonic (or musical) events.
The following sound recording, for example, takes as its source a recording from a microphone dragged across the pages of a book. Every time a neuron fires in the computer program, a small grain of sound (typically of the order of 100 ms long) is taken at random from the source recording and placed in the output file at the time of firing:
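The grain-placement process just described can be sketched as follows. This is a minimal sketch, assuming a 44.1 kHz sample rate and signals held as plain lists of float samples; it is not the actual Plymouth software:

```python
import random

def neurogranular_sample(source, firing_times, grain_len=4410,
                         out_len=441000, seed=0):
    """Mix a randomly chosen grain of `source` into the output at each
    firing time.  grain_len=4410 is roughly 100 ms at an assumed
    44.1 kHz sample rate."""
    random.seed(seed)
    out = [0.0] * out_len
    for t in firing_times:
        # pick a random grain-sized excerpt of the source recording
        start = random.randrange(0, max(1, len(source) - grain_len))
        for k in range(grain_len):
            if t + k < out_len and start + k < len(source):
                out[t + k] += source[start + k]   # mix grain into output
    return out

# illustrative usage, with a constant test signal standing in for the
# book-page recording
source = [1.0] * 44100
out = neurogranular_sample(source, firing_times=[0, 5000],
                           grain_len=100, out_len=10000)
```

In practice the grains would also be given short amplitude envelopes to avoid clicks at their boundaries.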
We have developed three approaches: (1) triggering sound grains (packets of sound with durations of approximately 10-100 ms [Roads 2002]) from a recorded sample every time a neuron fires (every time there is a blue dot in a diagram like the one above), as in the recorded example above, which we call ‘Neurogranular Sampling’ [Miranda and Matthias 2005]; (2) triggering an array of oscillators which control a granular synthesizer, which we call ‘Neurogranular Synthesis’ [Murray, Miranda and Matthias 2006]; and (3) triggering light actuators via MIDI, which are interpreted via a score by live performers [Matthias and Ryan 2007; Ryan et al. 2007].
In this article we concentrate on the last of these approaches, the use of neuron firing events to trigger musical performers, by describing the composition of ‘Cortical Songs’, a work for 24-piece string orchestra and solo violin which we completed earlier this year. It was commissioned by the University of Plymouth for ‘Voices 2’, a festival of contemporary music which took place in Plymouth, UK, in February 2007. The piece was premiered in St. Andrew’s Church, Plymouth, by the Ten Tors Orchestra, conducted by Simon Ible, with John Matthias as the solo violinist [Matthias and Ryan 2007].
There were two starting points for the development of the piece: one was the idea of the song, and the other was the idea of generating melodies and textures using neuronal network firing events, which might form rhythmic patterns or ‘cortical songs’. Two early sketches of short song-like pieces gave us an indication of the kinds of harmonies and timbres we wanted to use during the development of the piece. The first is played on violin with string pizzicato accompaniment (the commission was for a piece for string orchestra); some of the textural background is provided by the neurogranular sampler mentioned above, in this case triggering grains of sound taken from the recording of a microphone dragged across the pages of a book. Other textural accompaniment is provided by a physical modelling synthesizer.
The second sketch recording involved a similarly written song-like melody over an ostinato-like bass line, with a rhythm inspired by, but not taken from, some of the neuronal simulations we had performed using the computer and the mathematical modelling techniques:
We decided to make these sketches the basis of the first two movements of the piece and base the process of writing the third and fourth movements directly on the neuronal firing times of groups of artificial cortical neurons. We also decided to use a similar method to link all the movements to provide an underpinning theme.
Beginning with the idea that the rhythmic firings of a spiking group of neurons could initiate rhythm in the context of the previous two sketches, we began to develop melodic content. The most successful experiments began with a series of notes and a performance ‘rule’ that the note should change when one of the neurons in the group fired. In the following example the notes were G, C#, D, G#, G, D# and D; in one particular simulation, the following melody occurred:
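The note-change rule itself can be sketched as a simple mapping from firing times to notes. The firing times below are illustrative; they are not taken from the actual simulation:

```python
# The note row used in the experiment, and the rule: the sounding note
# advances to the next note in the row each time any neuron in the
# group fires.
NOTE_ROW = ["G", "C#", "D", "G#", "G", "D#", "D"]

def melody_from_firings(firing_times, note_row=NOTE_ROW):
    """Return (time, note) pairs, one note change per firing event."""
    return [(t, note_row[i % len(note_row)])
            for i, t in enumerate(sorted(firing_times))]

# illustrative firing times (in milliseconds, hypothetical values)
melody = melody_from_firings([120, 340, 350, 900])
# the first four events map onto G, C#, D and G# in order
```

The rhythm of the resulting melody is thus inherited directly from the spike timing of the network, while the pitch content stays within the chosen note row.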
This process was repeated on two other occasions with different but harmonically related notes, resulting in the following three-part harmonic structure, which in the following recording is repeated three times:
Following this experiment we made two decisions. Firstly, structures similar to that in the previous recording made an ideal framework for overlaying an improvised tune: a single instrument (in this case a violin) could make sense of the structurally formed parts and knit them together in an improvisationally composed fashion. Secondly, we decided that it would be desirable to make the neural firing visible to the performers in some way, so that any ‘rules’ pertaining to the neural firing could be instigated in real performance time; a performer would then know when a neuron had fired and could act upon that event in a particular preconceived way (e.g. a rule might be ‘change the note that you are playing’).
With this latter decision in mind, we constructed an interface between the computer program containing the mathematical/biological spiking neuron model and a set of LED lights. The idea was to have a single LED light connected to each neuron. As there were 24 players in the string ensemble, we constructed a model with 24 neurons, each connected via a MIDI interface to an LED light. When one of the neurons fired, its connected light flashed. In performance, each member of the string orchestra had a musical part to follow in addition to the flashing light, with the score containing both composed notes and instructions about what to do when the lights flashed. In order to decide what the performers should do when their lights flashed, we conducted a series of musical experiments with a string quintet made up of the desk leaders of the Ten Tors Orchestra, conducted by Simon Ible.
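The neuron-to-LED mapping can be sketched as follows: each of the 24 neurons is tied to one MIDI note number, and a firing event becomes a note-on message which the interface turns into a flash. The channel and base note numbers here are illustrative assumptions, not the values used in the actual performance setup:

```python
# Hypothetical mapping: neuron 0 maps to MIDI note 36, all messages
# on channel 0.  A firing event becomes a raw note-on message.
BASE_NOTE = 36
CHANNEL = 0

def firing_to_midi(neuron_index, velocity=127):
    """Raw three-byte MIDI note-on message for one neuron's firing."""
    status = 0x90 | CHANNEL          # note-on status byte
    return bytes([status, BASE_NOTE + neuron_index, velocity])

# one distinct message (and hence one distinct light) per player
messages = [firing_to_midi(i) for i in range(24)]
```

Sending each message to a MIDI-controlled lighting interface then flashes the LED belonging to the neuron that fired.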
Several of the experiments were successful in the context of the previously developed music. These included tapping the instruments on seeing a flashing light, and changing notes taken from a theme in the earlier improvised violin part. In the following extract from the tests with the quintet, the instrumentalists were asked to play a G and then move a quarter-tone sharp on seeing a light flash for the first time, return to the original note on the second flash, move a quarter-tone flat on the third flash, return to the original note on the fourth flash, and so on.
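The quarter-tone rule just described amounts to a small cyclic state machine, which can be sketched as:

```python
# Each light flash moves the player one step through a four-state
# cycle: G -> quarter-sharp -> G -> quarter-flat -> G -> ...
CYCLE = ["G", "G quarter-sharp", "G", "G quarter-flat"]

def pitch_after_flashes(n_flashes):
    """Pitch held after seeing n light flashes (starting on G)."""
    return CYCLE[n_flashes % len(CYCLE)]
```

Because the flash times come from the spiking network, the players traverse this cycle at rhythmically correlated but unpredictable moments.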
The most successful parts of the experiments with the string quintet were then integrated into the full score, which comprised four movements. The triggering patterns were unknown to the performers, as they depended on what was happening in the network at the time of performance. The recording of the performance is reproduced below, with the kind permission of the orchestra and its director, Simon Ible.
The first two movements contain elements of the song structures shown earlier. The third movement is largely constructed through the rule-based score and the light triggers alone, and the fourth movement is based on the initial experiment with the improvised violin, described above. A recording of the piece, including several remixes, will be released by NonClassical records in spring 2008. We hope to develop many of the processes initiated here to make new pieces, and we are also keen to work with musicians and performers to develop both the software and the compositional, performance and recording processes. The neurogranular sampler, for example, is being adapted by artist Jane Grant, with research assistant Tim Hodgson (financially supported by the AHRC), to develop a new work.
Roads, C. (2002). Microsound. Cambridge, MA: MIT Press.
Miranda, E. R. and Matthias, J. R. (2005). "Granular Sampling using a Pulse-Coupled Network of Spiking Neurons", Proceedings of EvoWorkshops 2005, Lecture Notes in Computer Science 3449, pp. 539-544. Berlin: Springer-Verlag.
Murray, J., Miranda, E. R. and Matthias, J. (2006). "Real-time granular synthesis with spiking neurons", Proceedings of Consciousness Reframed - 8th International Conference, University of Plymouth, Plymouth, UK. (Invited talk)
Matthias, J. R. and Ryan, E. N. (2007). Cortical Songs, live performance, St. Andrew's Church, Plymouth, 24 February 2007. Ten Tors String Orchestra.
Ryan, E. N., Matthias, J. R., Prior, A. and McCracken, J. K. (2007). SEMPRE conference on computer-aided composition.
Izhikevich, E. M., Gally, J. A. and Edelman, G. M. (2004). "Spike-timing dynamics of neuronal groups", Cerebral Cortex, 14:933-944.
Ikegaya, Y., Aaron, G., Cossart, R., Aronov, D., Lampl, L., Ferster, D. and Yuste, R. (2004). "Synfire chains and cortical songs: Temporal modules of cortical activity", Science, 304:559-564.
Gerstner, W. and Kistler, W. (2002). Spiking Neuron Models: Single Neurons, Populations and Plasticity. Cambridge University Press.
John Matthias is a physicist, musician, composer, singer and song-writer. He has collaborated with many recording artists including Radiohead, Matthew Herbert and Coldcut and has performed extensively in Europe including at the Pompidou Centre in Paris and at the Union Chapel in London. His first solo studio album, ‘Smalltown, Shining’, was released by Accidental Records (Lifelike) in 2001 and was described by Time Out (London) as one of the first examples of a new genre of song-writing. He is releasing a second album of songs on the Ninja Tune label in spring 2008. He has also contributed as a violinist to the scores of many albums, television programs and two feature films. He is a lecturer in the Faculty of Arts at the University of Plymouth and is published by Westbury Music Ltd.
Nick Ryan is a composer, producer and sound designer. In 2004 he won a BAFTA for ‘The Dark House’, a groundbreaking interactive radio drama broadcast live on BBC Radio 4 in September 2003; it is the first British Academy Award ever to be given for radio. He has composed extensively for film and television, including the Film Four short ‘Spin’ (winner at the Edinburgh International Film Festival and released in Curzon Cinemas) and the multi-award-winning 35mm short ‘Starched’ (Best Short Film, Milan Film Festival 2003). As a music producer, Nick has collaborated with many recording artists including Vanessa Mae and Youth for EMI Records. In 2003, Nick co-authored a British Government report on the future of digital music, touring the US West Coast on a DTI trade mission.