Publication: M
Composer: Philippe Leroux
Date of composition: 1997
Duration: 15 minutes
Publisher: Billaudot, Paris
Commission: Südwestfunk Baden-Baden (Southwest Radio) for the Donaueschingen Festival and IRCAM-Centre Pompidou
Dedication: Carl Faia
Instrumentation: 2 pianos, 2 percussion, hardware- and software-based (Max) samplers
Premiere: October 18, 1997, Donaueschingen Festival, Germany, by Ictus: Jean-Luc Fafchamps and Jean-Luc Plouvier, pianos; Miguel-Angel Bernat and Gerrit Nulens, percussion; direction: Georges-Elie Octors.
Studio: IRCAM
Commercial recording: Nocturne: Soupir Edition, 2004 (Leroux, 2004)
(Taken from Collaborative Computer Music Composition and the Emergence of the Computer Music Designer: http://bura.brunel.ac.uk/handle/2438/11917)

Introduction to the work and some basics

The following is a rough English translation of the program note that we wrote for the premiere of the work, found at the previously cited reference:

For the realisation of the computer part, we began by analysing some piano resonances using the software AudioSculpt and Patchwork, developed at IRCAM. These resonances are obviously rich in partials, sometimes strangely inharmonic (all sounds were initially analysed for their fundamentals) and dynamic (from the attack of the sound, with all its transients, to its decay, where no more than one or two partials remain, sometimes very far from the fundamental). We then isolated the internal harmonies and transitions: these were used as new source materials, allowing for various types of interpolations made with the program Diphone, developed at IRCAM by Xavier Rodet and Adrien Lefèvre. These harmonies were finally used to create passages of pure synthesis using, in part, Csound. Thus, the synthesised sounds of the electronics are also modified by morphing re-synthesis of instrumental analyses (percussion and piano); for example, one can hear synthesised sounds built from partials isolated from glockenspiel analyses, or cross-syntheses of sinusoids with sounds derived from interpolations of complex piano resonances.
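To make the additive principle behind this re-synthesis concrete, here is a minimal sketch in Python (standing in for the Csound and IRCAM tools actually used); the partial values and the decay envelope are invented for illustration, not taken from the work's analyses:

```python
import numpy as np

def additive_resynth(partials, duration=2.0, sr=44100):
    """Sum a bank of sinusoids from (frequency_hz, amplitude) pairs.

    A toy stand-in for the additive re-synthesis stage described above;
    real analysis data carries time-varying frequency/amplitude envelopes
    per partial rather than the static values used here.
    """
    t = np.arange(int(duration * sr)) / sr
    out = np.zeros_like(t)
    for freq, amp in partials:
        # A simple exponential decay so the result resembles a struck resonance.
        out += amp * np.exp(-3.0 * t) * np.sin(2 * np.pi * freq * t)
    peak = np.max(np.abs(out))
    return out / peak if peak > 0 else out

# Hypothetical partials loosely modelled on an inharmonic low-piano spectrum.
resonance = additive_resynth([(34.6, 1.0), (69.9, 0.6), (105.8, 0.4), (143.2, 0.2)])
```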

This is a work for two pianos and two percussionists, an ensemble made common by the popularity of Bartók's Sonata for Two Pianos and Percussion (1937), but here with electronics commissioned by and developed at IRCAM. Leroux had studied at the Paris Conservatoire and had already passed through the studios of the GRM (Groupe de Recherches Musicales in Paris) before arriving in residence at IRCAM, though I did not know his music and was unaware of his electroacoustic work before I was assigned the project. While I did get to know the composer's acoustic work, he never presented his previous electronic pieces to me. The newly commissioned work was nascent; nothing was yet written, and this is usually the best place to start with a composer.

As this was my first collaboration with a composer at IRCAM, I learned the typical steps such a collaboration follows, beginning with meeting the composer and discussing the work in detail. These early meetings would involve technical discussions, as well as a certain social aspect that is not definable: working out the technical and practical understanding the composer has of electronics, understanding what the composer wants, and already trying to build a glossary of usable definitions for non-technical descriptions of sound (like saying "really soft" for pp): what does blue metal sound like? Then there are stages of studies or examples created to hear and explore. Sometimes this can feel a little like showing off your trick pony while the buyer decides whether he wants that one or not.

In any event, there follows a period of gestation and writing or realisation, by the composer and myself, of decided or probable materials. This might take the form of isolated materials: a manipulated marimba sample, say, transposed to an extreme degree and then mixed with another resonance. In this case, once the basis of the sounds was worked out, the composer would come in with pages written and we would make a demo of the electronic sounds even before we were sure of the treatments needed to get the sounds we wanted. ProTools was used for this period of the collaboration.

Once the different parts are ready, there is a period of intense realisation that leads to a concert or first performance. This includes creating the final concert form of the Max patch (while this would not be the case for every piece created at IRCAM, it was for mine). It is often at this juncture that choices must be made about whether we can continue to explore a sound, an effect or other ideas, or whether it is too precarious to continue in light of the oncoming deadline. It is fine to start a project with everything possible, as I like to do, but the process, or ritual, of making a Max patch to combine all the necessary parts creates a de facto filter. Anything that cannot be included in the allowed time is left behind for another day, or not at all.

The earlier I can get in on a project, the better for the final work: an early understanding of its needs and limits allows the composer to create with the electronics rather than add them in later. This is not always possible, but my experience has shown that when circumstances permit this close collaboration, the final work is generally more authentic and more successful.

2.1.2 Research topics and major aspects of the work

While many aspects of the work would involve different synthesis and audio-treatment techniques, the work was heavily influenced by the analysis/re-synthesis research at IRCAM. Xavier Rodet and his team had developed a number of techniques and programs for analysing and processing recorded voice. These were mostly Unix-based binary programs, created by the researchers themselves and designed for highly specific tasks, but a new program was being developed around these techniques. The Macintosh developer Adrien Lefèvre had been tasked with creating a GUI on the Macintosh that would enable users without programming skills to create their sounds with these techniques. I was assigned as coordinator between the users (mostly Musical Assistants) and the developers because of my early adoption of the analysis/synthesis techniques in composition. This role is, incidentally, also highly collaborative. Other important aspects of this work included the concert Max patch developed and programmed for the performance, and working out the general complications of the audio equipment needed to interact and perform in the work. This was a period of transition in the audio world, and many aspects of audio and MIDI interfacing and computer-based workstations were in constant evolution. Deciding what we could and should use for the performance would impact not only the immediate playability of the piece but also its longevity and portability.

Nevertheless, the most exciting research aspect was the rather unexpected discovery that we could morph complex sounds. Morphing had been used on images through computer animation techniques, but sound morphing was new. There had been various attempts, and within a few years there would even be commercial plugins that made it possible in some ways, but not like what we would be doing.

The technique had already been used in the 1994 film Farinelli to create a hybrid voice, that of the (now non-existent) castrato singer, by combining the voice of a soprano with that of a countertenor. In brief, this was done by analysing the respective voices and then combining them in intricate combinations and sequences to create a believable, though artificial, voice. This technology was at the centre of Diphone, and it is what we would be using, in its earliest stages of development, to create complex instrumental audio morphing.

2.1.2.1 Analysis/re-synthesis and the development of Diphone

After discussions and various experiments, many of which ended up in the final work, it was decided that the main focus would be a series of chords realised from the analysis data of single sampled low piano notes. These chords, played on an acoustic piano, would in turn be recorded and analysed with the Additive program, which produced instantaneous frequency, amplitude and phase information in a text-file format. This was the first step of the process in which Diphone would be programmed to take the analysed data, in the form of dynamic partials, and morph them from one sound segment (or phone) to another. It was a long and slow process and would eventually cause several problems for the IRCAM systems administrator: I was doing the bulk of this work on the mainframe computer, which meant that during my processing any other user on the system was reduced to a fraction of a percent of the processor, and checking email could take several minutes instead of seconds. This is where my ignorance showed; I was quickly corrected, and just as quickly learned to schedule the necessary analyses during the late-night hours when I would not bother other users.
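Diphone's actual interpolation engine was far more elaborate (handling phase, partial matching and dictionaries of segments), but the core idea of morphing analysis data from one phone to another can be sketched roughly as follows. The index-based pairing of partials and the geometric frequency interpolation here are my own simplifying assumptions, not Diphone's algorithm:

```python
import numpy as np

def morph_partials(phone_a, phone_b, alpha):
    """Interpolate matched partials between two 'phones'.

    phone_a, phone_b: arrays of shape (n_partials, 2) holding
    (frequency_hz, amplitude) rows, assumed already paired by index.
    alpha: 0.0 yields phone_a, 1.0 yields phone_b.
    Frequencies are interpolated geometrically (perceptually more even
    than linear steps in Hz); amplitudes are interpolated linearly.
    """
    a = np.asarray(phone_a, dtype=float)
    b = np.asarray(phone_b, dtype=float)
    freqs = a[:, 0] * (b[:, 0] / a[:, 0]) ** alpha
    amps = (1.0 - alpha) * a[:, 1] + alpha * b[:, 1]
    return np.column_stack([freqs, amps])

# Sweep alpha over time to morph one analysed segment into another
# (hypothetical partial sets, for illustration only).
piano = [(32.7, 1.0), (66.1, 0.5)]
glock = [(1046.5, 0.8), (2793.0, 0.3)]
trajectory = [morph_partials(piano, glock, a) for a in np.linspace(0.0, 1.0, 10)]
```

Each interpolated frame would then feed an additive re-synthesis stage like the one sketched earlier, producing the audible transition from one sound to the other.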

The following figures show typical analysis and re-synthesis captures from the production period. In Figure 1, the IRCAM program AudioSculpt is being used to analyse and display the time-domain and frequency-domain graphs of a C#1 piano sample. Markers have been placed in the sonogram to delineate the harmonic evolution of the partials. The red horizontal lines mark the most prominent partials, which were exported as frequency/amplitude pairs to be used as harmonic material for the composition of the work and to create further complex chords for analysis, re-synthesis and morphing. Many samples were analysed and processed in this manner. I would then use the IRCAM program Patchwork (Figure 2) to import, filter and normalise the partials into semitone-adjusted chords, as shown in Figure 3. These chords would eventually be used by the composer in the composition of the work.
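As a rough illustration of that Patchwork step (in Python rather than Patchwork, and with the amplitude filtering and duplicate removal of the real patches left out), quantising exported partial frequencies to the nearest semitone might look like this:

```python
import math

def freq_to_nearest_semitone(freq_hz, a4=440.0):
    """Snap a partial frequency to the nearest equal-tempered pitch.

    Returns (midi_note, quantised_freq_hz). A rough analogue of the
    Patchwork step that normalised partials into semitone-adjusted chords.
    """
    midi = round(69 + 12 * math.log2(freq_hz / a4))
    return midi, a4 * 2 ** ((midi - 69) / 12)

# Example: quantise a handful of exported frequency/amplitude pairs
# (invented values near a C#1 fundamental and its lower partials).
partials = [(34.9, 0.9), (70.4, 0.5), (107.1, 0.3)]
chord = sorted({freq_to_nearest_semitone(f)[0] for f, _ in partials})
# chord -> [25, 37, 45], i.e. C#1, C#2 and A2 as MIDI note numbers
```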