An integrated granular approach to algorithmic composition for instruments and electronics


James Harley
jharley239@aol.com

1. Introduction

The domain of instrumental electroacoustic music is a treacherous one. On the level of coordination between the electronic medium (tape, hard disk, real-time system) and live musicians, there are many difficulties, primarily of timing. On the level of sonority, the spectral and dynamic content of electroacoustic sounds can be far removed from standard instrumental sonorities. In addition, the balance between a live musician and amplified sounds is difficult to achieve in such a way that the live sound is not overwhelmed by either the volume or the presence of what may be extremely rich, evocative, or dramatic sounds, processed or recorded. In developing a compositional approach to the mixed domain of instruments and electronics, a number of considerations must therefore be taken into account. In the author's work, which has been oriented toward the development of compositional algorithms, a solution to the problem of integrating instruments with electronic sounds has been adopted based on the principles of granular synthesis. This paper discusses two compositions which exemplify different treatments of the relationship between the live performer and the electroacoustic component. Both were realized in the instrument-and-tape format, but it is conceivable now that interactive versions could be developed.

2. Adaptation of Granular Synthesis

As is well known, granular synthesis developed out of a paradigm at odds with the Fourier conception of sound. If a signal is defined as a packet of sonic grains, certain consequences result. For one thing, the signal becomes more amenable to a statistical description. From the point of view of the composer, the granular model provides new possibilities for creating rich sonorities.
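The statistical description can be made concrete. The sketch below (in Python; all function and parameter names are illustrative, not drawn from the author's software) scatters grains uniformly within a time span, pitch ambitus, and mean density, and runs the same routine at two scales: short grains over a wide ambitus for a tape texture, longer events over a narrow ambitus for an instrumental line.

```python
import random

def grain_cloud(span, density, pitch_range, grain_dur_range, seed=0):
    """Scatter grains over `span` seconds at `density` grains per second.
    Pitches (MIDI-style numbers) and grain durations are drawn uniformly
    from their ranges. All names here are illustrative, not the author's."""
    rng = random.Random(seed)
    events = []
    for _ in range(int(span * density)):
        onset = rng.uniform(0.0, span)
        pitch = rng.uniform(*pitch_range)
        dur = rng.uniform(*grain_dur_range)
        events.append((onset, pitch, dur))
    events.sort()  # chronological order
    return events

# The same routine at two scales: a dense tape texture of short grains,
# and a sparse "instrumental" line of long events in a narrow ambitus.
tape = grain_cloud(10.0, density=40, pitch_range=(48, 84), grain_dur_range=(0.02, 0.2))
flute = grain_cloud(10.0, density=2, pitch_range=(72, 79), grain_dur_range=(0.5, 4.0))
```

Only the duration scale and the ambitus differ between the two calls; this is the sense in which one generative procedure can address both media.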
A sonic texture can be generated by defining the range and density of the grains, along with the parametrical details of the grains themselves. It is a relatively simple matter to create algorithms to generate such textures, as evidenced in the pioneering work of Barry Truax and Curtis Roads, among others. Extensions of the technique into formant synthesis such as FOF (Xavier Rodet) and VOSIM (Werner Kaegi) have provided means for modeling vocal and instrumental sonorities that have proved useful for mixed-media compositions. The granulation of sampled sounds has also proven to be a fruitful method for integrating instrumental and digital sonorities. Barry Truax and Horatio Vaggione have been particularly successful in this area. Work in the real-time domain based on "windowing," a related technique, has been carried out by Cort Lippe and Zack Settel using the different generations/versions of MAX. In seeking to integrate instrumental music with synthesized sonorities, an extension of the principles of granular synthesis was sought such that the instrumental part could be generated in a similar way. By extending the durations of the grains and widening the ambitus of the "grain fields," the same generative procedures can be applied to both elements.
JIM 99-227
In this

way, an integrated approach can be developed, while leaving constraints as to the specific nature of the sonic material as open as possible.

3. Per Foramen Acus Transire

In 1986, a first attempt was made in the direction of creating an integrated compositional approach based on granular principles. The resulting composition, Per Foramen Acus Transire for flute/bass flute and tape, was completed in 1987. The electronic sounds were created on the UPIC at CEMAMu.

The first step in the compositional procedure was to define a structural framework for the work. Reflecting the title, "through the eye of the needle," the overall design aimed to unveil a wide-band texture and then gradually tighten it in around a central point, occurring more or less halfway through the piece, and then to allow the music to open out again into a transformed soundworld (the flutist switching at that point to bass flute). Using a generative process based on a permutational procedure adapted from group theory, a double series of "focal pitches" was designated, one for each channel of the audio material, the temporal placement being generated by similar means. Having thus defined a harmonic-temporal framework, parallel processes were implemented for the detailed composition of the flute and tape parts. The program for the flute part was to gradually increase the density of the music at the same time as the range within which the pitches could be generated was decreased. The aim was to portray the psychological tension of a person undergoing the spiritual process described in the title. To that end, as each focal pitch is reached, a new layer of material is launched. These layers, created according to an algorithm similar to that used to generate the focal pitches, are subject to the narrowing-bandwidth filtering process.
Eventually, then, as the flute part reaches its maximum density, a conglomeration of several layers of material, the pitch range within which the notes can be placed is reduced to its most constrained: a single pitch. This, obviously, is the "eye of the needle." The additive process by which the several layers of music are combined for the single performer to play acts to cancel out the original durations of the sequences of notes, leaving only the attacks. The flute, then, ends up playing music of such density that it approaches a granular texture. That the score arrives at a single pitch at its point of maximal density only serves to underscore the parallel with this method of synthesis.

The electronic material follows a parallel process linked to the focal pitches. Around each of these, a "block" of a certain ambitus and duration is defined. It is then filled with "grains" of sound, ranging in length from tenths of a second to tens of seconds. The "pitch-space" is not filled indiscriminately, but is defined in terms of quarter-tone intervals moving out higher and lower from the central pitch. A separate process determines the degree of density for each of these pitch levels within the block, generally decreasing in density the farther the pitch level is from the center. The aim was to create a band of sound around each of the focal pitches, the duration of each block overlapping the next (the two channels being generated independently). The waveforms used for these electronic grains were extracted from a flute sample, and they were organized according to the number of cycles in each (and, by necessity, degree of complexity), from one up to eleven. These were combined with seven families of envelopes, each containing five degrees of amplitude modification. There were, then, 385 different sonic entities to choose from for the grains.
The choice of harmonically related waveforms (according to the number of cycles filling the wavetable) had the result of creating a timbrally varying sonority, overall, on the basis of grains of fixed spectrum.
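As a rough reconstruction (the falloff curve, durations, and pitch values below are illustrative assumptions, not the original UPIC data), the block-filling procedure and the 385-entity grain catalogue might be sketched as:

```python
import random
from itertools import product

def fill_block(center_pitch, ambitus_qt, block_dur, peak_density, seed=0):
    """Fill a block around a focal pitch with grains placed on quarter-tone
    levels; grain density falls off linearly with distance from the center.
    A rough reconstruction: the falloff curve and durations are assumptions."""
    rng = random.Random(seed)
    grains = []
    for step in range(-ambitus_qt, ambitus_qt + 1):
        pitch = center_pitch + step * 0.5          # quarter-tone = 0.5 semitone
        density = peak_density * (1 - abs(step) / (ambitus_qt + 1))
        for _ in range(round(block_dur * density)):
            onset = rng.uniform(0.0, block_dur)
            dur = rng.uniform(0.1, 10.0)           # tenths of a second to ~10 s
            grains.append((onset, pitch, dur))
    return grains

# The grain catalogue: 11 waveforms (1 to 11 cycles per wavetable),
# 7 envelope families, 5 degrees of amplitude = 385 sonic entities.
catalogue = list(product(range(1, 12), range(7), range(5)))
```

The center pitch level receives the most grains and the edges of the ambitus the fewest, producing the band of sound around each focal pitch described above.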

A further controlling element for the electronic sounds is the orientation of each grain towards the central "eye" pitch at the end of the first section of the piece. Thus, all of the arcs were calculated on an angle radiating from that point. At first, the glissandi are not noticeable, as the angle of the arc is calculated over a temporal distance of eight minutes. As the piece progresses, however, and as the grains come closer to the central pitch, the glissandi become more pronounced (for the longer grains, in any case). While the live flute is gradually narrowing in on the eye of the needle, so too are the electronic sounds, by means of glissandi (the focal pitches on the tape continue to be placed across the full gamut of the original pitch-space, rather than being gradually constrained in terms of registral placement). The angles of the arcs become more and more pronounced as the temporal location draws nearer to the central point.

Upon reaching that central eye of the piece, at the eight-minute mark, the music is subjected to a radical transformation. The tape part contains a lengthy (ca. 1 minute) passage or block of material limited to this central pitch. Variation is found in the layering of different waveforms and envelopes, creating a rich timbral evolution, highlighted by the addition of a pre-recorded bell, tuned to approximately the same fundamental. The flutist switches to the bass flute, joining the two pre-recorded bass flutes on the tape. Once the electronic sounds and bells play themselves out, the rest of the piece consists of a trio for bass flutes. Structurally, the music is formed from the remaining strands of material left over from the first half of the piece. As each strand begins at the same time as one of the focal pitches, placed in temporal succession throughout the first section, the texture in the second section gradually thins out as each strand plays itself out.
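The arc geometry of the first section can be sketched as a slope aimed at the eye point; the central pitch value below is a hypothetical placeholder, since the paper does not specify it.

```python
# The "eye" of the piece: the central point toward which all arcs aim.
EYE_TIME = 8 * 60.0    # eight-minute mark (from the paper)
EYE_PITCH = 60.0       # hypothetical central pitch; the paper does not give it

def gliss_rate(onset, pitch):
    """Glissando slope (semitones per second) of a grain whose arc,
    extended in time, would pass through the eye point."""
    return (EYE_PITCH - pitch) / (EYE_TIME - onset)
```

An early grain starting a fifth above the eye glides almost imperceptibly, since its arc is spread over nearly eight minutes; a grain covering the same interval near the end of the section must fall steeply, which is the effect described above.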
The material is distributed between the live bass flute and the two pre-recorded bass flutes on the two channels of the tape. An additional element is added to the pre-recorded parts: long whistle-tones that take the place of the rests assigned to the live part. Eventually, this sonority is the dominant one, with long breaks separating the final few notes or phrases of the live performer.

In conclusion, Per Foramen Acus Transire represents an attempt to create an integrated compositional approach to the medium of live instrument and electronics. In this case, the electronic part takes the form of a pre-recorded tape, which is made up of both flute sounds and electronic sounds created on the UPIC on the basis of a flute sample. The conception of the piece in terms of grains or discrete elements enabled parallel, closely related processes to be developed for both the flute part and the tape.

4. Night-flowering not even sand

In 1989, a different approach to the medium of live instrument and tape was implemented on the basis of compositional software written in C, and the direct digital synthesis environment of Csound. This project centered around a piece for bassoon and tape (a piece for tape alone was also created on the basis of the same materials, and the compositional software has been used for several works since). Night-flowering not even sand - I was conceived for the microtonal specialist Johnny Reinhard, and explores the domain of 31-tone equal temperament. It was completed in 1990. It would now be possible to generate the tape part in real time.

The tape part was designed to act as a sonic environment, presenting a continuous texture based on the 31-tone temperament. The primary sonority is a plucked-string sound, designed on the basis of the Karplus-Strong algorithm (implemented in Csound as the pluck unit generator). While the conception is granular, in this case the attacks of each grain are not intended to be masked in order to create a smooth texture.
The attacks are distinctly perceivable, the result being more like a hyper-harp. There is also a sustained bass sonority,

generated according to the same principles, but with smoother envelopes and longer durations.

The compositional algorithm is built around a chaotic function, the logistic difference equation. The nonlinear output of this simple data generator is analyzed across a limited range of values, and the statistical ordering of values within that range is used as a governing feature of the compositional process. In the case of both the tape part and the live bassoon part, the array of values was reordered so as to privilege particular intervals around a central pitch (which changes over the course of the piece according to a separate process built from the same algorithm). In this piece, the smallest interval appears with the greatest statistical frequency, followed by the pure 5th and the neutral 3rd. In this way, there is a degree of intervallic coherence built into both the texture on tape and the succession of notes in the live bassoon part. As noted, the focal pitches for this intervallic structure are organized according to a parallel process built from the same chaotic algorithm. The two channels of the tape and the bassoon part proceed in parallel fashion, the bandwidth of possible pitches also changing according to a similar process. The density of grains fluctuates in like fashion (the variables being mean density, range, and degree of temporal variability). There are other elements that are unique to each part (the elements of the synthesis algorithms, the dynamics, the articulatory and extended-sonority capabilities of the bassoon), but the same generative algorithm is used to produce all the elements of the piece. The live performer, in this case, is not tied precisely to particular events on the tape, allowing a degree of interpretative freedom that is unusual for the medium.

5. Conclusion

Granular synthesis is intended to be an alternative approach to sound synthesis (and analysis).
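The chaotic generator and interval weighting of section 4 can be sketched as follows; the binning scheme and the handling of bins beyond the three named intervals are illustrative assumptions (in 31-tone equal temperament the pure fifth corresponds to 18 steps and the neutral third to 9).

```python
def logistic_series(x0, r, n):
    """Iterate the logistic difference equation x' = r * x * (1 - x)."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

# Intervals in 31-TET steps, ordered by privilege as in the piece:
# smallest interval, pure 5th (18 steps), neutral 3rd (9 steps).
PRIVILEGED = [1, 18, 9]

def interval_map(xs, n_bins):
    """Histogram the chaotic output into bins, then attach the most
    privileged interval to the most populated bin, and so on down."""
    counts = [0] * n_bins
    for x in xs:
        counts[min(int(x * n_bins), n_bins - 1)] += 1
    ranked = sorted(range(n_bins), key=lambda b: -counts[b])
    return {b: (PRIVILEGED[r] if r < len(PRIVILEGED) else 0)
            for r, b in enumerate(ranked)}

def freq_31tet(steps, ref=440.0):
    """Frequency of a pitch `steps` 31-TET steps above the reference."""
    return ref * 2 ** (steps / 31)

# Each new chaotic value selects the interval attached to its bin,
# so the most visited region of the map yields the smallest interval.
xs = logistic_series(0.4, 3.8, 2000)
mapping = interval_map(xs, n_bins=3)
```

With only three bins every bin carries one of the named intervals; in practice a finer binning, reordered as the paper describes, would shape the full statistical profile of the melodic motion.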
As has been demonstrated, the principles of this method can also be adapted to the compositional process and applied to higher-level domains. The advantage of an algorithmic approach built on a granular conception of music is that different media (e.g., instruments, electronic sounds) can be unified on the basis of the underlying identity of design applied to each. Given the difficulties in coordinating the instrumental domain with the electronic domain, in terms of sonority and presentation, this approach is one way of overcoming the problems inherent to the medium of instrumental electroacoustics.

References

Harley, James. "Generative Processes in Algorithmic Composition: Music and Chaos." Leonardo, Vol. 28, No. 3, 1995: 221-224.

---------. "Algorithms Adapted from Chaos Theory: Compositional Considerations." Proceedings of the 1994 International Computer Music Conference.

Kaegi, Werner & Stan Tempelaars. "VOSIM: A New Sound Synthesis System." Journal of the Audio Engineering Society, Vol. 26, No. 6, 1978: 418-425.

Lippe, Cort & Zack Settel. "Real-time Frequency-Domain Digital Signal Processing on the Desktop." Proceedings of the 1998 International Computer Music Conference: 142-149.

Marino, Gérard, Marie-Hélène Serra & Jean-Michel Raczinski. "The UPIC System: Origins and Innovations." Perspectives of New Music, Vol. 31, No. 1, 1993: 258-269.

Roads, Curtis. "Granular Synthesis." In The Computer Music Tutorial, 168-184. Cambridge, Massachusetts: The MIT Press, 1996.

Rodet, Xavier, Yves Potard & Jean-Baptiste Barrière. "The CHANT Project: From the Synthesis of the Singing Voice to Synthesis in General." Computer Music Journal, Vol. 8, No. 3, 1984: 15-31.

Truax, Barry. "Real-Time Granular Synthesis with a Digital Signal Processor." Computer Music Journal, Vol. 12, No. 2, 1988: 14-26.

---------. "Time-Shifting of Sampled Sound with a Real-Time Granulation Technique." Proceedings of the 1990 International Computer Music Conference: 104-107.

Vaggione, Horatio. "The Making of Octuor." Computer Music Journal, Vol. 8, No. 2, 1984: 48-54.

Xenakis, Iannis. "Markovian Stochastic Music Theory." In Formalized Music, rev. ed., 43-78. Stuyvesant, New York: Pendragon Press, 1992.
