WALLACE: COMPOSING MUSIC FOR VARIABLE REVERBERATION

Filipe Lopes
Centro de Investigação em Psicologia da Música e Educação Musical (CIPEM), INET-md, University of Aveiro
filipelopes@ua.pt

ABSTRACT

Reverberation is a sonic effect with a profound impact on music. Its implications extend to musical composition, musical performance and sound perception; indeed, it nurtured the sonority of certain musical styles (e.g. plainchant). That relationship was possible because the reverberation of a concert hall is stable (i.e. it does not vary drastically). But what are the implications for music composition and performance when reverberation is variable? How does one compose and perform music in situations where the reverberation is constantly changing? This paper describes Wallace, a software application developed to make a given audio signal flow across different impulse responses (IRs). Two pieces composed by the author using Wallace are discussed and, lastly, some viewpoints on composing music for variable reverberation, particularly using Wallace, are addressed.

Copyright: © 2017 Filipe Lopes. This is an open-access article distributed under the terms of the Creative Commons Attribution License 3.0 Unported, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

1. INTRODUCTION

Until the beginning of the twentieth century, engineers and architects knew little about reverberation and therefore could not predict how a concert hall would sound before it was built. Indeed, the acoustic quality of many buildings designed for music was the result of pure chance [1]. That situation changed with Wallace Sabine (1868-1919), the engineer considered the father of architectural acoustics: Sabine discovered the mathematical relationship between the size and materials of a room and its reverberation time [2]. His discovery revolutionised architectural acoustics.
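For reference, Sabine's relation can be written in its standard form (quoted here from the general acoustics literature; the equation does not appear in the original paper):

T_{60} \approx \frac{0.161\, V}{\sum_i S_i \alpha_i}

where T_{60} is the reverberation time in seconds (the time sound takes to decay by 60 dB), V is the room volume in cubic metres, and each surface of area S_i (square metres) has absorption coefficient \alpha_i. Larger rooms reverberate longer; more absorptive materials shorten the decay.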
Acoustic reverberation within closed spaces is more or less stable: if conditions remain the same, the acoustic qualities of a space do not change drastically. This is why people are able to associate characteristic sonorities with generic spaces (e.g. a cave, a bathroom, a hall). It is also why some musical styles sound best under certain reverberation conditions (i.e. in certain halls): the acoustic reverberation is stable [3]. Imagine a choir singing plainchant in a cathedral, and then the same performance sung on a beach. It would sound quite different in the two places, but arguably more appropriate in the cathedral. This is not only because we are accustomed to hearing that kind of music in reverberant places, but also because the style flourished and matured in cathedrals and basilicas.

The advances in analogue sound technology during the twentieth century triggered many important investigations into music, sound and space. Apparatus such as loudspeakers and microphones had many implications for musical composition and performance, particularly because they suggested new ways to approach and work with space and, therefore, with reverberation. The piece I am sitting in a room (1969) by Alvin Lucier is one example of such experiments. A good historical introduction to the effort and imagination that musicians and engineers invested in mechanical machines and digital algorithms for simulating reverberation can be found in [4].

Nowadays, many halls are equipped with speakers, microphones and specific materials to meet particular demands (e.g. voice intelligibility) or to make a given space sound a certain way. There are even rooms whose reverberation characteristics can be changed by varying their wall panels (e.g. the Espace de projection at IRCAM). This makes it possible, for example, to adapt the reverberation time of the space to the demands of each piece during a concert.

Using computers, one can compose with simulations of a real reverberation (i.e. the reverberant quality of a specific hall), but one can also create imaginary reverberations. In the first case, IRs are usually combined with convolution algorithms to simulate the sound of a specific source (e.g. a voice) in a given space. This is useful, for example, to give musicians the impression of being in a specific place while they are recorded under studio conditions [5]. In the second case, one can design reverberation algorithms whose sonic results do not derive from the real world. These are only two examples of using computers in work focused on reverberation; the field is extensive and now plays a major role in simulating acoustic situations (i.e. auralization) as well as in the design of concert halls [6].

During musical performances, the acoustic reverberation is usually static. This is very useful, because reverberation dramatically affects not only our perception of the space but also musical performance (e.g. dynamics, tempo, rhythm, pitch, timbre).
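A minimal sketch of this IR-plus-convolution approach (illustrative Python, not part of Wallace; the file names are hypothetical and mono signals are assumed):

import soundfile as sf
from scipy.signal import fftconvolve

# Dry source recording and measured impulse response (hypothetical mono files).
dry, sr = sf.read("voice_dry.wav")
ir, sr_ir = sf.read("cathedral_ir.wav")
assert sr == sr_ir, "source and IR must share one sample rate"

# Convolving the dry signal with the IR "places" the source in the
# space where the IR was measured.
wet = fftconvolve(dry, ir)

# Normalise to avoid clipping, then write the result.
wet /= max(1e-12, abs(wet).max())
sf.write("voice_in_cathedral.wav", wet, sr)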

Composers write music with a given space in mind (i.e. stable reverberation conditions), and it is on that assumption that a composition can be performed in other spaces and still be faithful to the compositional ideas.

In electronic music composition, reverberation is frequently used as a way to add depth (i.e. distance) to sounds. Although many different types of reverberation can be employed in a piece, and their parameters can be automated in Digital Audio Workstations (DAWs), the author believes that the compositional approach to reverberation is usually passive: reverberation is rarely used as a building block of the composition, serving instead to highlight other features or to colour the sound. For instance, the piece Turenas (1972) by John Chowning employs an algorithm that changes the reverberation according to the intensity of the direct signal; however, it was devised to study the perception of distance and movement of sound over loudspeakers (i.e. localization and distance) [7]. Nonetheless, there are compositions that use real-time variable reverberation as a structural feature, such as NoaNoa (1992) by Kaija Saariaho, in which the reverberation changes according to the intensity of the flute sound. According to the notes in the score: "The general idea here is: the quieter the sound, the longer the reverb."
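A Python sketch of the kind of level-to-reverb mapping that note describes (a speculative illustration; the function, its constants and the block-based analysis are assumptions, not taken from the electronics of NoaNoa):

import numpy as np

def reverb_time_from_level(block, t_min=0.5, t_max=8.0):
    """Map the RMS level of an audio block to a reverberation time:
    the quieter the sound, the longer the reverb."""
    block = np.asarray(block, dtype=float)
    rms = np.sqrt(np.mean(block ** 2))
    level = np.clip(rms, 1e-4, 1.0)
    # x runs from 0 (full-scale input) to 1 (very quiet input).
    x = np.log10(level) / np.log10(1e-4)
    return t_min + x * (t_max - t_min)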
2. WALLACE

2.1 Aim

Wallace (see Figure 1) is a software application intended to foster musical composition based on variable reverberation. Its design is aimed at: 1) offering an easy way to make sound travel automatically across independent reverberations; 2) exploring the relationship and implications between the composition and the performance of music in contexts where the reverberation is constantly changing.

Figure 1. Wallace

2.2 Overview

Wallace makes a given audio signal flow across different reverberations according to specific transition behaviours. Each signal from a given sound source is sent to a specific reverberation scheme (see Figure 2). This scheme performs real-time convolution with different IRs, thus making the audio signal flow across different spaces. For each reverberation scheme, the user chooses four IRs from a default collection. The next step is to choose the type of transition across the IRs. The final step consists of adjusting the gain level of each IR output.

Figure 2. Audio signal flow in Wallace

2.2.1 Technical Info

Wallace was developed in MaxMSP [8]. By default, the total number of sound sources is five. The sound sources can be sound files or live sound input (i.e. a microphone). The convolution is performed with the HISSTools Impulse Response Toolbox [9], in particular the MSP object multiconvolve~, which performs real-time convolution. The default IRs are included in an external folder named IRs and were retrieved from the Open Acoustic Impulse Response Library (Open AIR) [10]. The user can, however, add further IRs to the database and use them. By default Wallace outputs to a quadraphonic system, but the user may choose a stereo output instead.

2.2.2 Transitions

The GUI (see Figure 3) defines a square area in which the user can visualize the transitions across the IRs. It features a small black circle whose position signals which IRs are being used (i.e. heard) to process the audio signal. The closer the circle is to a speaker within the square area (i.e. to one of the numbered corners displayed in the GUI), the louder the output of that corner's convolution process is heard. If the circle is in the middle of the square, however, the result is a mixture of all the convolutions.
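That distance-to-gain behaviour can be approximated with inverse-distance weighting (a sketch under stated assumptions; the paper does not specify Wallace's exact gain law):

import numpy as np

# The four IR outputs sit at the corners of a unit square (assumption).
CORNERS = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])

def ir_gains(circle_pos, eps=1e-6):
    """One gain per IR, from the circle's (x, y) position: a nearby
    corner dominates; at the centre all four convolutions mix equally."""
    d = np.linalg.norm(CORNERS - np.asarray(circle_pos), axis=1)
    w = 1.0 / (d + eps)        # closer corner -> larger weight
    return w / w.sum()         # normalise so the gains sum to 1

print(ir_gains((0.5, 0.5)))    # centre -> [0.25 0.25 0.25 0.25]

Each gain would then scale the output of the corresponding convolution before the four results are summed, or routed to four speakers in the quadraphonic configuration.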

Figure 3.

There are four transition options for managing each reverberation scheme: off, circular, rectilinear and random (sketched in code below). The option off means that there is no automatic transition: the sound result is either stable (i.e. the same reverberation, with the circle stationary) or changes under user control (i.e. the user drags the circle with the mouse). The option circular makes the circle move in a circular fashion modulated by a sine-wave generator; the user sets the rate of the movement by raising or lowering the frequency fed to a cycle~ MSP object. The option rectilinear makes the circle move in a straight line between two extremes (e.g. corners 1 and 3); the movement can be horizontal, vertical or diagonal. At the extremes the sound is the result of a single convolution, whereas in the middle it is a mixture of all the convolutions. Once again, the user sets the rate of the movement by raising or lowering the frequency fed to a phasor~ MSP object. Finally, the option random picks a random spot within the square area and smoothly moves the circle to it, repeating the procedure until instructed otherwise.
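The three automatic transition behaviours can be sketched as trajectory generators for the black circle (illustrative Python; the constants, and the rendering of cycle~ / phasor~ as plain oscillators, are assumptions):

import math, random

def circular(t, freq, centre=(0.5, 0.5), radius=0.4):
    """Circular transition: the circle orbits the centre at freq Hz
    (the role played by cycle~ in the patch)."""
    ang = 2 * math.pi * freq * t
    return (centre[0] + radius * math.cos(ang),
            centre[1] + radius * math.sin(ang))

def rectilinear(t, freq, a=(0.0, 0.0), b=(1.0, 1.0)):
    """Rectilinear transition between two extremes, driven by a ramp
    (the role played by phasor~), folded into a back-and-forth path."""
    ramp = (freq * t) % 1.0
    x = 2 * ramp if ramp < 0.5 else 2 * (1.0 - ramp)   # triangle 0..1..0
    return (a[0] + x * (b[0] - a[0]), a[1] + x * (b[1] - a[1]))

def random_target():
    """Random transition: choose a new spot in the square; the patch
    then glides the circle smoothly towards it and repeats."""
    return (random.random(), random.random())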

2.3 Workflow

The first step is to choose one of two possible sound sources: file or mic. The first refers to a sound file, the second to live sound input (see Figure 4).

Figure 4. Input options

The second step consists of choosing the four IRs to be used in the reverberation scheme. One may also choose the option No IR, which means dry sound (i.e. sound that does not flow into the convolution process) (see Figure 5).

Figure 5. IRs chosen for a reverberation scheme

The third step consists of choosing the type of transition behaviour (see 2.2.2). Finally, one decides whether the reverberation scheme is delivered to four speakers (i.e. one IR per speaker) or two speakers (i.e. two IRs per speaker) (see Figure 6).

Figure 6. Sound output options

3. VARIAÇÕES SOBRE ESPAÇO

3.1 Overview

Two pieces were composed as experiments in writing music based on variable reverberation. I will now describe them, focusing on the impressions I collected while composing and while listening to them in concert situations.

3.2 Variações sobre Espaço #1

The first piece is for soprano saxophone and live electronics, using only Wallace. For the saxophone part I deliberately wrote musical phrases containing contrasting elements (e.g. high-pitched versus low-pitched sounds, forte versus piano, sound versus silence). Although the formal structure of the saxophone score is linear (i.e. not open form), it is built from many short, sequenced musical gestures that enhance the contrasting elements (see Figure 7).

Figure 7. Excerpt of the saxophone score

The decision to base the saxophone score on contrasting elements originated in earlier sessions with the saxophonist. During those experimental sessions, it seemed that sharp, contrasting sound gestures helped both in performing and in hearing the reverberation nuances. By including that kind of phrasing in the final score, I wanted to 1) listen to its sonic results through Wallace and 2) assess whether I could experience the saxophone sound and the reverberation(s) in dialogue (e.g. overlapping sounds and pitches).

During one of the rehearsals I discovered, by chance, that when using the sound of the human voice instead of the saxophone with the same reverberation scheme, I perceived the different reverberations much more clearly. I believe this may be because 1) the spoken voice is noisier (i.e. has a more irregular spectrum) than the saxophone (i.e. a regular spectrum), so more radical spectral changes occur when the audio is processed, and 2) the human voice is hardwired into our daily experience of the world, so we instantly recognize its smallest nuances.

This piece was performed in May 2016 and can be heard at [11].

3.3 Variações sobre Espaço #2

This piece is for quintet (flute, clarinet, piano, violin and violoncello), tape and live electronics (Wallace). It comprises five movements, and the tape consists of distinctive soundscapes (e.g. a forest, inside a church, the countryside, late night in a small village). This instrumental setup allowed me to explore facets of mixing and blending reverberations (i.e. IRs) different from the strategies used in Variações sobre Espaço #1: 1) articulating the ensemble (i.e. several sound sources) with Wallace and its consequences for the overall sound; 2) repeating pitched notes at a given pulse (e.g. repeated quarter notes) to hear their sonic nuances change across the different reverberations; 3) using contrasting instrumental textures (e.g. tutti versus solo). Some of the questions I asked myself were: Will I hear many spaces blending with each other? Will I hear different instruments in different spaces? Will I hear just a single reverberation composed of several reverberations?

The instrumental texture and notation at the beginning of the piece sound quite conventional; with each movement, however, some musical aspects are frozen or simplified (e.g. note durations, the rate at which the harmony changes, textures, dynamics, rhythm). In the last movement there is only a continuous, spacious melody played by the piano, punctuated by slight gestures from the remaining instruments. I composed this way in order to experiment with, and listen to, many kinds of sonic texture (e.g. complex versus simple) between the sound produced by the instruments and Wallace.

During rehearsals I discovered that I could perceive many different simultaneous reverberations when soundscape sounds were playing permanently in the background. Consequently, I recorded several distinct soundscapes in order to build a database of background soundscapes. Most importantly, these soundscapes are not meant to stand out, but rather to install a quiet background space.

This piece was performed twice, in two concert halls with contrasting natural reverberation: the first had a pronounced natural reverberation (about 2 seconds) while the second had little. The experience of listening to the piece, particularly the transitions between the different virtual spaces (i.e. IRs), felt quite different in each hall.
The first two movements (i.e. more complex rhythmic textures) seemed more interesting in the hall with little reverberation, whereas the last movements (i.e. simpler rhythmic textures) felt more appropriate in the hall with noticeable natural reverberation.

This piece was performed in November 2016 and can be heard at [12].

4. CONCLUSIONS

The main purpose of this research is to develop ideas about composing music for variable reverberation, and Wallace is a resource for pursuing such compositional intents: a software application developed to make a given audio signal flow across different reverberations, with the possibility of automatic transitions between them.

The practical compositional work has led me to some conclusions. First, the spoken human voice is the sound that, from the standpoint of sound perception, best reveals different reverberations. Second, the natural reverberation of each space plays a decisive role in the perceived sound output of Wallace, so a performance will not sound the same in concert halls with different reverberations; this may mean that there is no perfect concert hall for Wallace, and that each composition will be (and should be composed to be) in dialogue with the real and the virtual space. Lastly, the use of continuous quiet soundscape sounds imprints a sense of background space, which helps Wallace's sound output to stand out.

While composing the aforementioned pieces, and while implementing Wallace, I established a generic compositional approach (see Figure 8). It defines three main ideas to address when composing music for variable reverberation, particularly with Wallace. The first is the balance between dry sound/open space/soundscape and wet sound/closed space/reverberation; the second is the balance between static and variable virtual reverberation; the third, when variable reverberation schemes are used, is how to employ the transitions between distinct reverberations. These balances need not (and should not) remain constant throughout a piece; it seems more interesting to me to shift them during the performance.

Figure 8. Generic compositional topics to consider when composing music for variable reverberation

5. FUTURE WORK

Future work includes the composition of new pieces, the inclusion of more IRs in the default database, the elaboration of documentation (e.g. video tutorials and performance videos) and the design of new models for IR transitions within each reverberation scheme (e.g. more automatic transition movements, or analysing the input sound of a source and mapping a specific audio feature to the movement of the black circle). Furthermore, the GUI is going to be redesigned, and the Wallace application and source code will soon be released at filipelopes.net.

Acknowledgments

Many thanks to CIPEM/INET-md, Henrique Portovedo, Miguel Azguime, Miso Music Portugal, Sond'Ar-te Ensemble, João Vidinha and Rui Penha.

6. REFERENCES

[1] M. Forsyth, Buildings for Music. Cambridge, MA: MIT Press, 1985.

[2] L. L. Henrique, Acústica Musical, 5th ed. Lisboa: Fundação Calouste Gulbenkian.

[3] A. P. de O. Carvalho, Acústica Ambiental, ed. 84. Faculdade de Engenharia da Universidade do Porto, 2013.

[4] B. Blesser, "An interdisciplinary synthesis of reverberation viewpoints," Journal of the Audio Engineering Society, vol. 49, no. 10, pp. 867-903, 2001.

[5] B. Pentcheva, "Hagia Sophia and Multisensory Aesthetics," Gesta, vol. 50, no. 2, pp. 93-111, 2011.

[6] https://www.wired.com/2017/01/happens-algorithms-design-concert-hall-stunning-elbphilharmonie/

[7] J. Chowning, "Turenas: the realization of a dream," presented at the Journées d'Informatique Musicale, Saint-Étienne, 2011.

[8] http://www.cycling74.com

[9] A. Harker and P. A. Tremblay, "The HISSTools impulse response toolbox: Convolution for the masses," in Proc. Int. Computer Music Conf., 2012, pp. 148-155.

[10] http://www.openairlib.net

[11] https://youtu.be/xnybrrhmdhg?t=32m46s

[12] https://soundcloud.com/filklopes/variacoes-sobre-espaco-2-mov-1-ato-3