
HYPERCOMPRESSION: Stochastic Musical Processing

Charles J. Holbrow
Bachelor of Music, University of Massachusetts Lowell, 2008

Submitted to the Program in Media Arts and Sciences, School of Architecture and Planning, in partial fulfillment of the requirements for the degree of Master of Science in Media Arts and Sciences at the Massachusetts Institute of Technology, September 2015. Massachusetts Institute of Technology. All rights reserved.

Author (signature redacted): Charles Holbrow, Program in Media Arts and Sciences, August 7, 2015

Certified by (signature redacted): Tod Machover, Muriel R. Cooper Professor of Music and Media, Program in Media Arts and Sciences, Thesis Supervisor

Accepted by (signature redacted): Pattie Maes, Academic Head, Program in Media Arts and Sciences

HYPERCOMPRESSION: Stochastic Musical Processing
Charles J. Holbrow

Submitted to the Program in Media Arts and Sciences, School of Architecture and Planning, in partial fulfillment of the requirements for the degree of Master of Science in Media Arts and Sciences at the Massachusetts Institute of Technology, September 2015. Massachusetts Institute of Technology. All rights reserved.

Abstract

The theory of stochastic music proposes that we think of music as a vertical integration of mathematics, the physics of sound, psychoacoustics, and traditional music theory. In Hypercompression: Stochastic Musical Processing, we explore the design and implementation of three innovative musical projects that build on a deep vertical integration of science and technology in different ways: Stochastic Tempo Modulation, Reflection Visualizer, and Hypercompression. Stochastic Tempo Modulation proposes a mathematical approach for composing previously inaccessible polytempic music. The Reflection Visualizer introduces an interface for quickly sketching abstract architectural and musical ideas. Hypercompression describes a new technique for manipulating music in space and time. For each project, we examine how stochastic theory can help us discover and explore new musical possibilities, and we discuss the advantages and shortcomings of this approach.

Thesis Supervisor: Tod Machover, Muriel R. Cooper Professor of Music and Media, Program in Media Arts and Sciences

HYPERCOMPRESSION: Stochastic Musical Processing
Charles Holbrow

The following person served as a reader for this thesis:

Thesis Reader (signature redacted): Joseph A. Paradiso, Associate Professor of Media Arts and Sciences, Program in Media Arts and Sciences, Massachusetts Institute of Technology

HYPERCOMPRESSION: Stochastic Musical Processing
Charles Holbrow

The following person served as a reader for this thesis:

Thesis Reader (signature redacted): James A. Moorer, Principal Scientist, Adobe Systems, Incorporated

Contents

Introduction 6
Background 10
Temporal Domain: Stochastic Tempo Modulation 27
Spatial Domain: Reflection Visualizer 34
The Hypercompressor 39
De L'Experience 51
Discussion and Analysis 55
Bibliography 66

1 Introduction

In his 1963 book, Formalized Music, the composer, engineer, and architect Iannis Xenakis described the foundation for his own reinterpretation of conventional music theory: "All sound is an integration of grains, of elementary sonic particles, of sonic quanta. Each of these elementary grains has a threefold nature: duration, frequency, and intensity." Instead of using high-level musical concepts like pitch and meter to compose music, Xenakis posited that only three elementary qualities (frequency, duration, and intensity) are necessary. Because it is impractical to describe sounds as the sum of hundreds or thousands of elementary sonic particles, he proposed using statistical and probabilistic models to define sounds at the macroscopic level, and using similar mathematical models to describe other compositional concepts including rhythm, form, and melody. Xenakis named this new style stochastic, and he considered it a generalization of existing music theory: high-level musical constructs such as melody, harmony, and meter are mathematical abstractions of the elementary sonic particles, and alternative abstractions, such as non-standard tuning, should exist as equals within the same mathematical framework. Music composition, he claimed, requires a deep understanding of the mathematical relationship between sonic elements and musical abstractions, and it necessarily involves the formulation of new high-level musical constructs built from the low-level elements.

Music and Mathematics Today

Today, just over 50 years after Formalized Music was first published, some of Xenakis' ideals have been widely adopted by musicians and composers. As computers, amplifiers, and electronics become ubiquitous in the composition, production, and performance of music, the line between composer and engineer becomes increasingly indistinct.

As a result, a deep understanding of the mathematics of music is also increasingly valuable to musicians. The most common tools for shaping sounds typically use mathematical language, such as frequency, milliseconds, and decibels, requiring us to translate between mathematical and musical concepts. For example, the interface for a typical electronic synthesizer will include the following controls:

- Attack time: a duration, measured in milliseconds
- Filter frequency: measured in cycles per second
- Sustain level: an intensity, measured in decibels

Countless high-level interfaces for manipulating sound have been created, but no particular type of abstraction has been widely adopted. It would appear that the most useful (and the most widely used) engineering tools are the simplest:

1. The equalizer, a frequency-specific amplifier
2. The delay, a temporal shift on the order of milliseconds
3. The compressor, an automatic gain control

Sound and Space

It is curious that the most useful tools for engineering sound would also be the simplest: the compositional equivalent would be composing by starting with pure sine waves, as Xenakis originally suggested. From an engineering perspective, Xenakis' three sonic elements are rational choices. By summing together the correct recipe of sine tones, we can construct any audio waveform. However, a waveform is not the same as music, or even the same as sound. Sound is three-dimensional; sound has a direction; sound exists in space. Could space be the missing sonic element in electronic audio production? If we design our audio engineering tools such that space is considered an equal to frequency and intensity, can we build high-level tools that are as effective as the low-level tools we depend on, or even as effective as acoustic instruments?
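The claim that any waveform can be constructed by summing sine tones is the premise of additive (Fourier) synthesis. As a minimal sketch of the idea (the function and variable names here are illustrative, not from the thesis):

```python
import math

def additive_wave(partial_amps, freq, t):
    """Sum sine partials: the nth partial sounds at n*freq with the
    amplitude given in partial_amps[n-1]."""
    return sum(a * math.sin(2 * math.pi * n * freq * t)
               for n, a in enumerate(partial_amps, start=1))

# A sawtooth's "recipe": amplitude 1/n for the nth harmonic.
saw_amps = [1.0 / n for n in range(1, 30)]

# Render a few samples of a 440 Hz sawtooth approximation at 44.1 kHz.
samples = [additive_wave(saw_amps, 440.0, i / 44100.0) for i in range(64)]
```

With more partials the summed waveform converges toward an ideal sawtooth; any other periodic waveform follows from a different amplitude recipe.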
The three projects described in this thesis are directly inspired by lannis Xenakis, and rest on the foundation of stochastic music: Stochastic Tempo Modulation, Reflection Visualizer, and Hypercompression. Each project builds on existing paradigms in composition or audio engineering, and together they treat space and time as equals and as true elements of music.

1.1 Stochastic Tempo Modulation

Music and time are inseparable. All music flows through time and depends on temporal constructs, the most common being meter and tempo. Accelerating or decelerating tempi are common in many styles of music, as are polyrhythms. Music with multiple simultaneous tempi, or polytempic music, is less common, but many examples can still be found. Fewer examples exist of music with simultaneous tempi that shift relative to each other, however, and it is difficult for musicians to accurately perform changing tempi in parallel. Software is an obvious choice for composing complex and challenging rhythms such as these, but existing compositional software makes this difficult. Stochastic Tempo Modulation offers a solution to this challenge by describing a strategy for composing music with multiple simultaneous tempi that accelerate and decelerate relative to each other. In chapter 3 we derive an equation for smoothly ramping tempi to converge and diverge as musical events within a score, and show how this equation can be used as a stochastic process to compose previously inaccessible sonorities.

1.2 Reflection Visualizer

Music and space are intimately connected. This project, described in chapter 4, introduces an interface for quickly sketching and visualizing simple architectural and musical ideas. Reflection Visualizer is a software tool that lets us design and experiment with abstract shapes loosely based on two-dimensional acoustic lenses or "sound mirrors." It is directly inspired by the music and architecture of Xenakis.

1.3 Hypercompression

Time and space are the means and medium of music. Hypercompression explores a new tool built for shaping music in time and space. The tool builds on the dynamic range compression paradigm. We usually think of compression in terms of reduction: data compression is used to reduce bit-rates and file sizes, while audio compression is used to reduce dynamic range.
Record labels' use of dynamic range compression as a weapon in the "loudness war"1,2 has resulted in some of today's music recordings utilizing no more dynamic range than a 1909 Edison cylinder.3 A deeper study of dynamic range compression, however, reveals more subtle and artistic applications beyond that of reduction. A

1 Beginning in the 1990s, record labels have attempted to make their music louder than the music released by competing labels. "Loudness War" is the popular name given to the trend of labels trying to out-do each other at the expense of audio fidelity.
2 Emmanuel Deruty and Damien Tardieu. About Dynamic Processing in Mainstream Music. AES: Journal of the Audio Engineering Society, 62(1/2):42-56, 2014.
3 Bob Katz. Mastering Audio: The Art and Science. Focal Press, 2nd edition, 2007.

skilled audio engineer can apply compression to improve intelligibility, augment articulation, smooth a performance, shape transients, extract ambience, de-ess vocals, balance multiple signals, or even add distortion.4 At its best, the compressor is a tool for temporal shaping, rather than a tool for dynamic reduction. Hypercompression expands the traditional model of a dynamic range compressor to include spatial shaping. Converting measurements of sound from cycles per second (in the temporal domain) to wavelength (in the spatial domain) is a common objective in acoustics and audio engineering practice.5 While unconventional, spatial processing is a natural fit for the compression model. The mathematics and implementation of the Hypercompressor are described in detail in chapter 5.

4 Alex Case. Sound FX: Unlocking the Creative Potential of Recording Studio Effects. Focal Press, 2007.
5 G. Davis and R. Jones. The Sound Reinforcement Handbook. Recording and Audio Technology Series. Hal Leonard.

Performance

Hypercompression was used in the live performance of De L'Experience, a new musical work by composer Tod Machover for narrator, organ, and electronics. During the premiere at the Maison Symphonique de Montréal in Canada, Hypercompression was used to blend the electronics with the organ and the acoustic space. A detailed description of how Hypercompression featured in this performance is discussed in chapter 6.

Universality

At the MIT Media Lab, we celebrate the study and practice of projects that exist outside of established academic disciplines. The Media Lab (and the media) have described this approach as interdisciplinary, cross-disciplinary, anti-disciplinary, and post-disciplinary; rejecting the cliché that academics must narrowly focus their studies, learning more and more about less and less, and eventually knowing everything about nothing. The projects described here uphold the vision of both Xenakis and the Media Lab.
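The traditional dynamic range compressor that Hypercompression extends can be sketched as a feed-forward gain computer with attack and release smoothing. This is a generic textbook design, not the thesis's implementation; all names and parameter values are illustrative:

```python
import math

def compress(samples, threshold_db=-20.0, ratio=4.0,
             attack=0.01, release=0.1, sr=44100):
    """Minimal feed-forward compressor sketch: levels above threshold_db
    are reduced by `ratio`; attack/release times (in seconds) smooth the
    gain changes with one-pole filters."""
    a_coef = math.exp(-1.0 / (attack * sr))   # fast smoothing as level rises
    r_coef = math.exp(-1.0 / (release * sr))  # slower recovery as it falls
    env_db = 0.0                              # smoothed gain reduction in dB
    out = []
    for x in samples:
        level_db = 20.0 * math.log10(max(abs(x), 1e-9))
        over = max(level_db - threshold_db, 0.0)
        target = over * (1.0 - 1.0 / ratio)   # desired gain reduction in dB
        coef = a_coef if target > env_db else r_coef
        env_db = target + coef * (env_db - target)
        out.append(x * 10.0 ** (-env_db / 20.0))
    return out
```

A full-scale input sitting 20 dB above the threshold settles at 15 dB of gain reduction with a 4:1 ratio, since only a quarter of the overshoot is let through; it is the attack and release behavior, not the static curve, that gives the compressor its "temporal shaping" character.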
Each chapter documents the motivations and implementation of a new tool for manipulating space and sound. Each project draws from an assortment of fields including music, mathematics, computer science, acoustics, audio engineering and mixing, sound reinforcement, multimedia production, and live performance.

2 Background

In this chapter, we review the precedent for contemporary explorations of time and space in music. While far too many projects exist to cover them all, we focus on projects that are either particularly impactful or particularly relevant to the projects described in this thesis. We conclude with a study of Iannis Xenakis' involvement in the Philips Pavilion at the 1958 Brussels World Fair, which made particularly innovative use of sound and space.

Early Spatial Music

Western spatial music emerged during the Renaissance period. The earliest published example of spatial music was by Adrian Willaert. The Basilica San Marco in Venice, where Willaert was maestro di capella, had an interesting feature: two separate pipe organs facing each other across the chapel. Willaert took advantage of this unusual setup by composing music for separate choirs and instrumental groups adjacent to the two organs. Spatially separate choirs soon became a fashion and gradually spread beyond Venice, as more and more spatially separated groups were incorporated into composition. In honor of Queen Elizabeth's 40th birthday in 1573, Thomas Tallis composed Spem in alium, a choral piece with 40 separate parts arranged in eight spatially separated choirs. Interest in spatial composition declined toward the end of the Baroque period, and was largely avoided until the Romantic period. Berlioz's Requiem in 1837, Giuseppe Verdi's Requiem in 1874, and Mahler's Symphony No. 2 in 1895 all feature spatially separated brass ensembles.1

1 Richard Zvonar. A History of Spatial Music. URL https://pantherfile.uwm.edu/kdschlei/www/files/a-historyof-spatial-music.html

Tempo Acceleration and Deceleration

Chapter 3 is concerned with oblique, similar, and contrary tempo accelerations and decelerations in the context of polytempic music (music with two or more simultaneous tempi). Do not confuse polytempic music with polymetric and polyrhythmic music: polyrhythms and polymeters are common in the West African musical tradition and appear much earlier in Western music than polytempi. The tempo indicators commonly seen today, such as allegro and adagio, emerged during the 17th century in Italy. While these markings partly express a mood (gaily and with leisure,

respectively), rather than a strict tempo, they were much easier to follow than the proportional system (based on tempic ratios such as 3:2 and 5:4) that they replaced.2 The intentional use of gradual tempo changes likely evolved from the unconscious but musical tempo fluctuations of a natural human performance. We can see examples of the purposeful manipulation of tempo in the Baroque period. Monteverdi's Madrigali guerrieri from 1638 includes adjacent pieces: Non havea Febo ancora, and Lamento della ninfa. The score instructs to perform the former piece al tempo della mano (in the tactus of the conducting hand), and the latter a tempo del'affetto del animo e non a quello della mano (in a tempo [dictated by] emotion, not the hand). While Monteverdi's use of controlled tempo was certainly not the first, we are particularly interested in gradual tempo changes in polytempic compositions, which do not appear in Western music until near the beginning of the 20th century.

2 C. Sachs. Rhythm and Tempo: A Study in Music History. W.W. Norton and Company.

20th Century Modernism

As the Romantic period was coming to an end, there was a blossoming of complexity, diversity, and invention in contemporary music. As performers developed the virtuosic skills required to play the music, composers wrote increasingly difficult scores to challenge them.3 Works by Italian composer Luciano Berio illustrate the complexity of contemporary music of the time. Beginning in 1958, Berio wrote a series of works he called Sequenza. Each was a highly technical composition written for a virtuosic soloist, and each was for a different instrument, ranging from flute to guitar to accordion. In Sequenza IV, for piano, Berio juxtaposes thirty-second-note quintuplets, sextuplets, and septuplets (each with a different dynamic) over just a few measures.

3 D. J. Grout, J. P. Burkholder, and C. V. Palisca. A History of Western Music. W.W. Norton, 7th edition, 2006.

Polytempic Music

Western polytempi can be traced to Henry Cowell's book, New Musical Resources, first published in 1930, wherein Cowell states:

"Rhythm presents many interesting problems, few of which have been clearly formulated. Here, however, only one general idea will be dealt with-namely, that of the relationship of rhythm, which have an exact relationship to sound-vibration, and, through this relationship and the application of overtone ratios, the building of ordered systems of harmony and counterpoint in rhythm, which have an exact relationship to tonal harmony and counterpoint."4,5

4 Examples from New Musical Resources are from the 3rd edition.
5 Henry Cowell. New Musical Resources. Cambridge University Press, Cambridge, 3rd edition.

Cowell goes on to describe a system of ratios for tempi: if we think of two parallel tempi, one going twice the rate of the other, it is akin to the octave pitch interval, one fundamental frequency being twice the other. Similarly, the vibration ratio of 2:3 can be thought of as a fifth, and some rhythmic relationships are "harmonious," while others are "dissonant." This is nearly identical to the proportional tempo technique that was displaced in Italy in the 1600s, but Cowell does eventually introduce the concept of polytempic music:

"The use of different simultaneous tempi in a duet or quartet in opera, for instance, would enable each of the characters to express his individual mood; such a system might effectively be applied to the famous quartet from Rigoletto, in which each of the characters is expressing a different emotion."

This example is closer to what we are interested in, but does not include simultaneous tempo changes. However, Cowell takes the idea a step further, illustrating the possibility of parallel tempo acceleration with figure 2.1. While he was not a mathematician, Cowell did understand that there were some unanswered complications surrounding simultaneous tempo changes. While describing polytempic accelerations he notes:

[Figure 2.1: Polytempic tempo transitions as illustrated by Henry Cowell. Cambridge University Press.]

"For practical purposes, care would have to be exercised in the use of sliding tempo, in order to control relation between tones in a sliding part with those in another part being played at the same time: a composer would have to know, in other words, what tones in a part with rising tempo would be struck simultaneously with other tones in a part of, say, fixed tempo, and this from considerations of harmony. There would usually be no absolute coincidence, but the tones which would be struck at approximately the same time could be calculated."
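Cowell's claim that such coincidences "could be calculated" is straightforward to make concrete. As a sketch (ours, not the derivation of chapter 3): under a linear ramp the tempo is a linear function of time, so integrating it in beats per second gives the elapsed beat count.

```python
def beats_elapsed(t, bpm_start, bpm_end, ramp_dur):
    """Beats completed after t seconds of a tempo ramp that moves
    linearly from bpm_start to bpm_end over ramp_dur seconds."""
    slope = (bpm_end - bpm_start) / ramp_dur  # BPM per second
    return (bpm_start * t + 0.5 * slope * t * t) / 60.0

# A voice ramping 60 -> 90 BPM over 10 s completes 12.5 beats, while a
# steady 60 BPM voice completes 10: the two end half a beat out of phase.
ramping = beats_elapsed(10, 60, 90, 10)
steady = beats_elapsed(10, 60, 60, 10)
```

Comparing the two beat counts shows which tones land approximately together, and also why a naively chosen linear ramp rarely ends on a shared downbeat.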
It is possible to calculate exactly when tones in an accelerating tempo will be struck. In the examples shown in figure 2.1, the linear tempo accelerations only rarely yield satisfactory results. Figure 2.1 does not show how many beats or measures elapse during the tempo acceleration, but with the linear acceleration shown, the parallel tempi are likely to be out of phase once the tempo transition is complete. This is described in more detail in chapter 3.

Modernism and Rhythmic Complexity

During the Modernist period, many composers sought new ways to use time and space as compositional elements, and polytempic music was relatively unexplored. Traditional music notation is not well equipped to handle acceleration with precision. The conventional way to describe gradual tempo changes is to annotate the

score with notes like ritardando (gradually slowing) and accelerando (gradually accelerating), coupled with traditional Italian tempo markings like adagio (slow, stately, at ease) and allegro (fast, quickly, bright). Exact tempo rates can be explicitly specified with an M.M.6 marking. It is not realistic to expect a performer to be able to follow a precise mathematical acceleration. This did not stop modernist composers from finding creative ways to notate surprisingly precise polytempic compositions using only conventional notation:

6 In a musical score, M.M. stands for Maelzel's Metronome, and is accompanied by a number specifying the beats per minute.

1. Groups of tuplets layered against a global tempo, as used by Henry Cowell (Quartet Romantic) and Brian Ferneyhough (Epicycle for Twenty Solo Strings, 1968).
2. Polymeters notated against a global tempo, where the value of a quarter note is the same in both sections, as in Elliott Carter's Double Concerto for Harpsichord and Piano with Two Chamber Orchestras (1961) and George Crumb's Black Angels (1970).
3. Sections notated without meter, with notes positioned horizontally, linearly according to their position in time, as in Conlon Nancarrow (Study No. 8 for Player Piano, 1962) and Luciano Berio (Tempi Concertati).
4. The orchestra divided into groups, with each group given musical passages at varying tempi; the conductor cues groups to begin (Pierre Boulez, Rituel: In Memoriam Maderna, 1974).
5. One master conductor directing the entrances of auxiliary conductors, who each have their own tempo and direct orchestral sections (Henry Brant, Antiphony One for Symphony Orchestra Divided into 5 Separated Groups, 1953).

Charles Ives and The Unanswered Question

One composer, Charles Ives, did write polytempic music before New Musical Resources was published. Ives was an American composer whose works were largely overlooked during his lifetime.
One of these, his 1908 composition The Unanswered Question, is remarkable in that it incorporates both spatial and polytempic elements. In this piece, the string section is positioned away from the stage, while the trumpet soloist and woodwind ensemble are on the stage. A dialogue between the trumpet, flutes, and strings is written into the music, with the trumpet repeatedly posing a melodic question, "The Perennial Question of Existence." Each question is answered by the flute section. The first response is synchronized with the trumpet part, but subsequent responses

accelerate and intentionally desynchronize from the soloist. Ives included a note at the beginning of the score which describes the behavior of "The Answers":

This part need not be played in the exact time position indicated. It is played in somewhat of an impromptu way; if there is no conductor, one of the flute players may direct their playing. The flutes will end their part approximately near the position indicated in the string score; but in any case, "The Last Question" should not be played by the trumpet until "The Silences" of the strings in the distance have been heard for a measure or two. The strings will continue their last chord for two measures or so after the trumpet stops. If the strings shall have reached their last chord before the trumpet plays "The Last Question", they will hold it through and continue after, as suggested above. "The Answers" may be played somewhat sooner after each "Question" than indicated in the score, but "The Question" should be played no sooner for that reason.

Ives gave the performers license over the temporal alignment, but he made it clear that the parts should not be played together.

Gruppen

Ives' polytempic compositions from the first half of the 20th century are somewhat of an exception. Polytempi was not widely explored until well after New Musical Resources was published. One famous example is Karlheinz Stockhausen's Gruppen for three orchestras (1955-57). Managing parallel tempi that come in and out of synchronicity is always a challenge with polytempic music, and Stockhausen found an effective, if heavy-handed, solution with a system of discrete tempo changes. Each of the three orchestras was to have its own conductor, and the conductor would listen for a cue carefully written into one of the other sections. That cue would signal the conductor to begin beating a silent measure at the new tempo and prepare his orchestra to begin playing.
Stockhausen did not say that he was inspired by New Musical Resources directly, but his famous essay "How Time Passes" describes how he chose the tempic ratios used in Gruppen. Instead of basing the tempo scales on simple Pythagorean relationships, Stockhausen chose relationships based on the twelfth root of two, the ratio between adjacent notes in equal-tempered tuning.

Conlon Nancarrow

Conlon Nancarrow is best known for his incredibly complex player piano scores, and is recognized as one of the first composers to realize the potential of technology to perform music beyond

human capacity. Unlike Stockhausen, Nancarrow did acknowledge the influence of Cowell's New Musical Resources on his own works. His compositions for the player piano, beginning with Study for Player Piano No. 21, did incorporate polytempic accelerations.7 While some of Nancarrow's compositions do feature many simultaneous tempi (Study No. 37 features 12 simultaneous tempi),8 a rigorous mathematical approach would be required for all 12 tempi to accelerate or decelerate relative to each other and synchronize at pre-determined points. Interestingly, Nancarrow said in a 1977 interview that he was originally interested in electronic music, but the player piano gave him more temporal control.9

7 Nancy Yunhwa Rao. Cowell's Sliding Tone and the American Ultramodernist Tradition. American Music, 23(3), 2005.
8 John Greschak. Polytempo Music, An Annotated Bibliography, 2003. URL greschak.com/polytempo/ptbib.htm
9 Charles Amirkhanian. An Interview with Conlon Nancarrow. URL https://archive.org/details/am

New Polytempi

The many different approaches to polytempi in modernist music all have one thing in common: they all wrestle with synchronicity. Human performers are not naturally equipped to play simultaneous tempi, and composers must find workarounds that make polytempic performance accessible. The examples described in this chapter exist in one or more of the following categories:

1. The tempo changes are discrete rather than continuous.
2. The music may suggest multiple tempi, but bar lines of parallel measures line up with each other, and the "changing" tempi are within a global tempo.
3. The tempo changes are somewhat flexible, and the exact number of beats that elapse during a transition varies from one performance to another.
4. The tempo acceleration is linear, and parallel parts align only at simple mathematical relationships.
It is not simple to rigorously define parallel tempo curves that accelerate and decelerate continuously relative to each other, and come into synchronicity at strict predetermined musical points for all voices. In chapter 3, we discuss how existing electronic and acoustic music approaches this challenge, and derive a mathematical solution that unlocks a previously inaccessible genre of polytempic music.
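As a toy illustration of the constraint involved (again assuming simple linear ramps, not the continuous curves derived in chapter 3): a linear ramp's average tempo is the mean of its endpoint tempi, so the ramp duration that spans an exact whole number of beats can be solved for directly.

```python
def ramp_duration_for_beats(bpm_start, bpm_end, beats):
    """Seconds needed for a linear tempo ramp to span exactly `beats` beats.

    From beats = duration * (bpm_start + bpm_end) / (2 * 60),
    solved for duration.
    """
    return 120.0 * beats / (bpm_start + bpm_end)

# Ramping 60 -> 90 BPM across exactly 8 beats takes 6.4 seconds; a steady
# 60 BPM voice covers only 6.4 beats in that time, so the two voices meet
# on a shared downbeat only if both beat counts are planned jointly.
dur = ramp_duration_for_beats(60, 90, 8)
```

Even this simplest case shows why the problem compounds: every additional voice and every convergence point adds another simultaneous constraint, which is what motivates the general solution of chapter 3.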

Amplified Spatial Music

The evolution of polytempic music in the Modernist period was paralleled by innovation in the creation and performance of electronic music. After World War II, new technology became available to composers, and with this new technology came new styles of music. Pierre Schaeffer was among the first composers using electronic sounds together with acoustic ones. He worked at the Radiodiffusion-Télévision Française (RTF), where he helped to pioneer early practices in musique concrète. With the help of the younger composer Pierre Henry, he was also among the first to compose spatialized pre-recorded sound. The pair collaborated on a piece called Symphonie pour un Homme Seul (Symphony for One Man Alone, 1950). For this piece they created a tetrahedral loudspeaker arrangement and a rather dramatic interface that they called the Pupitre d'espace, shown in figure 2.2. Large hoops on the device had inductive coils that sensed the user's hand position and controlled the signal routing to the loudspeakers.10

Gesang der Jünglinge

The success of Schaeffer's music at the RTF attracted the attention of other composers interested in electronic music. Among them was Karlheinz Stockhausen. Stockhausen came to RTF and composed just one 2-track etude in 1952 before returning to West Germany, where he continued to compose orchestral and electronic works. His practice led to the composition of what is widely regarded as the first masterpiece of electronic music, Gesang der Jünglinge (Song of the Youths, 1955-56), which was also the first multichannel prerecorded composition to be performed in a concert setting with multiple loudspeakers placed around the audience.11 The piece stands out by the many aspects in which it is both evolutionary and revolutionary when juxtaposed with the other electronic compositions of the time, the delicate blending of the voice with electronics and the creative editing of the voice being two examples.
It has been extensively analyzed and reviewed in the literature;12 however, the exact strategy for the spatialization of sound in the original four-channel performance remains somewhat ambiguous. From interviews and essays with Stockhausen, we can gather some insight into his process.

10 Thom Holmes. Electronic and Experimental Music: Technology, Music, and Culture. Routledge, 2008.
11 D. J. Grout, J. P. Burkholder, and C. V. Palisca. A History of Western Music. W.W. Norton, 7th edition, 2006.
12 Pascal Decroupet, Elena Ungeheuer, and Jerome Kohl (trans.). Through the Sensory Looking-Glass: The Aesthetic and Serial Foundations of Gesang der Jünglinge. Perspectives of New Music, 36(1):97-142; David Joel Metzer. The Paths from and to Abstraction in Stockhausen's Gesang der Jünglinge. Modernism/modernity, 11(4), 2004; and Paul Miller. Stockhausen and the Serial Shaping of Space. PhD thesis, University of Rochester, 2009.

[Figure 2.2: Pierre Schaeffer with the Pupitre d'espace. Ina/Maurice Lecardent, Ina GRM Archives]

In 1955, the year when Stockhausen began work on Gesang der Jünglinge, he published an essay on his serial technique: "By regulating the positions of the sources of sound it will be possible for the first time to appreciate aesthetically the universal

realisation of our integral serial technique."13 In an interview published in 1974 he made the following comment on the subject of sound positioning in Gesang der Jünglinge: "The speed of the sound, by which one sound jumps from one speaker to another, now became as important as pitch once was. And I began to think in intervals of space, just as I think in intervals of pitch or durations. I think in chords of space."14 A side effect of serialism is discouraging the uneven distribution of a musical parameter. With this in mind, spatialization is a very natural target for serialism. Given the added creative flexibility of surround sound, it is reasonable to search for ways to take full advantage of the new dimension, without favoring any particular direction or loudspeaker.

13 Karlheinz Stockhausen. Actualia. Die Reihe, pages 45-51.
14 Karlheinz Stockhausen and Jonathan Cott. Stockhausen: Conversations with the Composer. Pan Books.

[Figure 2.3: Excerpt from the score of Gesang der Jünglinge]

Advances in Surround Panning

For his next four-track tape composition, Kontakte (1958-60), Stockhausen devised a new technology that made it quite simple to continuously pan sounds between the speakers orbiting the listener. He used a rotating speaker on a turntable, surrounded by four equally spaced microphones. Stockhausen continued to feature spatialization prominently in both acoustic and electronic

work. His major orchestral compositions, Gruppen (1955–57, described in section 2.2) and Carré (for four orchestras and four choirs, 1959–60), both prominently feature spatialization. Throughout the rest of the century, advances in technology enabled new performances with more speakers and more complex spatial possibilities. The Vortex multimedia program at the Morrison Planetarium in San Francisco featured 40 loudspeakers with surround sound panning, facilitated by a custom rotary console, and featured works by Stockhausen, Vladimir Ussachevsky, Toru Takemitsu, and Luciano Berio. The planetarium featured synchronized lighting, which became a hallmark of major surround sound productions of the time. The Philips Pavilion at the 1958 Brussels World's Fair used a custom sequencer hooked up to a telephone switcher to pan sounds between over 300 speakers (more in section 2.4). John Chowning's Turenas (1972) simulated amplitude changes and Doppler shift of sound objects' movements as a compositional element.¹⁵ The West German pavilion at Expo 70 in Osaka, Japan, included a dome 28 meters in diameter, five hours of music composed by Stockhausen, and 20 soloist musicians. Stockhausen "performed" the live three-dimensional spatialization from a custom console near the center of the dome (figure 2.4). The West German dome was not the only massive spatialized sound installation at Expo 70. Iannis Xenakis, the mastermind behind the 1958 Philips Pavilion in Brussels, was also presenting his 12-channel tape composition, Hibiki Hana Ma, at the Japanese Steel Pavilion through 800 speakers positioned around the audience, overhead, and underneath the seats. 15 John Chowning. Turenas: The Realization of a Dream.
In Journées d'Informatique Musicale, Université de Saint-Étienne, 2011. Evolution of Electronic Composition Gesang der Jünglinge may have been the first masterpiece of electronic music, but the techniques that were developed at the RTF studios were quickly spreading. Another composer who was drawn to RTF (where musique concrète was first conceived by Pierre Schaeffer) was Pierre Boulez. However, Boulez was generally unsatisfied with his early electronic compositions and frustrated by the equipment required to make electronic music. Despite his general distaste for electronic music composition, Boulez was approached by the French President, Georges Pompidou, in 1970 and asked to found an institution dedicated to the research of modern musical practice. The center, IRCAM, opened in 1977 with Boulez at the head. In a 1993 interview, Boulez described how he directed the efforts of the lab: "Back in the 1950s, when you were recording sounds on tape and

Figure 2.4: Inside the West German pavilion at Expo 70. Osaka, 1970.

using them in a concert, you were merely following the tape, which became very detrimental to the performance. So I pushed the research at IRCAM to examine the use of live electronics, where the computer is created for the concert situation, instantly responding to your actions. The system's language also became easier to follow; I remember when I tried to learn the electronics, it was all figures, figures, figures. These meant nothing at all to the musician. If you have to work in hertz and not notes, and then wait half an hour to process the sounds, you get completely discouraged. My goal was so that the musician could sketch his ideas very rapidly, with instantaneous sound and graphical notation. The use of computers finally brought electronics down to the level of understanding for composers. I feel very responsible for that change."¹⁶ Boulez' first major composition that took advantage of the resources at IRCAM was Répons, which premiered at the Donaueschingen Festival in Germany in 1981 (although Boulez continued to revise it until 1984). The piece balances 24 acoustic performers with prerecorded material and live processing with spatialization over a ring of 38 loudspeakers. The audience sits in a circle surrounding the orchestra, while six of the acoustic instrumentalists are spaced around the outside of the audience. Boulez was certainly not the first composer to mix electronics with acoustic performers (Milton Babbitt's 1964 Philomel is a much earlier example), but Répons does mark a certain maturity of the form. 16 Andy Carvin. The Man Who Would be King: An Interview with Pierre Boulez. 2.4 Iannis Xenakis The projects in this thesis build on the work and ideas of Iannis Xenakis. Xenakis studied music and engineering at the Polytechnic Institute in Athens, Greece. By 1948, he had graduated from the university and moved to France, where he began working for the French architect Le Corbusier.
The job put his engineering skills to use, but Xenakis also wanted to continue studying and writing music. While searching for a music mentor, he approached Olivier Messiaen and asked for advice on whether he should study harmony or counterpoint. Messiaen was a prolific French composer known for rhythmic complexity. He was also regarded as a fantastic music teacher, and his students included Stockhausen and Boulez. Messiaen later described his conversation with Xenakis: "I think one should study harmony and counterpoint. But this was a man so much out of the ordinary that I said: No, you are almost 30, you have the good fortune of being Greek, of being an architect and having studied special mathematics. Take advantage of these things. Do them in your music."¹⁷ In essence, Messiaen was rejecting Xenakis as a student, but we 17 Tom Service. A Guide to Iannis Xenakis's Music, 2013. URL theguardian.com/music/tomserviceblog/2013/apr/23/contemporary-music-guide-xenakis

can see how Xenakis ultimately drew from his disparate skills in his compositions. The score for his 1954 composition Metastasis (figure 2.5) resembles an architectural blueprint as much as it does a musical score. The Philips Pavilion In 1956, Le Corbusier was approached by Louis Kalff (Artistic Director for the Philips corporation) and asked to build a pavilion for the 1958 World's Fair in Brussels. The pavilion was to showcase the sound and lighting potential of Philips' technologies. Le Corbusier immediately accepted, saying: "I will not make a pavilion for you but an Electronic Poem and a vessel containing the poem; light, color, image, rhythm and sound joined together in an organic synthesis."¹⁸ Figure 2.5: Excerpt from Iannis Xenakis' composition, Metastasis (1954). The score in this image was then transcribed to sheet music for the orchestral performance. 18 Oscar Lopez. AD Classics: Expo '58 + Philips Pavilion / Le Corbusier and Iannis Xenakis, 2011. URL http://www.archdaily.com/157658/ad-classics-expo-58-philips-pavilion-le-corbusier-and-iannis-xenakis/

Figure 2.6: The Philips Pavilion at the 1958 Brussels World Fair as shown in Volume 20 of the Philips Technical Review. The final product lived up to Le Corbusier's initial description. It included:¹⁹
1. A concrete pavilion, designed by architect and composer Iannis Xenakis
2. Interlude Sonore (later renamed Concret PH), a tape music composition by Iannis Xenakis, approximately 2 minutes long, played between performances, while one audience left the pavilion and the next audience arrived
3. Poème électronique, a three-channel, 8-minute tape music composition by composer Edgard Varèse
4. A system for spatialized audio across more than 350 loudspeakers distributed throughout the pavilion
5. An assortment of colored lighting effects, designed by Le Corbusier in collaboration with Philips' art director, Louis Kalff
6. Video consisting mostly of black and white still images, projected on two walls inside the pavilion
7. A system for synchronizing playback of audio and video, with light effects and audio spatialization throughout the experience
19 Vincenzo Lombardo, Andrea Valle, John Fitch, Kees Tazelaar, and Stefan Weinzierl. A Virtual-Reality Reconstruction of Poème Électronique Based on Philological Research. Computer Music Journal, 33(2):24–47, 2009. Role of Iannis Xenakis During the initial design stage, Le Corbusier decided that the shape of the pavilion should resemble a stomach, with the audience entering through one entrance and exiting out another. He completed initial sketches of the pavilion layout and then delegated the remainder of the design to Xenakis.²⁰ 20 Joseph Clarke. Iannis Xenakis and the Philips Pavilion. The Journal of Architecture, 17(2), 2012.

The architectural evolution of the pavilion from Le Corbusier's early designs (figure 2.8) to Xenakis' iterations (figure 2.9) illustrates the profound impact that Xenakis had on the project. Xenakis was aware that parallel walls and concave spherical walls could both negatively impact audio perceptibility due to repeated or localized acoustic reflections. The walls of the pavilion also had to accommodate lighting effects, which were projected from many different angles, leading him to consider surfaces with a varying rate of curvature.²¹ Ruled surfaces, such as the conoid and hyperbolic paraboloid, seemed to meet the needs of the project, and also accommodated the acoustical needs. Through this process, we see Xenakis utilizing the skills that he learned at the Polytechnic Institute and continued to develop while working with Le Corbusier. He also understood the mathematical formation of the ruled surfaces that make up the structure. These surfaces even look similar to the Metastasis score (figure 2.5). In his 1963 book, Formalized Music, Xenakis explicitly states that the Philips Pavilion was inspired by his work on Metastasis. 2.5 Architecture and Music in Space and Time In Formalized Music,²² Xenakis describes how developments in music theory mimic equivalent developments in philosophy, mathematics, and the sciences. Plato, for example, believed that all events transpire as determined by cause and effect. While Plato and Aristotle both described causality in their writing, it was not until the 17th century that controlled experiments and mathematics corroborated the theory.²³ Similarly, music theory has historically employed causal rules to describe counterpoint, tonality, and harmonic movement (such as the natural progression of the dominant chord to the tonic). Causality was largely used to describe physical phenomena until the 19th century, when statistical theories in physics began to include probabilistic notions.²⁴
Xenakis noticed that more contemporary fields like probability theory generalize and expand on the antecedent theories of causality. Xenakis thought that music composition should naturally follow the progression that physics did, with music theory generalizing and expanding on the causal rules that had existed previously. Indeed, starting in the late 19th century and early 20th century, composers like Strauss and Debussy began to bend the existing rules of music theory, composing music that branched away from the causal and tonal theories of the time. With the rise of serialism²⁵ and indeterminate music,²⁶ composers such as Stockhausen, Boulez, John Cage, 21 S. Gradstein, editor. The Philips Pavilion at the 1958 Brussels World Fair, volume 20. Philips Technical Review, 1959. Figure 2.7: A ruled surface. For a surface to be considered "ruled," every point on the surface must be on a straight line, and that line must lie on the surface. In Xenakis' time, ruled surfaces were useful in architecture because they simplified the construction of curved surfaces by using straight beams. 22 Iannis Xenakis. Formalized Music. Pendragon Press, 1992. 23 In 1687, Isaac Newton published Philosophiæ Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy), in which he compiled the 3 laws of motion that set the foundation for the study of classical mechanics. 24 The Maxwell-Boltzmann distribution, which was first derived by James Clerk Maxwell in 1860, describes the probability distribution for the speed of a particle within an idealized gas. For more see stanford.edu/entries/statphys-statmech/ 25 Serialism is a technique for musical composition in which instances of musical elements (such as pitch, dynamics, or rhythm) are given numerical values. Sequences built from the values are ordered, repeated, and manipulated throughout the composition.
26 In music, indeterminacy refers to the use of chance (such as rolling dice or flipping coins) as part of the compositional process.

Figure 2.8: Le Corbusier's design sketches for the Philips Pavilion, September–October, 1956 (© 2012 Artists Rights Society, New York/ADAGP, Paris/FLC)

Figure 2.9: Xenakis' sketches for the Philips Pavilion. began to use probability and chance in their compositions the way that physicists were using probabilistic theories to describe the natural world. To Xenakis' mind, serial music was no less causal than the tonal music it intended to supersede, and he described serial music and music theory as a subset of mathematics and algebra. Xenakis wanted to generalize and expand the causal framework that musicians and theorists had been using to compose and

understand music, paralleling similar developments in physics and mathematics. As a reference to chance, or stochos, Xenakis coined the term stochastic music to describe his development. Xenakis' book Formalized Music gives a verbose explanation of stochastic music. Some authors have interpreted his description more explicitly. In Audible Design, Trevor Wishart describes the stochastic process used to compose stochastic music as: "A process in which the probabilities of proceeding from one state, or set of states, to another, is defined. The temporal evolution of the process is therefore governed by a kind of weighted randomness, which can be chosen to give anything from an entirely determined outcome, to an entirely unpredictable one."²⁸ Xenakis' Reflection In the Spring of 1976, while defending his doctoral thesis at the University of Paris, Xenakis emphasized the relevance of seemingly unrelated disciplines to the creative process. A translation of his defense includes this statement: "The artist-conceptor will have to be knowledgeable and inventive in such varied domains as mathematics, logic, physics, chemistry, biology, genetics, paleontology (for the evolution of forms), the human sciences, and history; in short, a sort of universality, but one based upon, guided by, and oriented toward forms and architectures."²⁹ From Xenakis' drawings we can deduce that he used the same tools, skills, and philosophy to imagine and conceive both music and architecture. His approach elevated both forms and blurred the distinction between the two. Perhaps if we had kept using pen and paper to design buildings and write music, the reality today would be closer to the ideal that he imagined. As the ideas that inspired Xenakis and other progressive 20th century composers were taking root in contemporary music, the culture of artistic form and composition was already beginning the transition into the digital domain.
There is no reason why digital tools cannot favor stochastic processes over linearity, but software for composing music tends to favor static pitches over glissandi, while software for architectural design tends to favor corners over curves. This is where the projects described here make a contribution. By drawing from music, mathematics, computer science, acoustics, audio engineering and mixing, sound reinforcement, multimedia production, and live performance, we can create tools that allow us to indiscriminately compose with space and sound. 28 Trevor Wishart. Audible Design. Orpheus The Pantomime Ltd., 1994. 29 L. Russolo. The Art of Noises. Monographs in Musicology. Pendragon Press, 1986.

3 Temporal Domain: Stochastic Tempo Modulation One composer writing rhythmically complex music during the 20th century was Elliott Carter. Carter developed a technique he called tempo modulation, or metric modulation, in which his music would transition from one musical meter to another through a transitional section that shared aspects of both. While metric modulation is a technique for changing meter, and Stochastic Tempo Modulation is a technique for changing tempo, the former led to the latter in a surprising way. Carter's reputation for complexity in music attracted the attention of composer and cellist Tod Machover. While Machover was studying with Carter, he wrote a trio for violin, viola, and cello, in which each instrument would accelerate or decelerate relative to the others. The piece turned out to be so difficult that it was impossible to find anyone who could play it correctly. Faced with this challenge, Machover saw opportunity: "A sort of lightbulb went off... computers are out there, and if you have an idea and can learn how to program, you should be able to model it."¹ If the music is too complex for a human to process, but we can define it formulaically, we can teach a computer to play sounds that a human cannot. Stochastic Tempo Modulation builds on this idea with inspiration from Xenakis. 1 Evan Fein. Q&A With Tod Machover, 2014. URL edu/journal/142/hyperinstruments-crowd-sourced-symphonies 3.1 Stochos In chapter 2 (see figure 2.5) we saw how Xenakis used ruled surfaces in his composition to compose swarms of notes that move together, creating stochastic sonorities. The goal of Stochastic Tempo Modulation is to enable composition with swarms of tempo modulations that move in correlated, cohesive patterns. Music with two or more simultaneous tempos (polytempic music) is itself not a new concept; many examples exist² and were 2 John Greschak. Polytempo Music, An Annotated Bibliography, 2003. URL ptbib.htm

described in chapter 2. Less common is polytempic music where continuous tempo accelerations or decelerations are defined relative to each other. This style of music is well-suited to tape music, because tape machines can play recordings back at variable rates. However, it is difficult to control the exact point (or phase) at which de-synchronized tape becomes re-aligned. Performative music with simultaneous tempi that accelerate and decelerate relative to each other is unusual, but does exist. In a 1971 interview, composer Steve Reich described how he made the transition to performative polytempic music after working on his tape music composition, Come Out: "1966 was a very depressing year. I began to feel like a mad scientist trapped in a lab: I had discovered the phasing process of Come Out and didn't want to turn my back on it, yet I didn't know how to do it live, and I was aching to do some instrumental music. The way out of the impasse came by just running a tape loop of a piano figure and playing the piano against it to see if in fact I could do it. I found that I could, not with the perfection of the tape recorder, but the imperfections seemed to me to be interesting and I sensed that they might be interesting to listen to."³ Reich's experience illustrates what other composers and performers have also encountered: it is quite difficult to perform polytempic music accurately. In Piano Phase, Reich has two performers playing the same 12-note pattern on the piano. After a set number of repetitions through the pattern, one performer begins to play slightly faster until she is exactly one note ahead of the other performer, at which point both performers play at the same rate for a time. This process is repeated and iterated on, creating a live phasing effect without the pitch shifting that would occur when phasing analog tape.
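The phasing move Reich describes can be quantified with simple arithmetic: if one pianist plays at a slightly faster note rate than the other, the lead between them grows linearly until it reaches exactly one note. A minimal sketch (the rates below are hypothetical illustrations, not Reich's performance tempi):

```python
def time_to_one_note_lead(rate_a, rate_b):
    """Seconds until the faster player is exactly one note ahead.

    rate_a -- steady player's rate, in notes per second
    rate_b -- accelerated player's rate, in notes per second (rate_b > rate_a)
    The lead grows at (rate_b - rate_a) notes per second, so a full
    one-note lead takes 1 / (rate_b - rate_a) seconds.
    """
    return 1.0 / (rate_b - rate_a)

# Hypothetical rates: against a partner at 8 notes/sec, a player at
# 8.1 notes/sec gains a full note in about 10 seconds, cycling the
# 12-note pattern roughly 6.75 times while doing so.
seconds = time_to_one_note_lead(8.0, 8.1)
repetitions = seconds * 8.1 / 12
```

The smaller the rate difference, the longer and smoother the phase transition, which is exactly the parameter a performer adjusts by feel.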
If we compare a live performance⁴ with a programmatic rendering⁵ of Piano Phase, we can hear how the latter is able to accelerate more smoothly. The programmatic example spends longer on the transitions where the two parts are out of phase. 3 Michael Nyman. Steve Reich. The Musical Times, 112, 1971. 4 Tine Allegaert and Lukas Huisman. Reich: Piano Phase. URL www.youtube.com/watch?v=i345c6znfm 5 Alexander Chen. Pianophase.com, 2014. URL https://vimeo.com/ 3.2 Objective Steve Reich composed Piano Phase for two performers. Through experimentation, he found that if the music is reasonably simple, two performers can make synchronized tempo adjustments relative to each other well enough to yield compelling results. Stochastic Tempo Modulation allows us to write music with many more simultaneous tempi. However, the requirements are probably too demanding for unassisted performers. Our goal is to compose and audition music where:

1. Swarms of an arbitrary number of simultaneous tempi coexist.
2. Each individual player within the swarm can continuously accelerate or decelerate individually, but also as a member of a cohesive whole.
3. Each musical line can converge and diverge at explicit points. At each point of convergence, the phase of the meter within the tempo can be set.
We start by defining a single tempo transition. Consider the following example (shown in figure 3.1):
* Assume we have 2 snare drum players. Both begin playing the same beat at 90 BPM in common time.
* One performer gradually accelerates relative to the other. We want to define a continuous tempo curve such that one drummer accelerates to 120 BPM.
* So far, we can easily accomplish this with a simple linear tempo acceleration. However, we want the tempo transition to complete exactly when both drummers are on a downbeat, so the combined effect is a 3 over 4 rhythmic pattern. Linear acceleration results in the transition completing at an arbitrary phase.
* We want the accelerating drummer to reach the new tempo after exactly 20 beats.
* We also want the acceleration to complete in exactly 16 beats of the original tempo, so that the drummer playing a constant tempo and the accelerating drummer are playing together.
Figure 3.1: Tempo transition from 90 BPM to 120 BPM. 3.3 Solution We are interested in both the number of beats elapsed in the static tempo and in the changing tempo, as well as the absolute tempo.

If we think of the number of beats elapsed as our position, and the tempo as our rate, we see how this resembles a physics problem. If we have a function that describes our tempo (or rate), we can integrate that function, and the result will tell us our number of beats elapsed (or position). Given the above considerations, our tempo curve is defined in terms of 5 constants:
* Time $t_0 = 0$, when the tempo transition begins
* A known time, $t_1$, when the tempo transition ends
* A known starting tempo: $\dot{x}_0$
* A known finishing tempo: $\dot{x}_1$
* The number of beats elapsed in the changing tempo between $t_0$ and $t_1$: $x_1$
The tension of the tempo curve determines how many beats elapse during the transition period. The curve is well-defined for some starting acceleration $a_0$ and finishing acceleration $a_1$, so we define the curve in terms of linear acceleration. Using Newtonian notation we can describe our tempo acceleration as:
$$\ddot{x} = a_0 + \frac{a_1 - a_0}{t_1} t \tag{3.1}$$
Integrating linear acceleration (3.1) yields a quadratic velocity curve (3.2). The velocity curve describes the tempo (in beats per minute) with respect to time.
$$\dot{x} = \dot{x}_0 + a_0 t + \frac{a_1 - a_0}{2 t_1} t^2 \tag{3.2}$$
We must specify the same time units for input variables like $t_1$ and $\dot{x}_1$. I prefer minutes for $t_1$ and beats per minute for $\dot{x}_1$ over seconds and beats per second. Integrating velocity (3.2) gives us a function describing position (the number of beats elapsed with respect to time).
$$x = x_0 + \dot{x}_0 t + \frac{a_0}{2} t^2 + \frac{a_1 - a_0}{6 t_1} t^3 \tag{3.3}$$
With equations (3.2) and (3.3), we can solve for our two unknowns, $a_0$ and $a_1$. First we solve both equations for $a_1$, evaluating each at $t = t_1$ (with $x_0 = 0$):
$$a_1 = \frac{2(\dot{x}_1 - \dot{x}_0)}{t_1} - a_0 \qquad a_1 = \frac{6(x_1 - \dot{x}_0 t_1)}{t_1^2} - 2 a_0$$
Assuming $t_1 \neq 0$, we solve this system of equations for $a_0$:
$$a_0 = \frac{6 x_1 - 2 t_1 (\dot{x}_1 + 2 \dot{x}_0)}{t_1^2} \tag{3.4}$$
Evaluating (3.4) with our constants gives us our starting acceleration. Once we have $a_0$ we can solve (3.2) for $a_1$, and evaluate (3.2) with $a_0$ and $a_1$ to describe our changing tempo with respect to time.
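The derivation above can be checked by implementing it directly. The sketch below (my own illustration, not the thesis software; variable names `v0`, `v1` stand in for the starting and finishing tempi written with dotted x in the text) solves for the two accelerations and verifies the figure 3.1 example: accelerating from 90 to 120 BPM, exactly 20 beats elapse in the time of 16 beats at 90 BPM.

```python
def transition_accelerations(x1, v0, v1, t1):
    """Solve eq. (3.4) for a0, then eq. (3.2) at t = t1 for a1.

    x1 -- beats elapsed in the changing tempo during the transition
    v0 -- starting tempo in BPM
    v1 -- finishing tempo in BPM
    t1 -- transition duration in minutes
    """
    a0 = (6 * x1 - 2 * t1 * (v1 + 2 * v0)) / t1 ** 2  # eq. (3.4)
    a1 = 2 * (v1 - v0) / t1 - a0                      # eq. (3.2) solved for a1
    return a0, a1

def tempo(t, v0, a0, a1, t1):
    """Eq. (3.2): instantaneous tempo (BPM) at time t (minutes)."""
    return v0 + a0 * t + (a1 - a0) / (2 * t1) * t ** 2

def beats(t, v0, a0, a1, t1):
    """Eq. (3.3) with x0 = 0: beats elapsed by time t (minutes)."""
    return v0 * t + (a0 / 2) * t ** 2 + (a1 - a0) / (6 * t1) * t ** 3

# Figure 3.1 example: 90 -> 120 BPM, with 20 changing beats fitted into
# the time of 16 beats at 90 BPM (16/90 of a minute).
t1 = 16 / 90
a0, a1 = transition_accelerations(20, 90, 120, t1)
assert abs(tempo(t1, 90, a0, a1, t1) - 120) < 1e-9  # lands on the target tempo
assert abs(beats(t1, 90, a0, a1, t1) - 20) < 1e-9   # exactly 20 beats elapsed
```

Because the tempo curve is quadratic and the beat count cubic, both end-point constraints (final tempo and final beat count) can be satisfied simultaneously, which a linear tempo ramp cannot do.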

3.4 Stochastic Transitions Equipped with the equations from the previous section, it becomes quite simple to create swarms of parallel tempos that are correlated and complex. In figure 3.2, we build on the previous example. Here, each additional tempo curve is calculated the same way, except for x1 (the number of beats in our accelerating tempo during the transition), which is incremented for each additional tempo line. Figure 3.2: Stochastic Tempo Transition from 90 BPM to 120 BPM. Black dots are beats in our changing tempi. Grey dots show a continuation of beats at the initial tempo. This pattern clearly exhibits controlled chance that Xenakis would describe as stochastic. On the very first beat at t = 0, all parallel parts are aligned. Beats 2 and 3 can be heard as discrete rhythmic events, but become increasingly indistinct. The end of beat 4 then overlaps with the start of beat 5, before articulated beats transition to pseudo-random noise. By beat 13 of the static tempo, the chaos of the many accelerating tempi begins to settle back into order before returning to complete synchronicity at t = 16. 3.5 Tide: Composition with Stochastic Tempo Modulation An earlier version of the equation derived here was packaged as a patch for the Max 6 graphical programming language. Composer Bryn Bliska developed a user interface and used it in the composition of Tide.
While this earlier version of the equation did not support a variable x1 parameter, Tide uses Stochastic Tempo Modulation to drive the phasing tempos of the bell-like synthesizers throughout the piece for two to three simultaneous tempi.⁷ 7 Available online: mit.edu/~holbrow/mas/tidebliska_Holbrow.wav 3.6 Recent Polytempic Work

Many commercial and research projects deal with different ways to manipulate rhythm and tempo. Flexible digital audio workstations (DAWs) like Cockos Reaper⁸ and MOTU Digital Performer⁹ include features for auditioning tracks or music-objects with unique simultaneous tempi, and individual tempos can even be automated relative to each other. However, the precise nonlinear tempo curves that are required for the syncopated musical content to synchronize correctly after a transition completes are not possible in any DAW we tried. Audio programming languages like Max and SuperCollider¹⁰ could be used to create tempo swarms, but require equations like the ones defined in section 3.3. One project, Realtime Representation and Gestural Control of Musical Polytempi,¹¹ demonstrates an interface for generating polytempic music, but is not intended for or capable of generating coordinated or stochastic tempi swarms. The Beatbug Network¹² is described as a multi-user interface for creating stochastic music, but is focused on "beats," or musical rhythmic patterns, and timbres, rather than tempi. Stochos¹³ is a software synthesizer for generating sound using random mathematical distributions, but is also not designed to work with simultaneous tempos or even as a rhythm generator. Finally, Polytempo Network¹⁴ is a project that facilitates the performance of polytempic music, but does not aid the composition thereof. 8 reaper.fm 9 motu.com/products/software/dp 11 Chris Nash and Alan Blackwell. Realtime Representation and Gestural Control of Musical Polytempi. In New Interfaces for Musical Expression, pages 28–33, Genova, Italy, 2008. 12 Gil Weinberg, Roberto Aimi, and Kevin Jennings. The Beatbug Network - A Rhythmic System for Interdependent Group Collaboration. In Proceedings of the International Conference on New Interfaces for Musical Expression, Dublin, Ireland, 2002. URL http://www.nime.org/proceedings/2002/nime2002_186.pdf 13 Sinan Bokesoy and Gerard Pape. Stochos: Software for Real-Time Synthesis of Stochastic Music. Computer Music Journal, 27(3):33–43, 2003.
14 Philippe Kocher. Polytempo Network: A System for Technology-Assisted Conducting. In Proceedings of the International Computer Music Conference, 2014.


4 Spatial Domain: Reflection Visualizer It was Xenakis' goal for the curved surfaces of the Philips Pavilion to reduce the sonic contribution of sound reflections as much as possible.¹ He knew that reflections and the resulting comb filtering could impair intelligibility and localization of music and sounds. The pavilion was to have hundreds of loudspeakers, and large concave surfaces like the ones on the inside of the pavilion can have a focusing effect on acoustic reflections, resulting in severe filtering and phase cancellations.² If Xenakis had been able to model the reflections and compose them directly into the piece, what would the tools be like, and how would his architectural spaces be different? The Xenakis-inspired Reflection Visualizer is an abstract software tool for experimenting with architectural acoustic lenses. It is intended more as an experiment for architectural or musical brainstorming than as a simulation for the analysis of sound propagation. For example:
1. It illustrates sound projection in only two dimensions.
2. It is frequency independent. Real surfaces reflect only wavelengths much smaller than the size of the reflector.³
3. Diffraction is ignored.
4. Acoustic sound waves of higher frequencies propagate more directionally than lower frequencies. This property is ignored.
1 S. Gradstein, editor. The Philips Pavilion at the 1958 Brussels World Fair, volume 20. Philips Technical Review, 1959. 2 Martijn Vercammen. The reflected sound field by curved surfaces. The Journal of the Acoustical Society of America, 123(5):3442, 2008. 3 Zhixin Chen and Robert C. Maher. Parabolic Dish Microphone System, 2005. URL http://www.coe.montana.edu/ee/rmaher/publications/maher_aac-.85.pdf 4.1 Implementation The Reflection Visualizer was implemented as a web app using the HTML5 Paper.js⁴ vector graphics library. Try Reflection Visualizer online at. Click and drag on any black dot to move the object.
Black dots connected by grey lines are handles that re-orient (instead of move) objects. 4

On reflection surfaces, the handles adjust the angle and shape of the surface curve. Handles connected to sound sources adjust the angle and length of the sound beams. Figure 4.1: Reflection Visualizer user interface. 4.2 Reflection Visualizer Architectural Example Assume we are creating the floor plan for a new architectural space and an accompanying electronic music performance. The piece incorporates spatial features in the style of SOUND=SPACE by Rolf Gehlhaar: our audience moves through the performance space, and as they move, the sound changes, making the music experience unique to every visitor. We would like to use acoustic reflections to manipulate the sound in space, such that at certain points the sound is focused on the listener. When we hear an acoustic sound reflection off a concave surface, the sound can arrive at our ears in two possible states:
1. The path of the sound from the source to the reflecting surface to our ears is equidistant for each point on the reflecting surface. Ignoring any direct sound, the reflection arrives in phase, and the surface acts as an acoustic amplifier of the reflection.
2. The path of the sound from the source to the reflecting surface to our ears is slightly different for each point on the surface. All the reflections arrive out of phase with each other.
We can use the Reflection Visualizer tool to prototype potential layouts and gain some intuition about our focal points. The curved

black line in the user interface (figure 4.1) represents a reflective surface. The black dot with emanating red lines represents a sound source, and the red lines represent sound propagation. Each red line emanating from a sound source is the same length, no matter how many times it has been reflected. If it is possible to adjust the length of the red lines such that each one ends at the same spot, reflections will arrive at that spot in phase. Figures 4.2 and 4.3 show how we can adjust the curve of a surface to focus reflections on a point.

Figure 4.2: Reflections from a loudspeaker arriving out of phase.
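The two arrival states above have a simple geometric reading: reflections arrive in phase exactly when every source-to-surface-to-listener path has the same length. An ellipse with the source and listener at its two foci satisfies this by construction, which a few lines of code can confirm (the axis lengths and point count below are arbitrary illustrative choices, not part of the Reflection Visualizer):

```python
import math

def path_length(src, pt, lst):
    """Length of the path source -> reflection point -> listener."""
    return math.dist(src, pt) + math.dist(pt, lst)

# Ellipse with source and listener at the two foci: every reflected
# path has length 2a, so all reflections reach the listener in phase.
a, b = 5.0, 3.0                        # semi-major / semi-minor axes
c = math.sqrt(a * a - b * b)           # focal distance
source, listener = (-c, 0.0), (c, 0.0)

surface = [(a * math.cos(2 * math.pi * i / 64),
            b * math.sin(2 * math.pi * i / 64)) for i in range(64)]
lengths = [path_length(source, p, listener) for p in surface]

spread = max(lengths) - min(lengths)   # ~0: the surface acts as an amplifier
```

A surface that is not such an ellipse produces unequal path lengths, and the reflections arrive out of phase, matching the second state above.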

Figure 4.3: By adjusting the curvature of the reflective surface, we can focus the audio reflections.

Figure 4.4: A musical composition.

The red emanating lines can also be thought of as stochastic pitch swarms, similar to those Xenakis wrote for Metastasis in 1954 (figure 2.5).

Figure 4.5: The Reflection Visualizer.

5 The Hypercompressor

The inspiration for the Hypercompressor came during the development of Vocal Vibrations, an interactive music installation about the human voice and about engaging the public in singing.1 The project featured a musique concrète composition, The Chapel by Tod Machover, which was mixed in a 10-channel surround sound format and played throughout the installation. During the mixing process, I discovered an important surround sound tool missing from my mixing workflow. When mixing in mono or stereo, audio compression lets us meticulously shape and balance sounds in time. I found myself wishing I could shape and position sounds in space just as easily.

1 Charles Holbrow, Elena Jessop, and Rebecca Kleinberger. Vocal Vibrations: A Multisensory Experience of the Voice. Proceedings of the International Conference on New Interfaces for Musical Expression, 2014. URL http://…/2014/nime2014_378.pdf

Unless noted otherwise, "compression" is used in this thesis to describe dynamic range compression, as opposed to data compression.

5.1 Building on the Compression Paradigm

The design, implementation, and use of traditional dynamic range compression are well documented in the literature,2 so we will describe dynamic range compression only to the extent needed to explain the foundation for Hypercompression. Imagine we are mixing a pop vocal performance, and during the verse our vocalist sings moderately loud, or mezzo-forte. At the beginning of the chorus, our singer wants a full and powerful sound, so she adjusts the dynamic to very loud, or fortissimo; however, the new louder dynamic disrupts the balance between the vocals and the other instruments in our mix. We like the powerful sound of our singer's fortissimo performance, but our balance would be improved if we had the volume of a forte performance instead. One option is to manually turn down the vocalist during the chorus, which in some cases is the best solution.
When we want more precise control, we can use a compressor.

2 Dimitrios Giannoulis, Michael Massberg, and Joshua D. Reiss. Digital Dynamic Range Compressor Design - A Tutorial and Analysis. Journal of the Audio Engineering Society, 60(6):399-408, 2012; Alex Case. Sound FX: Unlocking the Creative Potential of Recording Studio Effects. Focal Press, 2007; and Emmanuel Deruty, Francois Pachet, and Pierre Roy. Human-Made Rock Mixes Feature Tight Relations. Journal of the Audio Engineering Society, 62(10), 2014.

Traditional Compression

A compressor is essentially an automated dynamic volume control. Most compressors include at least four basic parameters in the user interface that allow us to customize their behavior: threshold, ratio, attack time, and release time. We can send our vocalist's audio signal through a compressor, and whenever her voice exceeds the gain level set by our threshold parameter, the signal is automatically attenuated. As the input signal further exceeds the threshold level, the output is further attenuated relative to the input signal. The ratio parameter determines the relationship between the input level and output level, as shown in figure 5.1.

Threshold and ratio settings are essential for controlling dynamic range, but the power and creative flexibility of the compressor come with the attack time and release time parameters. These parameters determine the speed at which the compressor attenuates (attack time) and disengages (release time) when the input signal exceeds the threshold. By adjusting the attack and release times, we can change the temporal focus of the compressor. Consider the following examples:

Figure 5.1: "Compression ratio" by Iain Fergusson (output level in dB vs. input level in dB). Licensed under Public Domain via Wikimedia Commons: https://commons.wikimedia.org/wiki/File:Compression-ratio.svg

* Perhaps we want the compressor to engage or disengage at the time scale of a musical phrase. We could set our attack time long enough to let transients through without engaging the compressor significantly (try 20 milliseconds). If our release time is quite long (try 300 milliseconds), and we set our threshold and ratio carefully, we might be able to convince the compressor to smooth musical phrases.

* If we want our compressor to focus on syllables instead of phrases, we can shorten our attack and release times (try 10 milliseconds and 40 milliseconds, respectively).
When the compressor engages and disengages at each syllable, it imparts a different quality (sometimes described as "punch").

* If we reduce our attack and release parameters enough, we can instruct our compressor to engage and disengage at the time scale of an audio waveform, compressing individual cycles. This will distort an audio signal, adding odd-order harmonics,3 and imparting an entirely different quality.

The attack and release times listed here are a rough guide only. The exact function of these parameters varies from one model of compressor to another, and results also depend on the audio input material, as well as the threshold and ratio settings. The results of audio compression can sometimes be characterized better by a feeling than a formula.

3 Not every compressor model can react quickly enough to distort a waveform. The Dbx 160 and Teletronix LA2A are known to be fast enough to distort.
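The threshold, ratio, attack, and release behavior described above can be sketched as a feed-forward gain computer. This is a generic textbook design, not the Hypercompressor's code; the per-sample peak detector, dB-domain smoothing, and default settings are illustrative assumptions:

```python
import math

def compress(samples, sample_rate, threshold_db=-20.0, ratio=4.0,
             attack_ms=10.0, release_ms=40.0):
    """Feed-forward dynamic range compressor (peak detection, dB domain)."""
    # One-pole smoothing coefficients derived from the attack/release times.
    attack_coef = math.exp(-1.0 / (sample_rate * attack_ms / 1000.0))
    release_coef = math.exp(-1.0 / (sample_rate * release_ms / 1000.0))
    reduction_db = 0.0  # smoothed gain reduction, always <= 0
    out = []
    for x in samples:
        level_db = 20.0 * math.log10(max(abs(x), 1e-9))
        overage = level_db - threshold_db
        # Above the threshold, keep only 1/ratio of the overage.
        target_db = overage * (1.0 / ratio - 1.0) if overage > 0.0 else 0.0
        # Move quickly when attenuation deepens (attack), slowly
        # when it relaxes (release).
        coef = attack_coef if target_db < reduction_db else release_coef
        reduction_db = coef * reduction_db + (1.0 - coef) * target_db
        out.append(x * 10.0 ** (reduction_db / 20.0))
    return out
```

With a -20 dB threshold and a 4:1 ratio, a sustained 0 dBFS input settles toward -15 dBFS: the 20 dB overage is reduced to 5 dB above the threshold.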

Figure 5.2 (block diagram of a simple monophonic dynamic range compressor): the audio input feeds a level detector and a gain control stage that produces the audio output; an optional side-chain input feeds the detector; the user interface exposes threshold, ratio, attack time, and release time.

Side-Chain Compression

Compressors often have an additional operational mode that is the primary inspiration for Hypercompression. As previously discussed, traditional compressors automatically reduce the gain of a signal that exceeds a given threshold. Some compressors instead allow us to attenuate the level of a signal when a different signal exceeds the threshold level. Models that support side-chain compression have a second audio input. When we switch the compressor into side-chain mode, the compressor attenuates the first signal only when the second signal exceeds the threshold.

Side-chain compression is often used to moderate the balance of kick drum and bass guitar. If the bass guitar is briefly attenuated just enough each time the kick drum hits, we can set the kick and bass guitar at exactly the gain levels we want without one masking the other. Because the bass guitar is only briefly attenuated, it will not be perceived as any quieter. In this example, we use the kick drum to create a gain envelope for our bass guitar. The kick pushes the bass to make room for itself. The attack time and release time parameters give control over this behavior in the temporal domain. The next step is to expand this model to add control in the spatial domain.

5.2 Ambisonics

Ambisonics is a technique for encoding and decoding three-dimensional surround sound audio.4 Ambisonic audio differs from discrete-channel surround sound formats such as 5.1 and

4 Michael Gerzon. Periphony: With-Height Sound Reproduction. Journal of the Audio Engineering Society, 21(1):2-10, 1973; and Michael Gerzon. Ambisonics in Multichannel Broadcasting and Video. Journal of the Audio Engineering Society, 33(11):859-871, 1985.

7.1, in that it does not depend on a particular speaker configuration. An ambisonic recording can be decoded on many different surround speaker configurations without disarranging the spatial contents of the audio recording.

Imagine we use an omnidirectional microphone to record an acoustic instrument at a sample rate of 44.1 kHz. We sample and record 44,100 samples every second that represent the air pressure at the microphone capsule during the recording. Our omnidirectional microphone is designed to treat sound arriving from all angles equally. Consequently, all directional acoustic information is lost in the process. If we want to encode, decode, transmit, or play audio that preserves full-sphere 360 degree information, ambisonics offers a solution. Ambisonic audio uses spherical harmonics to encode surround sound audio that preserves the direction-of-arrival information that discrete channel recordings (such as mono and stereo) cannot fully capture.

Spherical Harmonics

We know that we can construct any monophonic audio waveform by summing a (possibly infinite) number of harmonic sine waves (Fourier series).5 For example, by summing odd-order sine harmonics of a given frequency f (1f, 3f, 5f, 7f, ...), we generate a square wave with fundamental frequency f. As the order increases, so does the temporal resolution of our square wave. By summing sinusoidal harmonics, we can generate any continuous waveform defined in two dimensions (one input parameter and one output). Similarly, by summing spherical harmonics, we can generate any continuous shape defined over the surface of a three-dimensional sphere (two input parameters, or polar angles, and one output). Where a traditional monophonic audio encoding might save one sample 44,100 times per second, an ambisonic encoding would save one sample for each spherical harmonic 44,100 times per second. This way we capture a three-dimensional sound image at each audio sample.
The number of spherical harmonics we encode is determined by our ambisonic order. As our ambisonic order increases, so does the angular resolution of our result on the surface of the sphere.

5 An excellent description of the transformation between the time domain and frequency domain can be found at …com/articles/an-interactive-guide-to-the-fourier-transform/

Spherical Harmonic Definition

For encoding and decoding ambisonics, the convention is to use the real portion of spherical harmonics as defined in equation 5.1, where:

* Y_n^m(φ, θ) is a spherical harmonic that is:
  - of order, n
  - of degree, m
  - defined over polar angles (φ, θ)
* N_n^|m| is a normalization factor.6
* P_n^|m| is the associated Legendre function of order n and degree m.

    Y_n^m(φ, θ) = N_n^|m| P_n^|m|(sin θ) · { sin(|m|φ), for m < 0
                                           { cos(|m|φ), for m ≥ 0        (5.1)

Some literature on spherical harmonics swaps the names of order and degree. In this thesis we use Y_n^m with the subscript naming the order and the superscript the degree. In literature where the opposite convention is used, the function of the subscript and superscript remains unchanged; only the names are inconsistent.

6 In ambisonic literature (and software), there are multiple incompatible conventions for the normalization of spherical harmonics. The Hypercompressor uses the Furse-Malham (FuMa) normalization convention.

Given equation 5.1, we can define an ambisonic audio recording as:

    f(φ, θ, t) = Σ_{n=0}^{N} Σ_{m=-n}^{n} Y_n^m(φ, θ) a_nm(t)        (5.2)

Where:

* φ and θ describe the polar angle of sound arrival in two dimensions.7
* t is time.
* a_nm(t) are our expansion coefficients, described below.

7 Note that ambisonics uses polar angles to describe the angle of arrival of sound. These are similar to spherical coordinates, minus the inclusion of a radial distance term.

Figure 5.3: Spherical harmonics, 0th order (top row) through 3rd order (bottom row). This image shows the output of Y_n^m(φ, θ) for n = 0, n = 1, n = 2, and n = 3. The distance of the surface from the origin shows the value at that angle. Darker blue regions are positive, while lighter yellow regions are negative. Image credit: Inigo Quilez, licensed under Creative Commons Attribution-Share Alike 3.0 Unported.
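The Fourier-series analogy from the Spherical Harmonics discussion (odd sine harmonics summing toward a square wave) is easy to verify numerically, and it mirrors the ambisonic-order statement: adding higher-order terms sharpens the result. The 4/(πk) amplitudes are the standard square-wave Fourier weights; the sample point and orders chosen are arbitrary:

```python
import math

def square_approx(t, f, max_order):
    """Sum the odd sine harmonics of f (1f, 3f, 5f, ...) up to max_order."""
    return sum((4.0 / (math.pi * k)) * math.sin(2.0 * math.pi * k * f * t)
               for k in range(1, max_order + 1, 2))

f = 100.0                 # fundamental frequency in Hz
t = 1.0 / (4.0 * f)       # quarter period, where the ideal square wave is +1
errors = [abs(square_approx(t, f, order) - 1.0) for order in (1, 7, 63)]
# The error shrinks as more harmonics are summed, just as higher
# ambisonic orders yield finer angular resolution on the sphere.
```

By symmetry, the approximation half a period later is the exact negative of the value at t, as a square wave requires.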

Spherical Harmonic Expansion Coefficients

In our monophonic recording example, we save just one digital sample 44,100 times per second, with each saved value representing the air pressure at a point in time. We know that by summing the correct combination of spherical harmonics, we can describe any continuous function over the surface of a sphere. Instead of sampling air pressure directly, we sample a coefficient describing the weighting of each spherical harmonic 44,100 times per second. The resulting sphere encodes the pressure, including the direction-of-arrival information. These weighting coefficients, or expansion coefficients, are recorded in our audio file instead of values representing air pressure directly. Now, by summing together our weighted spherical harmonics, we can reconstruct the fluctuations in pressure, including the angle-of-arrival information. We can recall this snapshot of information at our 44.1 kHz audio sample rate.

Ambisonic Encoding

There are two ways to create an ambisonic recording. First, we can use a soundfield microphone to record an acoustic soundfield. Soundfield microphones, like the one developed by Calrec Audio, can capture angle-of-arrival information with the spatial resolution of first order ambisonics.8 Alternatively, we can algorithmically encode pre-recorded sources, creating virtual sources in an ambisonic bus.9

8 Ken Farrar. Soundfield Microphone: Design and development of microphone and control unit. URL http://…/wireless-world-farrar.pdf
9 D. G. Malham and A. Myatt. 3-D sound spatialization using ambisonic techniques. Computer Music Journal, 19(4):58-70, 1995.

Ambisonic Conventions used for Hypercompression

This thesis follows ambisonic convention for describing axes of rotation: the x-axis points forward, the y-axis points left, and the z-axis points up. Polar angles are used to describe orientation, with 0° azimuth being forward, and increasing as we move to the right.
0° elevation also points forward and increases as we move upward, with 90° being straight up along the z-axis. When working with ambisonics, multiple incompatible conventions exist for ordering and normalizing spherical harmonics.10 The Hypercompressor uses Furse-Malham normalization (FuMa)11 and first order ambisonics with B-format12 channel ordering. B-format ordering labels the four first order ambisonic channels as W, X, Y, and Z, with W being the spherical harmonic of order zero and degree zero, and X, Y, and Z being the pressure gradient components along their respective axes.

10 Christian Nachbar, Franz Zotter, Etienne Deleflie, and Alois Sontacchi. AMBIX - A Suggested Ambisonics Format. In Ambisonics Symposium 2011, 2011.
11 D. G. Malham. Higher order Ambisonic systems, 2003.
12 Florian Hollerweger. An Introduction to Higher Order Ambisonics, 2008.
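Under these conventions, encoding a virtual source is just a per-sample weighting by the four first order spherical harmonics evaluated at the panning direction. A minimal sketch of FuMa B-format encoding follows; the function name is mine, and the azimuth sign convention (increasing toward the +y axis) is an assumption that varies between implementations:

```python
import math

def encode_bformat(sample, azimuth, elevation):
    """Pan one mono sample to first order FuMa B-format (W, X, Y, Z).

    Axes follow the thesis convention: x forward, y left, z up.
    Azimuth 0 and elevation 0 point forward.
    """
    return [sample * math.sqrt(2.0) / 2.0,                     # W: order 0, -3 dB FuMa weight
            sample * math.cos(azimuth) * math.cos(elevation),  # X: front/back gradient
            sample * math.sin(azimuth) * math.cos(elevation),  # Y: left/right gradient
            sample * math.sin(elevation)]                      # Z: up/down gradient

# A source panned straight ahead lands only in W and X:
print(encode_bformat(1.0, 0.0, 0.0))   # → [0.7071..., 1.0, 0.0, 0.0]
```

Summing the channel vectors of several encoded sources produces a single B-format mix, which is how virtual sources share one ambisonic bus.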

5.3 Hypercompressor Design

Figure 5.4 (Hypercompressor block diagram): an ambisonic input and an ambisonic side-chain input feed a level detector, which reports the threshold overage and its polar angles (φ, θ); the gain control applies an ambisonic transform matrix to produce the audio output; the user interface exposes threshold, ratio, attack time, and release time.

The Hypercompressor (or ambisonic compressor) combines the traditional model of compression with the surround sound capability of ambisonics. Given an ambisonic input and an optional ambisonic side-chain input, the ambisonic compressor is intended to process input material in one of two modes:

1. Standard mode: We set a compression threshold, similar to a traditional compressor. When a region in our surround sound input material exceeds the set threshold, the compressor engages and attenuates only that region.

2. Side-chain mode: This mode takes advantage of a second ambisonic input to our signal processor. When the gain of a spatial region in our secondary input exceeds our threshold, we attenuate that same region in the main input and output the results.

In both modes, our ambisonic compressor must attenuate and then release attenuation according to the attack time and release time parameters. The block diagram for the Hypercompressor (figure 5.4) can remain mostly unchanged from the block diagram for our traditional compressor in figure 5.2. The most important changes are:

* Our audio signals must be updated to handle encoded ambisonics. This is as simple as increasing the number of channels on each solid black connection in figure 5.2. The Hypercompressor works with first order ambisonics, so every audio path must carry four audio channels.

* On a traditional compressor, the level detector only needs to detect the difference between the gain of the input signal and the gain specified by the threshold parameter. Our ambisonic level detector needs to decode the incoming signals and identify both a threshold overage and the region where the overage occurred.

* Our gain control module needs to listen to the input coming from the level detector module and be able to attenuate the specific regions that exceed our threshold parameter.

Level Detection Module

In Spatial Transformations for the Alteration of Ambisonic Recordings, Matthias Kronlachner describes one approach for making a visual ambisonic level meter:13

1. Choose a series of discrete points distributed on the surface of a sphere. Ideally the points are equally distributed, so the vertices of platonic solids like the dodecahedron (12-sided polyhedron) and icosahedron (20-sided polyhedron, figure 5.5) work well. For spatial accuracy, Kronlachner recommends a spherical t-design with 240 points described by Hardin and Sloane.14

2. Evaluate each spherical harmonic at every point chosen. Cache the results in a matrix.

3. With the cached spherical harmonics, it is then possible to calculate the root mean square (RMS) and peak values more efficiently at the audio rate.

13 Matthias Kronlachner. Spatial Transformations for the Alteration of Ambisonic Recordings. Master's thesis, Graz University of Technology, 2014a.
14 R. Hardin and N. Sloane. McLaren's Improved Snub Cube and Other New Spherical Designs in Three Dimensions. Discrete & Computational Geometry, 15:429-441, 1996.

A level meter does not need to refresh the display at the audio sample rate, so it is acceptable to interpolate between the points on the sphere and update the graphical representation at the control rate, which could be as slow as 30 Hz (approximately every 33 milliseconds).
A similar approach can be used to make an ambisonic level detector; however, a compressor needs to react much more quickly than a level meter. The compressor cannot even begin to engage until the level detector has responded, and attack times faster than 33 milliseconds are common in conventional compression. Every point on the sphere requires a buffer to calculate the RMS. We also need to decode ambisonics at the audio sample rate and keep track of peak values. Ideally we would also interpolate between the points.

Figure 5.5: An icosahedron.
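A toy version of such a detector caches one decode-gain vector per sampled direction, then accumulates RMS per direction over a block of B-format frames. The virtual-cardioid decode and the coarse six-direction grid are illustrative assumptions of this sketch; Kronlachner's meter uses a much denser spherical t-design:

```python
import math

def cardioid_gains(azimuth, elevation):
    """Decode gains for a virtual cardioid aimed at (azimuth, elevation),
    for FuMa B-format (W, X, Y, Z) with x forward, y left, z up."""
    return [math.sqrt(2.0) / 2.0,   # undo the FuMa -3 dB weight on W
            0.5 * math.cos(azimuth) * math.cos(elevation),
            0.5 * math.sin(azimuth) * math.cos(elevation),
            0.5 * math.sin(elevation)]

# Six sampled directions: front, rear, left, right, top, bottom.
DIRECTIONS = [(0.0, 0.0), (math.pi, 0.0), (math.pi / 2, 0.0),
              (-math.pi / 2, 0.0), (0.0, math.pi / 2), (0.0, -math.pi / 2)]
GAINS = [cardioid_gains(az, el) for az, el in DIRECTIONS]  # cached matrix

def detect(frames, threshold):
    """Return (rms, direction) of the loudest sampled direction when it
    exceeds `threshold`, else None. `frames` is a list of [W, X, Y, Z]."""
    best = None
    for gains, direction in zip(GAINS, DIRECTIONS):
        acc = sum(sum(g * c for g, c in zip(gains, frame)) ** 2
                  for frame in frames)
        rms = math.sqrt(acc / len(frames))
        if best is None or rms > best[0]:
            best = (rms, direction)
    return best if best[0] > threshold else None
```

The compressor would engage only when `detect` reports an overage, applying attack and release smoothing to the resulting gain reduction exactly as in the monophonic case.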

Figure 5.6: Calculation of the cylindrical projection of a single ambisonic panned source in the Wolfram Mathematica software package.

An Efficient Level Detection Module

The Hypercompressor needs to detect the level of our ambisonic input material and identify (as quickly as possible) when and where the signal exceeds the compressor threshold. In the interest of computational efficiency, the first level detector I wrote attempted to extract overage information with minimal ambisonic decoding and signal processing.

1. In this level detector, we calculate the RMS average at the center of six lobes corresponding to the first order spherical harmonics: front, rear, left, right, top, and bottom.

2. Calculate a map of the influence of each lobe on the surround sound image (figures 5.6, 5.7). For example, pan a monophonic sound directly forward in an ambisonic mix, and cache an image of the resulting sound sphere. Save one image for each of the six lobes.

3. We now have six images, each representing one of the six lobes of our first order ambisonic spherical harmonics. In step 1, we calculated the RMS level at each of the corresponding points on our surround sphere. Use the six RMS levels to weight each of our six maps. The sum of the weighted maps shows the gain distributed across our ambisonic sphere.

Efficient Level Detection Module Results

If the input to the level detector is encoded as an ambisonic plane wave, this level detector does yield accurate results. In the more common case,

when our ambisonic input material contains multiple sources that are each ambisonically panned to different positions, this interpolation technique does not accurately calculate the RMS at any angle. In simple cases, where we can be sure our input material is appropriate, the technique described here might be useful. For greater spatial resolution (at the expense of performance), the approach described in 5.4 will be more effective.

Figure 5.7: Influence maps of three first order spherical harmonics: left, top, and front. Pure white is 0 dBFS, black is -inf dBFS. Cylindrical projection.

Figure 5.8: The Hypercompressor visualizer written for the efficient ambisonic level detector. The surround sphere is projected to a cylinder and unwrapped on the flat surface. In this image, a monophonic source is panned slightly down and to the right (45° azimuth, -45° elevation).

Ambisonic Gain Control Module

The spherical harmonics defined in equation 5.2 form a set of orthogonal basis functions. If we define a sequence for our spherical harmonics and spherical harmonic expansion coefficients, we can treat a set of expansion coefficients as a vector, and perform matrix operations on them that rotate, warp, and re-orient our three-dimensional surround sound image.15 The ability to mathematically warp and manipulate our surround sound image makes ambisonics the perfect choice for implementing a surround sound compressor.

15 Hannes Pomberger and Franz Zotter. Warping of 3D Ambisonic Recordings. International Symposium on Ambisonics and Spherical Acoustics, 3, 2011.

The Focus Transform

One transform that lets us attenuate a region of the surround sound sphere is the focus transform distributed as part of the open source Ambisonic Toolkit (ATK).16

16 Joseph Anderson. Introducing... the Ambisonic Toolkit. In Ambisonics Symposium, 2009.

    F(w) = [ 1/(1+sin|w|)         sin(w)/(√2(1+sin|w|))   0                  0
             √2·sin(w)/(1+sin|w|) 1/(1+sin|w|)            0                  0
             0                    0                       cos(w)/(1+sin|w|)  0
             0                    0                       0                  cos(w)/(1+sin|w|) ]   (5.3)

This transform is intended to focus attention on the region directly in front of the listener (0° azimuth, 0° elevation), by attenuating the region in the opposite direction and gently warping the surround sound image toward the front. w is a value between 0 and π/2 radians and specifies the intensity of the transformation. When w = 0, the surround field is unchanged. When w = π/2, sounds panned hard to the rear are muted, sounds panned to the left and right are attenuated by 6 dB, the entire surround sound image is warped to the front, and the gain facing forward is unchanged. This enables us to push one sound out of the way in order to make room for another sound, as described in section 5.4. Equation 5.3 attenuates the region behind the listener. If we want to attenuate a region other than the rear, we can rotate F using a rotation matrix like the one below.

    Rz(φ) = [ 1   0        0       0
              0   cos(φ)   sin(φ)  0
              0  -sin(φ)   cos(φ)  0
              0   0        0       1 ]        (5.4)

Equation 5.4 (from the ATK) describes a rotation around the z-axis by φ radians. To rotate the focus transform to the right instead of the front, we first apply the focus transform to the inverse of a 90° right rotation. Then we apply the 90° rotation matrix to the result. This example is generalized by:

    X(w, φ, θ) = Rz(φ) Ry(θ) F(w) Ry⁻¹(θ) Rz⁻¹(φ)        (5.5)

Equation 5.5 lets us programmatically generate an ambisonic focus transform matrix that targets a specified region of the surround field, fulfilling the objectives for our ambisonic gain control module in the Hypercompressor.
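The focus and rotation transforms can be sketched numerically. The matrix entries below follow the ATK focus transform as applied to FuMa-weighted B-format; the exact √2 factors are tied to that normalization convention and should be treated as an assumption of this sketch:

```python
import math

def focus(w):
    """Focus transform for FuMa B-format (W, X, Y, Z)."""
    d = 1.0 + math.sin(abs(w))
    s, c, r2 = math.sin(w), math.cos(w), math.sqrt(2.0)
    return [[1.0 / d,    s / (r2 * d), 0.0,   0.0],
            [r2 * s / d, 1.0 / d,      0.0,   0.0],
            [0.0,        0.0,          c / d, 0.0],
            [0.0,        0.0,          0.0,   c / d]]

def rot_z(phi):
    """Rotation about the z-axis by phi radians."""
    c, s = math.cos(phi), math.sin(phi)
    return [[1.0, 0.0, 0.0, 0.0], [0.0, c, s, 0.0],
            [0.0, -s, c, 0.0], [0.0, 0.0, 0.0, 1.0]]

def matmul(m1, m2):
    return [[sum(m1[i][k] * m2[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply(m, frame):
    """Apply a 4x4 transform to one B-format frame [W, X, Y, Z]."""
    return [sum(m[i][j] * frame[j] for j in range(4)) for i in range(4)]

# At full intensity (w = pi/2) a source panned hard to the rear
# (W = 0.707, X = -1) is muted, while a frontal source is unchanged.
F = focus(math.pi / 2.0)
rear = [math.sqrt(2.0) / 2.0, -1.0, 0.0, 0.0]
front = [math.sqrt(2.0) / 2.0, 1.0, 0.0, 0.0]
print([round(abs(v), 6) for v in apply(F, rear)])   # → [0.0, 0.0, 0.0, 0.0]
print([round(v, 6) for v in apply(F, front)])       # → [0.707107, 1.0, 0.0, 0.0]
```

Conjugating `focus` with rotation matrices via `matmul` is exactly how a re-aimed transform targeting an arbitrary region would be assembled.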

Ambisonic Gain Control Module Results

The focus transform lets us warp the surround field, pushing the field to make room for new sounds. In some cases (for example, when mastering an ambisonic recording), warping the surround sound image is undesirable, and a simple directional gain transform should be used instead (an appropriate transform is defined elsewhere18). However, the goal of the Hypercompressor is not to compress dynamic range like a traditional compressor. The goal is to compress space. The focus transform is a compromise: we partly attenuate a region, but we also bend the surround sound image so that important parts of our surround texture are panned to a position with fewer competing sounds. This is an effect that is not possible with a traditional compressor. The focus transform also ties the attenuation amount to the attenuation radius. If we use only a single focus transform, it is not possible to only slightly attenuate a large region of the surround sound field. The following chapter describes how we used this to our advantage during the live performance of De L'Expérience.

18 Matthias Kronlachner. Warping and Directional Loudness Manipulation Tools for Ambisonics, 2014b. URL http://…/wp-content/uploads/2013/10/eaa_2014_Kronlachner-Zotter.pdf

6 De L'Expérience

De L'Expérience is a composition by Tod Machover in eight sections for narrator, organ, and electronics. The piece was commissioned by the Orchestre Symphonique de Montréal (OSM) and premiered at the Maison Symphonique de Montréal on May 16th, 2015. The text for the piece was taken from the writings of Michel de Montaigne, the 16th century philosopher known for popularizing the essay form. Performers included Jean-Willy Kunz, organist in residence with the OSM, and narrator Gilles Renaud. A recording of the performance has been made available online.1

1 http://web.media.mit.edu/~holbrow/mas/todmachoverofexperience_Premier.wav

The Organ

The live performance of De L'Expérience presented a unique challenge that fits well with the themes in this thesis. The acoustic pipe organ can project sound into space unlike any array of loudspeakers. This is especially true for an instrument as large and magnificent as the Pierre Béique Organ in the OSM concert hall, which has 6489 pipes and extends to approximately 10 meters above the stage. Our objective was to blend the sound of the organ with the sound of the electronics.

6.1 Electronics

The electronics in the piece included a mix of synthesizers, pre-recorded acoustic cello, and other processed material from acoustic and electronic sources, all composed by Tod Machover. Prior to the performance, these sounds were mixed ambisonically:

1. The cello was placed in front, occupying approximately the front hemisphere of our surround sound image.

2. The left and right channels of the electronic swells were panned to the left and right hemispheres. However, by default they

were collapsed to omnidirectional mono (the sound comes from all directions, but has no stereo image). The gain of this synth was mapped to directionality, so when the synth grows louder, the left and right hemispheres become distinct from each other, creating an illusion that the sound is growing larger.

3. Additional sound sources were positioned in space such that each has as wide an image as possible, but overlaps with the others as little as possible.

The overarching goal of this approach was to create a diverse and interesting spatial arrangement while keeping sounds mostly panned in the same spot: movement comes from the warping of the surround image by the Hypercompressor.

Sound Reinforcement

Loudspeakers were positioned throughout the OSM concert hall. A number of factors went into the arrangement: audience coverage, surround coverage, rigging availability, and setup convenience. All speakers used were by Meyer Sound.2 A single CQ-2 was positioned just behind and above the narrator to help localize the image of his voice. JM-1P speakers on stage left and stage right were also used for the voice of the narrator, and incorporated into the ambisonic playback system. Ten pairs of UPJ-1Ps were placed in the hall, filling in the sides and rear for ambisonic playback: two at the back of the hall, mirroring the CQ-2s on stage, and four on each of the first and third balconies. The hall features variable acoustics, and curtains can be drawn into the hall to increase acoustic absorption and decrease reverb time. These were partially engaged, striking a balance: the reduced reverb time improved the clarity of the amplified voice, while only marginally impacting the beautiful acoustic decay of the organ in the hall. The show was mixed by Ben Bloomberg.
Ambisonic playback and multitrack recording of the performance were made possible with the help and expertise of Fabrice Boissin and Julien Boissinot and the Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT) at McGill University.

2 A note on composition, performance, and engineering: no amount of engineering can compensate for poor composition, orchestration, or performance. A skilled engineer with the right tools can only mitigate shortcomings in a performance. Good engineering starts and ends with good composition, arrangement, and performance. I have been quite fortunate that all the musicians involved with De L'Expérience at every stage are of the highest caliber.

6.2 Live Hypercompression Technique

During the performance, the encoded ambisonic electronic textures were patched into the main input of the Hypercompressor before being decoded in realtime using the Rapture3D Advanced ambisonic decoder by Blue Ripple Sound.3 Four microphones captured the sound of the organ: two inside and two hanging in front. The placement of the mics was intended to capture as much of the sound of the organ as possible, and as little of the sound of the amplified electronics in the hall. These four microphone signals were encoded to ambisonics in realtime, and the resulting ambisonic feed was patched into the side-chain input of the Hypercompressor. In this configuration, the organ drives the spatialization of the electronic sounds. By ambisonically panning the organ microphones, we can control how our electronics are spatialized. After some experimentation, we discovered the best way to apply the Hypercompressor in the context of De L'Expérience. When the organ was played softly, the sound of the electronics filled the performance hall from all directions. As the organ played louder, the electronic textures dynamically warped toward the organ at the front of the concert hall. The spatial and timbral movement of the electronics, together with the magnificent (but stable) sound of the organ, created a unique blend that would be inaccessible with acoustic or electronic sounds in isolation.

3 http://…/products/rapture-3d-advanced

Figure 6.1: The Pierre Béique Organ in the OSM concert hall during a rehearsal on May 15th, 2015. Approximately 97% of the organ's 6489 pipes are out of sight behind the woodwork. Photo credit: Ben Bloomberg.

7 Discussion and Analysis

In the previous chapters we explored three new tools for creating and processing music, including their motivations and implementations. Stochastic Tempo Modulation in chapter 3 proposed a mathematical approach for composing previously inaccessible polytempic music. The Reflection Visualizer in chapter 4 introduced an interface for quickly sketching abstract architectural and musical ideas. Chapters 5 and 6 described the motivation for, and implementation of, a new technique for moving music in space and time. Each of these projects builds on Iannis Xenakis' theory of stochastic music and incorporates elements from other disciplines, including mathematics, computer science, acoustics, audio engineering and mixing, sound reinforcement, multimedia production, and live performance. This final chapter discusses how each project succeeded, how each project failed, and how future iterations can benefit from lessons learned during the development process.

7.1 Evaluation Criteria

To evaluate a project of any kind, it is helpful to begin with a purpose, and then determine the granularity and scope of the evaluation.1 We might evaluate a music recording for audio fidelity, for the musical proficiency of the artist, for emotional impact or resonance, for narrative, for technological innovation, for creative vision, or for political and historical insight. Similarly, we can evaluate the suitability of an analog to digital converter (ADC) for a given purpose. If our purpose is music recording, we might prefer different qualities than if our purpose is electrical engineering. A recording engineer might prefer that the device impart a favorable sound, while an acoustician may prefer that the device be as neutral as possible. In the evaluation of a music recording, and the evaluation of an

1 Jerome H. Saltzer and M. Frans Kaashoek. Principles of Computer System Design: An Introduction. Morgan Kaufmann, Burlington, MA, 2009.

ADC, we concern ourselves with only the highest-level interface: when evaluating a music recording, we listen to the sound of the recording, but we do not evaluate the performance of the ADC used to make the recording. Evaluation is simplified when we consider fewer levels of abstraction. Stochastic music theory is a vertical integration of mathematics, the physics of sound, psychoacoustics, and music. The theory of stochastic music begins with the lowest-level components of sound and ends with a creative musical product. What is a reasonable perspective from which to evaluate stochastic music? From the perspective of listening to or performing the music? From the perspective of a historian, evaluating the environment that led to the composition or studying its impact on the music that followed? Should we try to make sense of the entire technology stack, or try to evaluate every layer of abstraction individually? Somewhere between the low-level elements of sound and a musical composition or performance, we transition from what is numerically quantifiable to what we can only attempt to describe. In my evaluation, I focus on two qualities. First, I study how each project achieved its original objectives and how it fell short. Second, I consider how each project can influence or inspire future iterations. I avoid comparative analysis or evaluation based on any kind of rubric. Instead, I evaluate the results of each project according to its own motivations and historical precedents.

7.2 Stochastic Tempo Modulation

Chapter 3 presents a very pure and elegant solution to a very complex problem. But is it important? Is it a significant improvement on the existing techniques presented in section 2.2? If a performer cannot play precise tempo curves anyway, what is this actually for? Western polytempic music as defined in chapter 3 has existed for only slightly more than a century, and there is certainly room for new explorations.
The oldest example of Western polytempic music is by Charles Ives in his 1906 piece, Central Park in the Dark.²

² John Greschak. Polytempo Music: An Annotated Bibliography, 2003. URL http:// com/polytempo/ptbib.htm

In the piece, the string section represents nighttime darkness, while the rest of the orchestra interprets the sounds of Central Park at night. Beginning at measure 64, Ives leaves a note in the score, describing how the orchestra accelerates while the string section continues at a constant tempo:

From measure 64 on, until the rest of the orchestra has played measure 118, the relation of the string orchestra's measures to those of the other instruments need not and cannot be written down

exactly, as the gradual accelerando of all but the strings cannot be played in precisely the same tempi each time.

Ives acknowledges that there is no existing notation to exactly describe the effect he wants, and that musicians are not capable of playing the transition in a precise way. In this example, it is not important that the simultaneous tempi have a precise rhythmic relationship. Ives' use of parallel tempi is a graceful one. He achieves a particular effect without requiring the musicians to do something as difficult as accelerate and decelerate relative to each other, and then resynchronize at certain points. All polytempic compositions must grapple with the issue of synchronicity, and many demand more precision than Central Park in the Dark. Stockhausen's Gruppen uses polytempi very aggressively, going to great lengths to ensure that the three orchestras rhythmically synchronize and desynchronize in just the right way. If Stockhausen had been able to control the synchronicity of the tempi precisely, it seems likely that he would have wanted to try it. Some music (and perhaps stochastic music in particular) may be more interesting or influential from a theoretical perspective than for the music itself in isolation. It could be that the possibilities unlocked through the equations derived in chapter 3 are not different enough from the approximations used by Nancarrow and Cage, or that it is unrealistic to direct performers to play them accurately enough to perceive the difference. However, it is surprising that current digital tools for composition do not let us even try Stochastic Tempo Modulation. If we want to hear what tempo transitions like the ones described here sound like using digital technology, there is no software that lets us do so, and we are still forced to approximate.
Audio programming languages like Max and SuperCollider let us code formulaic tempi into our compositions, but equations like the ones derived here are still required. I could not find any existing technique for creating swarms of tempo accelerations that fit the constraints described in chapter 3, or any musical example that claims to have found another solution. In some cases approximation is perfectly acceptable: if a musician is incapable of playing the part, we are also likely incapable of hearing the subtleties that distinguish an approximation from a perfect performance. However, if we want large collections of simultaneous polytempi, like the ones shown in figures 3.2 and 3.3, the approximations possible with transcriptions, or the approximations of unassisted human performers, are not precise enough.
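The constraint at the heart of this problem can be illustrated with a deliberately simplified variant: if each voice ramps its tempo linearly, then fixing the start tempo, the total beat count, and the total duration forces the end tempo, so any number of voices can drift apart and still land on a shared downbeat. The sketch below uses this linear simplification; the function names are hypothetical, and the thesis derives more general tempo curves than this.

```python
# Sketch: a "swarm" of linear tempo ramps that start together, drift apart,
# and resynchronize on a shared downbeat. A simplified stand-in for the
# chapter 3 derivation, not the thesis's exact equations.
import math

def click_times(start_bpm, total_beats, duration):
    """Times (seconds) of each beat for a voice that ramps linearly from
    start_bpm to whatever end tempo makes it land exactly on beat
    `total_beats` after `duration` seconds."""
    a = start_bpm / 60.0                  # start tempo, beats/sec
    b = 2.0 * total_beats / duration - a  # end tempo forced by the constraint
    assert b > 0, "start tempo too fast for this beat count and duration"
    k = (b - a) / duration                # linear acceleration, beats/sec^2
    times = []
    for n in range(total_beats + 1):
        if abs(k) < 1e-12:                # constant tempo: beats(t) = a*t
            times.append(n / a)
        else:                             # beats(t) = a*t + k*t^2/2 = n
            times.append((-a + math.sqrt(a * a + 2.0 * k * n)) / k)
    return times

# Eight voices at eight different starting tempi, all constrained to play
# 32 beats in exactly 16 seconds: they begin and end in unison, and
# desynchronize everywhere in between.
swarm = [click_times(bpm, 32, 16.0) for bpm in (96, 104, 112, 120, 128, 136, 144, 152)]
for clicks in swarm:
    print(round(clicks[-1], 6))  # every voice lands on beat 32 at t = 16.0
```

Voices that start slower than the average tempo accelerate and voices that start faster decelerate, which is exactly the coordinated swarm behavior the figures in chapter 3 depict.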

Future Directions

Bryn Bliska's composition (linked in section 3.5) is a good starting point for future explorations, but it was composed with an earlier version of the polytempic equation that did not allow for large swarms of tempi. In its current state, the polytempic work described in this thesis is just a beginning. We have not yet tried to compose a piece that fully incorporates Stochastic Tempo Modulation. Equations alone do not make a musical instrument, and composition is difficult without a musical interface. There are a few modern examples of polytempic projects (see chapter 3), but I could not find any examples of interfaces for composing with a coordinated mass of tempi. The most exciting direction for this project is the creation of new musical interfaces for composing and manipulating stochastic tempo swarms.

7.3 Reflection Visualizer

This project provides a single abstract interface that approaches the composition of space (architecture) and the composition of music at the same time. The forms it makes are familiar from the ruled surfaces seen in Xenakis' compositions and early sketches of the Philips Pavilion. From a musical perspective, we can think of the x and y axes as representing time and pitch. From an architectural perspective, the canvas might represent the floor plan of spaces we are designing. While it is interesting to switch our perspective between the two modes, there is not a clear connection from one to the other. A carefully designed surface or reflection in one mode would be quite arbitrary in the other mode. The interface is capable of working in both modes only because it is so abstract that it does not commit to either one. This is not a complete failing: the tool was really designed to be a brainstorming aid at the very beginning of the design process.
It can be much simpler and quicker to use than proper architectural software as a means of creating abstract shapes, similar to sketching on paper, before turning to specialized software for more detailed design.

Curves, Constraints, and Simplicity

Despite the limitations of this project, the parts that worked well form a strong base for future iterations. There is something simple and fun about the user interface. There is only one input action: dragging a control point. It is immediately clear what each control point does when it is moved. It is easy to not even notice that there are five different types of control points and each has slightly

different behavior. It is very intuitive to adjust a reflection surface so that the red beams focus on a certain point, and then readjust the surface so that they diverge chaotically. There is something fascinating about how these simple movements intuitively produce coordinated or chaotic stochastic results. The red "sound lines" have three degrees of freedom: position, direction, and length. We can point the rays in any direction we like, but their movement is somewhat constrained. The projection angle is locked to 30 degrees and the number of beams is always eight; most of the flexibility of the interface comes from the reflective surfaces.

Stochastic by Default

The Reflection Visualizer interface makes it easier to draw a curving reflective surface than a straight one. If you make a special effort, it is possible to make one of the surfaces straight, but just like drawing a line on paper with a pen, curved surfaces come more naturally. The curves in the Reflection Visualizer come naturally not because they follow an input gesture, as in most "drawing" interfaces, but because of the simple mathematics of the Bézier curves. If we consider the red lines to be notes on a time/pitch axis, the default interpretation is stochastic glissandi rather than static pitches. Most musical software assumes static pitches by default, and most architectural software assumes straight lines.

Future Directions

The obvious next steps for this project involve correcting the shortcomings described above. It could be made to work in three dimensions and to model precise propagation of sound rather than a very simplified abstraction: it could become a proper acoustical simulator. Another possibility is turning it into a compositional or performative musical instrument where we can hear the stochastic glissandi in realtime.
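Either extension rests on the same geometric core the prototype already performs: evaluating a point and normal on a Bézier surface, then mirroring an incoming ray about that normal. A minimal two-dimensional sketch of that operation follows; the code is hypothetical, not the thesis implementation.

```python
# Sketch: specular reflection of a 2-D "sound ray" off a quadratic Bezier
# surface -- the geometric core of the Reflection Visualizer. Hypothetical
# stand-in, not the thesis code.
import math

def bezier_point(p0, p1, p2, t):
    """Point on the quadratic Bezier curve (p0, p1, p2) at t in [0, 1]."""
    u = 1.0 - t
    return tuple(u * u * a + 2 * u * t * b + t * t * c
                 for a, b, c in zip(p0, p1, p2))

def bezier_normal(p0, p1, p2, t):
    """Unit normal at t, i.e. the tangent rotated 90 degrees."""
    dx = 2 * (1 - t) * (p1[0] - p0[0]) + 2 * t * (p2[0] - p1[0])
    dy = 2 * (1 - t) * (p1[1] - p0[1]) + 2 * t * (p2[1] - p1[1])
    mag = math.hypot(dx, dy)
    return (-dy / mag, dx / mag)

def reflect(d, n):
    """Mirror direction d about unit surface normal n: r = d - 2(d.n)n."""
    dot = d[0] * n[0] + d[1] * n[1]
    return (d[0] - 2 * dot * n[0], d[1] - 2 * dot * n[1])

# Parallel rays travelling straight down strike a gently bulging surface.
# At the apex the ray bounces straight back; off-apex rays scatter, which
# is exactly the focus/diverge behavior seen in the interface.
p0, p1, p2 = (0.0, 0.0), (1.0, 0.5), (2.0, 0.0)
for t in (0.25, 0.5, 0.75):
    n = bezier_normal(p0, p1, p2, t)
    print(bezier_point(p0, p1, p2, t), reflect((0.0, -1.0), n))
```

Because the normal varies continuously along the curve, dragging one control point smoothly redirects all eight beams at once, which is where the coordinated-yet-chaotic feel of the interface comes from.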
These options are not necessarily mutually exclusive, but as the interface becomes tailored to a more specific application, our ability to think about the content as an abstract representation also breaks down. The ideal of software that is equally well-equipped to compose music and to imagine architectural spaces is probably unrealistic. Any visual representation of music is quite abstract, and different visual representations can encourage us to think about music in new and unusual ways. For example, each red line can be considered a pitch, but it can also be considered its own time axis. By calculating the red paths, we can create many time axes that follow similar but slightly different trajectories. Alternatively, each red line can be thought of as a time axis for an individual

pitch. When the lines collide with a curved surface after traveling slightly different distances, the result represents an arpeggiated chord. In contrast, a non-arpeggiated chord is represented when the red lines all collide with a surface after traveling identical distances. The abstract nature of this interface leaves room for our imagination to interpret unexpected new musical possibilities.

7.4 Hypercompression

The design and development of Hypercompression happened in parallel with pre-production for De L'Expérience, and the Hypercompressor was, in part, tailored to the needs of a somewhat unique situation. The resulting project leaves significant design questions surrounding ambisonic dynamic range compression unanswered. For example: What is the best way to detect and attenuate a region of our surround sphere that has an unusual or elongated shape? Should the compressor attempt to attenuate only that narrow region? Should we attenuate the center of the region more than the edge? When a region of our surround sound image exceeds the Hypercompressor's threshold, the compressor warps the surround image in addition to attenuating the region where the threshold overage occurred. This makes sense for side-chain compression, but is less applicable to standard compression. We could have chosen only warping, or only attenuation, each of which represents its own compromise:

- We could simply warp all sounds away from a region that exceeds the compression threshold without attenuating them at all. However, doing so would increase the perceived level of the sound coming from the opposite direction. We also run the risk of creating a sonic "ping-pong" of sounds arbitrarily panning. This can sound exciting, but quickly becomes a contrivance or gimmick.
- If we simply attenuate a region that exceeds the threshold, we are not taking advantage of the opportunities provided to us by surround sound in the first place.
In side-chain mode, we risk hiding a compressed sound completely when we could simply warp that region of the surround field to a location where it can be heard more clearly. The current implementation also does not handle the case in which two separate regions of the surround field both exceed the threshold.
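To make the detect-and-attenuate half of this design concrete, the sketch below operates on a single first-order horizontal B-format frame: a virtual cardioid microphone aimed at the region of interest supplies the level detector, a standard gain computer decides the attenuation, and the over-threshold portion of the beam is subtracted back out of the frame. This is a hypothetical simplification (FuMa weighting, hard knee, no envelope smoothing), not the Hypercompressor's actual code.

```python
# Sketch: detect the level in one direction of a first-order horizontal
# ambisonic (B-format, FuMa weighting) frame and attenuate just that
# region -- a toy version of the Hypercompressor's basic move.
import math

SQRT2 = math.sqrt(2.0)

def encode(s, az):
    """Encode mono sample s at azimuth az (radians) into (W, X, Y)."""
    return (s / SQRT2, s * math.cos(az), s * math.sin(az))

def beam(wxy, az):
    """Virtual cardioid aimed at az: responds 1.0 on-axis, 0.0 behind."""
    w, x, y = wxy
    return 0.5 * (SQRT2 * w + x * math.cos(az) + y * math.sin(az))

def gain_db(level_db, threshold_db, ratio):
    """Downward compressor gain computer (hard knee): dB of gain change."""
    over = level_db - threshold_db
    return -over * (1.0 - 1.0 / ratio) if over > 0 else 0.0

# A loud source at 90 degrees and a quiet one at 270, mixed into one frame.
frame = [a + b for a, b in zip(encode(1.0, math.pi / 2),
                               encode(0.2, 3 * math.pi / 2))]

az = math.pi / 2                               # region under compression
level = 20 * math.log10(abs(beam(frame, az)))  # detector: ~0 dB
g = 10 ** (gain_db(level, threshold_db=-6.0, ratio=4.0) / 20)

# Subtract the over-threshold part of the beam, re-encoded at az. The
# cardioid's rear null means the 270-degree source is left untouched.
delta = (1.0 - g) * beam(frame, az)
frame = [f - e for f, e in zip(frame, encode(delta, az))]

print(round(beam(frame, az), 4), round(beam(frame, 3 * math.pi / 2), 4))
```

A per-direction structure like this is also the natural place to hang the open questions above: the beam's aim and width are exactly the "region shape" parameters the current implementation leaves fixed.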

De L'Expérience

The main goal of using the Hypercompressor was to blend the electronic textures with the sound of the Pierre Béique organ in Tod Machover's composition. The chosen approach was to give the electronics a sense of motion that the organ (whose sound is awe-inspiring, but also somewhat static) cannot produce; thus the electronics can be heard moving around the sound of the organ, rather than being required to compete with it. The first attempt at this goal, however, did not go as planned. The electronics were mixed to occupy as much of the surround sound sphere as possible, filling the entire room with sound. My original idea was to spatially separate the organ and the electronics by connecting them to the Hypercompressor in side-chain mode. When the organ was playing, it would push the sound of the electronics to the back of the room, making it easier to hear both timbres without either masking the other. During the De L'Expérience rehearsal, this was the first approach I tried, but the resulting surround texture had a different problem: the sound of the organ and the sound of the electronics were too separate. They did not blend with each other in space, but existed as two clearly distinct sources. I arrived at the solution described in chapter 5 only after first trying the exact opposite of the final approach. While I had to revise my strategy during the rehearsal, I consider the Hypercompressor to have aided the blending of the organ and electronics especially well. It is important to note that the beautiful blend of sounds we achieved would not have been possible without many other contributing factors, such as the expert composition of the electronic textures.

Future Directions

The next step is to make a fully featured surround compressor with options for warping and attenuation.
Parametric control over the width and spatial resolution of the regions to be attenuated could also help turn the Hypercompressor into a general-purpose tool useful in a wide variety of situations. A more creative path would be to add additional directional effects that can be modulated by the gain of audio content in the surround image. For example, when a region of the surround sphere exceeds our threshold, we could apply a sliding delay to that region. We could also offer a selection of different effects that can all be modulated by the positional gain of the sound sphere. We could even add spatial detection of other sound qualities and create a modulation matrix. For example, we might use a high-frequency filter to modulate a phasing effect. This would involve detecting the regions of the sound sphere that

have the most high-frequency content, and then proportionally applying a phasing effect to those regions. By expanding on the paradigm of compression in these ways, we unlock possibilities for previously unimaginable surround soundscapes.

7.5 Stochos

Hypercompression is a complete realization of a musical idea. Beginning with an objective and a mathematical foundation, we designed and built a custom software implementation and applied it in a live performance context. A study of the process has revealed what is probably the greatest strength of stochastic music theory: the vertical integration of the theory of sound and music lets us study music from a privileged perspective, while the controlled chance built into the system helps us uncover possibilities that could not be found by conventional means. In the case of Hypercompression, we move sounds in space based on matrix transforms that are themselves driven by the controlled chance of a random performance. The angular positions of our sounds are defined by both explicit mathematical formulas and the unpredictable qualities of live performance. From a broad perspective, all three projects emerged "by chance" in the same way. Each one is the result of musical exploration in a space that indiscriminately draws from mathematics, computer science, acoustics, audio engineering and mixing, sound reinforcement, multimedia production, and live performance. By treating all of these disciplines as components of music theory, we discover new musical patterns and possibilities for shaping sound in time and space.

Epilogue

In 2004, the Culture 2000 Programme, created by the European Union, approved a grant to an Italian multimedia firm for a project called Virtual Electronic Poem (VEP).³ The project proposed the creation of a virtual reality experience in which users could enter a simulated version of the famous Philips Pavilion. While developing the VEP, the design team went through the archives of Xenakis, Le Corbusier, and Philips, uncovering every relevant bit of information in order to make the experience as real as possible.⁴ Virtual reality technology changed so much between 2004 and 2015 that reviving the VEP project today would likely involve an additional multimedia archeology expedition as intensive as the first: it would probably be easier (and more effective) to start from scratch using the original documentation. A common problem with multimedia performances is that technology changes so fast that it quickly becomes very difficult to restore even moderately recent projects.⁵ In contrast, the mathematical language that Xenakis used to describe his work is as well-established as the language of Western music notation, and for this reason we have a surprisingly thorough understanding of his music today. It is my hope that the documentation in this thesis will provide an equally dependable and enduring description of the process of modern musical composition.

³ Culture 2000 Programme Results. Technical report, European Union Culture 2000 Programme, 2004. URL http://ec.europa.eu/culture/tools/documents/culture-2000/2004/heritage.pdf
⁴ Vincenzo Lombardo, Andrea Valle, John Fitch, Kees Tazelaar, and Stefan Weinzierl. A Virtual-Reality Reconstruction of Poème Électronique Based on Philological Research. Computer Music Journal, 33(2):24-47, 2009.
⁵ Vincenzo Lombardo, Andrea Valle, Fabrizio Nunnari, Francesco Giordana, and Andrea Arghinenti. Archeology of Multimedia. In ACM International Conference on Multimedia, 2006.


More information

Introduction to Instrumental and Vocal Music

Introduction to Instrumental and Vocal Music Introduction to Instrumental and Vocal Music Music is one of humanity's deepest rivers of continuity. It connects each new generation to those who have gone before. Students need music to make these connections

More information

Chapter 23. New Currents After Thursday, February 7, 13

Chapter 23. New Currents After Thursday, February 7, 13 Chapter 23 New Currents After 1945 The Quest for Innovation one approach: divide large ensembles into individual parts so the sonority could shift from one kind of mass (saturation) to another (unison),

More information

DEPARTMENT/GRADE LEVEL: Band (7 th and 8 th Grade) COURSE/SUBJECT TITLE: Instrumental Music #0440 TIME FRAME (WEEKS): 36 weeks

DEPARTMENT/GRADE LEVEL: Band (7 th and 8 th Grade) COURSE/SUBJECT TITLE: Instrumental Music #0440 TIME FRAME (WEEKS): 36 weeks DEPARTMENT/GRADE LEVEL: Band (7 th and 8 th Grade) COURSE/SUBJECT TITLE: Instrumental Music #0440 TIME FRAME (WEEKS): 36 weeks OVERALL STUDENT OBJECTIVES FOR THE UNIT: Students taking Instrumental Music

More information

Stochastic synthesis: An overview

Stochastic synthesis: An overview Stochastic synthesis: An overview Sergio Luque Department of Music, University of Birmingham, U.K. mail@sergioluque.com - http://www.sergioluque.com Proceedings of the Xenakis International Symposium Southbank

More information

Proceedings of the 7th WSEAS International Conference on Acoustics & Music: Theory & Applications, Cavtat, Croatia, June 13-15, 2006 (pp54-59)

Proceedings of the 7th WSEAS International Conference on Acoustics & Music: Theory & Applications, Cavtat, Croatia, June 13-15, 2006 (pp54-59) Common-tone Relationships Constructed Among Scales Tuned in Simple Ratios of the Harmonic Series and Expressed as Values in Cents of Twelve-tone Equal Temperament PETER LUCAS HULEN Department of Music

More information

PLANE TESSELATION WITH MUSICAL-SCALE TILES AND BIDIMENSIONAL AUTOMATIC COMPOSITION

PLANE TESSELATION WITH MUSICAL-SCALE TILES AND BIDIMENSIONAL AUTOMATIC COMPOSITION PLANE TESSELATION WITH MUSICAL-SCALE TILES AND BIDIMENSIONAL AUTOMATIC COMPOSITION ABSTRACT We present a method for arranging the notes of certain musical scales (pentatonic, heptatonic, Blues Minor and

More information

Doctor of Philosophy

Doctor of Philosophy University of Adelaide Elder Conservatorium of Music Faculty of Humanities and Social Sciences Declarative Computer Music Programming: using Prolog to generate rule-based musical counterpoints by Robert

More information

0410 MUSIC. Mark schemes should be read in conjunction with the question paper and the Principal Examiner Report for Teachers.

0410 MUSIC. Mark schemes should be read in conjunction with the question paper and the Principal Examiner Report for Teachers. CAMBRIDGE INTERNATIONAL EXAMINATIONS International General Certificate of Secondary Education MARK SCHEME for the May/June 2014 series 0410 MUSIC 0410/13 Paper 1 (Listening), maximum raw mark 70 This mark

More information

Computing, Artificial Intelligence, and Music. A History and Exploration of Current Research. Josh Everist CS 427 5/12/05

Computing, Artificial Intelligence, and Music. A History and Exploration of Current Research. Josh Everist CS 427 5/12/05 Computing, Artificial Intelligence, and Music A History and Exploration of Current Research Josh Everist CS 427 5/12/05 Introduction. As an art, music is older than mathematics. Humans learned to manipulate

More information

Tonality Tonality is how the piece sounds. The most common types of tonality are major & minor these are tonal and have a the sense of a fixed key.

Tonality Tonality is how the piece sounds. The most common types of tonality are major & minor these are tonal and have a the sense of a fixed key. Name: Class: Ostinato An ostinato is a repeated pattern of notes or phrased used within classical music. It can be a repeated melodic phrase or rhythmic pattern. Look below at the musical example below

More information

Real-time Granular Sampling Using the IRCAM Signal Processing Workstation. Cort Lippe IRCAM, 31 rue St-Merri, Paris, 75004, France

Real-time Granular Sampling Using the IRCAM Signal Processing Workstation. Cort Lippe IRCAM, 31 rue St-Merri, Paris, 75004, France Cort Lippe 1 Real-time Granular Sampling Using the IRCAM Signal Processing Workstation Cort Lippe IRCAM, 31 rue St-Merri, Paris, 75004, France Running Title: Real-time Granular Sampling [This copy of this

More information

Cambridge International Examinations Cambridge International General Certifi cate of Secondary Education

Cambridge International Examinations Cambridge International General Certifi cate of Secondary Education Cambridge International Examinations Cambridge International General Certifi cate of Secondary Education MUSIC 040/0 Paper Listening For examination from 05 MARK SCHEME Maximum Mark: 70 Specimen The syllabus

More information

Robert Alexandru Dobre, Cristian Negrescu

Robert Alexandru Dobre, Cristian Negrescu ECAI 2016 - International Conference 8th Edition Electronics, Computers and Artificial Intelligence 30 June -02 July, 2016, Ploiesti, ROMÂNIA Automatic Music Transcription Software Based on Constant Q

More information

MUSIC PERFORMANCE: GROUP

MUSIC PERFORMANCE: GROUP Victorian Certificate of Education 2003 SUPERVISOR TO ATTACH PROCESSING LABEL HERE STUDENT NUMBER Letter Figures Words MUSIC PERFORMANCE: GROUP Aural and written examination Friday 21 November 2003 Reading

More information

REPORT ON THE NOVEMBER 2009 EXAMINATIONS

REPORT ON THE NOVEMBER 2009 EXAMINATIONS THEORY OF MUSIC REPORT ON THE NOVEMBER 2009 EXAMINATIONS General Accuracy and neatness are crucial at all levels. In the earlier grades there were examples of notes covering more than one pitch, whilst

More information

Music (MUS) Courses. Music (MUS) 1

Music (MUS) Courses. Music (MUS) 1 Music (MUS) 1 Music (MUS) Courses MUS 121 Introduction to Music Listening (3 Hours) This course is designed to enhance student music listening. Students will learn to identify changes in the elements of

More information

UNIVERSITY COLLEGE DUBLIN NATIONAL UNIVERSITY OF IRELAND, DUBLIN MUSIC

UNIVERSITY COLLEGE DUBLIN NATIONAL UNIVERSITY OF IRELAND, DUBLIN MUSIC UNIVERSITY COLLEGE DUBLIN NATIONAL UNIVERSITY OF IRELAND, DUBLIN MUSIC SESSION 2000/2001 University College Dublin NOTE: All students intending to apply for entry to the BMus Degree at University College

More information

2014 Music Style and Composition GA 3: Aural and written examination

2014 Music Style and Composition GA 3: Aural and written examination 2014 Music Style and Composition GA 3: Aural and written examination GENERAL COMMENTS The 2014 Music Style and Composition examination consisted of two sections, worth a total of 100 marks. Both sections

More information

Book review: Conducting for a new era, by Edwin Roxburgh

Book review: Conducting for a new era, by Edwin Roxburgh Book review: Conducting for a new era, by Edwin Roxburgh MICHAEL DOWNES The Scottish Journal of Performance Volume 3, Issue 1; June 2016 ISSN: 2054-1953 (Print) / ISSN: 2054-1961 (Online) Publication details:

More information

Extending Interactive Aural Analysis: Acousmatic Music

Extending Interactive Aural Analysis: Acousmatic Music Extending Interactive Aural Analysis: Acousmatic Music Michael Clarke School of Music Humanities and Media, University of Huddersfield, Queensgate, Huddersfield England, HD1 3DH j.m.clarke@hud.ac.uk 1.

More information

Level performance examination descriptions

Level performance examination descriptions Unofficial translation from the original Finnish document Level performance examination descriptions LEVEL PERFORMANCE EXAMINATION DESCRIPTIONS Accordion, kantele, guitar, piano and organ... 6 Accordion...

More information

Audio Feature Extraction for Corpus Analysis

Audio Feature Extraction for Corpus Analysis Audio Feature Extraction for Corpus Analysis Anja Volk Sound and Music Technology 5 Dec 2017 1 Corpus analysis What is corpus analysis study a large corpus of music for gaining insights on general trends

More information

Power Standards and Benchmarks Orchestra 4-12

Power Standards and Benchmarks Orchestra 4-12 Power Benchmark 1: Singing, alone and with others, a varied repertoire of music. Begins ear training Continues ear training Continues ear training Rhythm syllables Outline triads Interval Interval names:

More information

UNIT 1: QUALITIES OF SOUND. DURATION (RHYTHM)

UNIT 1: QUALITIES OF SOUND. DURATION (RHYTHM) UNIT 1: QUALITIES OF SOUND. DURATION (RHYTHM) 1. SOUND, NOISE AND SILENCE Essentially, music is sound. SOUND is produced when an object vibrates and it is what can be perceived by a living organism through

More information

Music Curriculum Glossary

Music Curriculum Glossary Acappella AB form ABA form Accent Accompaniment Analyze Arrangement Articulation Band Bass clef Beat Body percussion Bordun (drone) Brass family Canon Chant Chart Chord Chord progression Coda Color parts

More information

Vigil (1991) for violin and piano analysis and commentary by Carson P. Cooman

Vigil (1991) for violin and piano analysis and commentary by Carson P. Cooman Vigil (1991) for violin and piano analysis and commentary by Carson P. Cooman American composer Gwyneth Walker s Vigil (1991) for violin and piano is an extended single 10 minute movement for violin and

More information

3 against 2. Acciaccatura. Added 6th. Augmentation. Basso continuo

3 against 2. Acciaccatura. Added 6th. Augmentation. Basso continuo 3 against 2 Acciaccatura One line of music may be playing quavers in groups of two whilst at the same time another line of music will be playing triplets. Other note values can be similarly used. An ornament

More information

Implementation of an 8-Channel Real-Time Spontaneous-Input Time Expander/Compressor

Implementation of an 8-Channel Real-Time Spontaneous-Input Time Expander/Compressor Implementation of an 8-Channel Real-Time Spontaneous-Input Time Expander/Compressor Introduction: The ability to time stretch and compress acoustical sounds without effecting their pitch has been an attractive

More information

ILLINOIS LICENSURE TESTING SYSTEM

ILLINOIS LICENSURE TESTING SYSTEM ILLINOIS LICENSURE TESTING SYSTEM FIELD 212: MUSIC January 2017 Effective beginning September 3, 2018 ILLINOIS LICENSURE TESTING SYSTEM FIELD 212: MUSIC January 2017 Subarea Range of Objectives I. Responding:

More information

MARK SCHEME for the May/June 2011 question paper for the guidance of teachers 0410 MUSIC

MARK SCHEME for the May/June 2011 question paper for the guidance of teachers 0410 MUSIC UNIVERSITY OF CAMBRIDGE INTERNATIONAL EXAMINATIONS International General Certificate of Secondary Education www.xtremepapers.com MARK SCHEME for the May/June 2011 question paper for the guidance of teachers

More information

Reading Music: Common Notation. By: Catherine Schmidt-Jones

Reading Music: Common Notation. By: Catherine Schmidt-Jones Reading Music: Common Notation By: Catherine Schmidt-Jones Reading Music: Common Notation By: Catherine Schmidt-Jones Online: C O N N E X I O N S Rice University,

More information

ILLINOIS LICENSURE TESTING SYSTEM

ILLINOIS LICENSURE TESTING SYSTEM ILLINOIS LICENSURE TESTING SYSTEM FIELD 143: MUSIC November 2003 Illinois Licensure Testing System FIELD 143: MUSIC November 2003 Subarea Range of Objectives I. Listening Skills 01 05 II. Music Theory

More information

MUSIC TECHNOLOGY MASTER OF MUSIC PROGRAM (33 CREDITS)

MUSIC TECHNOLOGY MASTER OF MUSIC PROGRAM (33 CREDITS) MUSIC TECHNOLOGY MASTER OF MUSIC PROGRAM (33 CREDITS) The Master of Music in Music Technology builds upon the strong foundation of an undergraduate degree in music. Students can expect a rigorous graduate-level

More information

Algorithmic Music Composition

Algorithmic Music Composition Algorithmic Music Composition MUS-15 Jan Dreier July 6, 2015 1 Introduction The goal of algorithmic music composition is to automate the process of creating music. One wants to create pleasant music without

More information

MUSIC GROUP PERFORMANCE

MUSIC GROUP PERFORMANCE Victorian Certificate of Education 2010 SUPERVISOR TO ATTACH PROCESSING LABEL HERE STUDENT NUMBER Letter Figures Words MUSIC GROUP PERFORMANCE Aural and written examination Monday 1 November 2010 Reading

More information

Haydn: Symphony No. 101 second movement, The Clock Listening Exam Section B: Study Pieces

Haydn: Symphony No. 101 second movement, The Clock Listening Exam Section B: Study Pieces Haydn: Symphony No. 101 second movement, The Clock Listening Exam Section B: Study Pieces AQA Specimen paper: 2 Rhinegold Listening tests book: 4 Renaissance Practice Paper 1: 6 Renaissance Practice Paper

More information

A Composition for Clarinet and Real-Time Signal Processing: Using Max on the IRCAM Signal Processing Workstation

A Composition for Clarinet and Real-Time Signal Processing: Using Max on the IRCAM Signal Processing Workstation A Composition for Clarinet and Real-Time Signal Processing: Using Max on the IRCAM Signal Processing Workstation Cort Lippe IRCAM, 31 rue St-Merri, Paris, 75004, France email: lippe@ircam.fr Introduction.

More information

Singing Techniques and Performance

Singing Techniques and Performance Unit 42: Singing Techniques and Performance Unit code: QCF Level 3: Credit value: 10 Guided learning hours: 60 Aim and purpose A/502/5112 BTEC National This unit encourages the development and maintenance

More information

Loudoun County Public Schools Elementary (1-5) General Music Curriculum Guide Alignment with Virginia Standards of Learning

Loudoun County Public Schools Elementary (1-5) General Music Curriculum Guide Alignment with Virginia Standards of Learning Loudoun County Public Schools Elementary (1-5) General Music Curriculum Guide Alignment with Virginia Standards of Learning Grade One Rhythm perform, and create rhythms and rhythmic patterns in a variety

More information

Elements of Music David Scoggin OLLI Understanding Jazz Fall 2016

Elements of Music David Scoggin OLLI Understanding Jazz Fall 2016 Elements of Music David Scoggin OLLI Understanding Jazz Fall 2016 The two most fundamental dimensions of music are rhythm (time) and pitch. In fact, every staff of written music is essentially an X-Y coordinate

More information

Standard 1 PERFORMING MUSIC: Singing alone and with others

Standard 1 PERFORMING MUSIC: Singing alone and with others KINDERGARTEN Standard 1 PERFORMING MUSIC: Singing alone and with others Students sing melodic patterns and songs with an appropriate tone quality, matching pitch and maintaining a steady tempo. K.1.1 K.1.2

More information

King Edward VI College, Stourbridge Starting Points in Composition and Analysis

King Edward VI College, Stourbridge Starting Points in Composition and Analysis King Edward VI College, Stourbridge Starting Points in Composition and Analysis Name Dr Tom Pankhurst, Version 5, June 2018 [BLANK PAGE] Primary Chords Key terms Triads: Root: all the Roman numerals: Tonic:

More information

An integrated granular approach to algorithmic composition for instruments and electronics

An integrated granular approach to algorithmic composition for instruments and electronics An integrated granular approach to algorithmic composition for instruments and electronics James Harley jharley239@aol.com 1. Introduction The domain of instrumental electroacoustic music is a treacherous

More information

MELODIC NOTATION UNIT TWO

MELODIC NOTATION UNIT TWO MELODIC NOTATION UNIT TWO This is the equivalence between Latin and English notation: Music is written in a graph of five lines and four spaces called a staff: 2 Notes that extend above or below the staff

More information

Music Representations

Music Representations Advanced Course Computer Science Music Processing Summer Term 00 Music Representations Meinard Müller Saarland University and MPI Informatik meinard@mpi-inf.mpg.de Music Representations Music Representations

More information

Simple Harmonic Motion: What is a Sound Spectrum?

Simple Harmonic Motion: What is a Sound Spectrum? Simple Harmonic Motion: What is a Sound Spectrum? A sound spectrum displays the different frequencies present in a sound. Most sounds are made up of a complicated mixture of vibrations. (There is an introduction

More information

Music. Music Instrumental. Program Description. Fine & Applied Arts/Behavioral Sciences Division

Music. Music Instrumental. Program Description. Fine & Applied Arts/Behavioral Sciences Division Fine & Applied Arts/Behavioral Sciences Division (For Meteorology - See Science, General ) Program Description Students may select from three music programs Instrumental, Theory-Composition, or Vocal.

More information

Florida Performing Fine Arts Assessment Item Specifications for Benchmarks in Course: Chorus 5 Honors

Florida Performing Fine Arts Assessment Item Specifications for Benchmarks in Course: Chorus 5 Honors Task A/B/C/D Item Type Florida Performing Fine Arts Assessment Course Title: Chorus 5 Honors Course Number: 1303340 Abbreviated Title: CHORUS 5 HON Course Length: Year Course Level: 2 Credit: 1.0 Graduation

More information

Grade 4 General Music

Grade 4 General Music Grade 4 General Music Description Music integrates cognitive learning with the affective and psychomotor development of every child. This program is designed to include an active musicmaking approach to

More information

Chamber Orchestra Course Syllabus: Orchestra Advanced Joli Brooks, Jacksonville High School, Revised August 2016

Chamber Orchestra Course Syllabus: Orchestra Advanced Joli Brooks, Jacksonville High School, Revised August 2016 Course Overview Open to students who play the violin, viola, cello, or contrabass. Instruction builds on the knowledge and skills developed in Chamber Orchestra- Proficient. Students must register for

More information

Visual Arts, Music, Dance, and Theater Personal Curriculum

Visual Arts, Music, Dance, and Theater Personal Curriculum Standards, Benchmarks, and Grade Level Content Expectations Visual Arts, Music, Dance, and Theater Personal Curriculum KINDERGARTEN PERFORM ARTS EDUCATION - MUSIC Standard 1: ART.M.I.K.1 ART.M.I.K.2 ART.M.I.K.3

More information

Texas State Solo & Ensemble Contest. May 25 & May 27, Theory Test Cover Sheet

Texas State Solo & Ensemble Contest. May 25 & May 27, Theory Test Cover Sheet Texas State Solo & Ensemble Contest May 25 & May 27, 2013 Theory Test Cover Sheet Please PRINT and complete the following information: Student Name: Grade (2012-2013) Mailing Address: City: Zip Code: School:

More information

THEORY AND COMPOSITION (MTC)

THEORY AND COMPOSITION (MTC) Theory and Composition (MTC) 1 THEORY AND COMPOSITION (MTC) MTC 101. Composition I. 2 Credit Course covers elementary principles of composition; class performance of composition projects is also included.

More information

Assessment may include recording to be evaluated by students, teachers, and/or administrators in addition to live performance evaluation.

Assessment may include recording to be evaluated by students, teachers, and/or administrators in addition to live performance evaluation. Title of Unit: Choral Concert Performance Preparation Repertoire: Simple Gifts (Shaker Song). Adapted by Aaron Copland, Transcribed for Chorus by Irving Fine. Boosey & Hawkes, 1952. Level: NYSSMA Level

More information

Student Performance Q&A: 2001 AP Music Theory Free-Response Questions

Student Performance Q&A: 2001 AP Music Theory Free-Response Questions Student Performance Q&A: 2001 AP Music Theory Free-Response Questions The following comments are provided by the Chief Faculty Consultant, Joel Phillips, regarding the 2001 free-response questions for

More information

Curriculum Development In the Fairfield Public Schools FAIRFIELD PUBLIC SCHOOLS FAIRFIELD, CONNECTICUT MUSIC THEORY I

Curriculum Development In the Fairfield Public Schools FAIRFIELD PUBLIC SCHOOLS FAIRFIELD, CONNECTICUT MUSIC THEORY I Curriculum Development In the Fairfield Public Schools FAIRFIELD PUBLIC SCHOOLS FAIRFIELD, CONNECTICUT MUSIC THEORY I Board of Education Approved 04/24/2007 MUSIC THEORY I Statement of Purpose Music is

More information