Quarterly Progress and Status Report. Musicians' and nonmusicians' sensitivity to differences in music performance


Dept. for Speech, Music and Hearing
Quarterly Progress and Status Report
Musicians' and nonmusicians' sensitivity to differences in music performance
Sundberg, J., Friberg, A., and Frydén, L.
Journal: STL-QPSR, volume 29, number 4, year 1988, pages 077-081
http://www.speech.kth.se/qpsr

B. MUSICIANS' AND NONMUSICIANS' SENSITIVITY TO DIFFERENCES IN MUSIC PERFORMANCE
Johan Sundberg, Anders Friberg & Lars Frydén*

Abstract
A set of ordered, context-dependent rules for the automatic transformation of a music score into the corresponding musical performance has been developed, using an analysis-by-synthesis method [Sundberg, J. (1987): "Computer synthesis of music performance," pp. 52-69 in (J. Sloboda, ed.) Generative Processes in Music, Clarendon, Oxford]. The rules are implemented in the LeLisp language on a Macintosh microcomputer that controls a synthesizer via a MIDI interface. The rules manipulate sound level, fundamental frequency, vibrato extent, and duration of the tones. The present experiment was carried out in order to find out whether the sensitivity to these effects differed between musicians and nonmusicians. Pairs of performances of the same examples were presented in different series, one for each rule. Between the pairs in a series, the performance differences were varied within wide limits and, in the first pair in each series, the difference was large, so as to catch the subjects' attention. Subjects were asked to decide whether the two performances were identical. The results showed that musicians had a clearly greater sensitivity. The pedagogical implications of this finding will be discussed.

Introduction
When performing music, musicians do not accurately replicate the nominal description offered by the music score. Presumably for the purpose of musical expression, they make a number of deviations in terms of amplitude, frequency, and duration. We have analyzed these expressive variations in music using an analysis-by-synthesis approach (see, e.g., Sundberg, Askenfelt, & Frydén, 1983; Sundberg, 1988). The input is the music score and the output is a performance of this score generated on a synthesizer controlled over MIDI by a Macintosh II microcomputer. The control program contains a set of ordered, context-dependent rules reflecting comments and recommendations that co-author Lars Frydén has made when listening to computer-generated performances. The rules affect duration, amplitude, fine tuning, and vibrato depth of the tones. They have been tested by careful listening, by formal listening panels (Thompson, Friberg, Frydén, & Sundberg, 1986; Friberg, Frydén, Bodin, & Sundberg, 1987) and, in some cases, also by comparison between rule-generated and actual performances. An interesting aspect of the performance rules is the magnitude of the effects induced. We will call this the rule quantity. The purpose of the present investigation was to find out what rule quantity is needed for various rules in order to evoke a perceptible effect in trained and untrained listeners.
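To make the notion of rule quantity concrete, here is a minimal sketch in Python (the original rules were written in LeLisp and are not reproduced here). The Note fields, the duration threshold, and the decibel figures are illustrative assumptions, not values taken from the rule system; the point is only that each rule's nominal effect on the performance parameters is scaled by a quantity k, with k = 0 giving the dead-pan score.

from dataclasses import dataclass

@dataclass
class Note:
    pitch: int             # MIDI note number
    duration_ms: float     # nominal duration from the score
    level_db: float        # sound level offset relative to nominal
    vibrato_extent: float  # vibrato depth, arbitrary units

def shorter_softer(notes, k=1.0, threshold_ms=200.0, max_cut_db=3.0):
    # "The shorter, the softer": attenuate short notes; k is the rule quantity.
    # Threshold and maximum attenuation are hypothetical illustration values.
    for n in notes:
        if n.duration_ms < threshold_ms:
            n.level_db -= k * max_cut_db * (1.0 - n.duration_ms / threshold_ms)
    return notes

def apply_rules(notes, rules, k=1.0):
    # Apply an ordered list of rules; the ordering matters, as in the KTH system.
    for rule in rules:
        notes = rule(notes, k)
    return notes

melody = [Note(72, 150, 0.0, 0.0), Note(74, 400, 0.0, 0.0)]
performed = apply_rules(melody, [shorter_softer], k=1.0)  # k=0 would leave the score untouched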

Rules
Out of a total of about 15 rules, seven were selected for this test:

1. Marking of melodic charge adds duration, sound level, and vibrato depth to tones depending on how remarkable they are in the harmonic context. Melodic charge is a quantitative estimate of this remarkableness. It is derived from the circle of fifths, and it shows a relationship with listeners' expectancies, according to experiments carried out by Krumhansl and collaborators (Krumhansl, 1987).
2. The shorter, the softer deemphasizes short notes by reducing their sound level.
3. The shorter, the shorter increases the contrast between durational categories by shortening short notes.
4. The higher, the higher stretches the tuning.
5. Articulation of leaps lengthens the target note and shortens the start note in singular leaps.
6. Harmony-dependent crescendos and decrescendos are achieved by means of the harmonic charge, a quantitative estimate of the remarkableness of a chord given its harmonic context. It is derived from the chord notes' melodic charges and shows a relationship with listeners' expectancies, according to experiments carried out by Krumhansl and collaborators. The sound level is increased when remarkable chords are approaching and vice versa. The sound level increments thus distributed are complemented by increments in duration and vibrato extent.
7. Marking of structure lengthens the last note of phrases and inserts a micropause after the last note in subphrases.

Experiment
The basic idea was to ask subjects to listen for differences in two more or less differing performances of the same music example. These examples were chosen so as to clearly demonstrate the effect of the rule. The performances were presented in pairs in which the first version was generated by applying one of the performance rules in a quantity that varied between the pairs, while the second version was always a dead-pan standard. The performance differences between the pairs were varied in steps from huge to zero. Each rule was tested in a series of nine such pairs, presented in succession, and two of these were duplicates. In each series, the first pair presented a highly exaggerated rule quantity so as to direct the subjects' attention to what to listen for. Also, prior to the presentation of a new series, the experimenter instructed the subjects what they should be listening for. The other quantities occurred in quasi-random order in the series. The entire test took about 45 min.

The examples were generated on a Macintosh Plus microcomputer in which the rules had been implemented in the LeLisp language. The computer played a Yamaha FB01 synthesizer using a flute-like sound. The subjects were ten professional top-level music students from the Edsberg Music Conservatory in Stockholm and twelve nonmusician participants of a voice seminar. All subjects in each group listened simultaneously to the stimuli over loudspeakers at a comfortable listening level in a lecture room. They gave their answers on anonymous answer sheets.
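As a rough illustration of this series design, the following Python sketch builds one such series; the specific quantities, the random seed, and the function name are hypothetical, chosen only to mirror the description above (an exaggerated first pair, a zero-difference pair, one duplicated quantity near 1, and quasi-random order for the remaining pairs, giving nine pairs in all).

import random

def build_series(rule_name, quantities=(0.0, 0.25, 0.5, 1.0, 1.0, 2.0, 4.0, 8.0),
                 attention_quantity=16.0, seed=0):
    # Each returned (rule, quantity) pair stands for "rule applied at this quantity
    # versus the dead-pan standard"; quantity 0.0 means the two performances are identical.
    rng = random.Random(seed)
    rest = list(quantities)
    rng.shuffle(rest)                              # quasi-random order within the series
    return [(rule_name, attention_quantity)] + [(rule_name, q) for q in rest]

series = build_series("shorter_softer")
for rule, q in series:
    print(f"pair: {rule} at quantity {q} vs. dead-pan standard")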

Results
Table I presents the responses given for the zero-difference pairs. The result shows clearly that many subjects thought they could hear a difference in the performance even though the performances were actually identical. In the case of the musicians, this result would reflect a somewhat exaggerated eagerness to detect even the finest differences, thus exhibiting an excellent musical ear. In the case of the nonmusicians, the result would rather indicate a tendency toward guessing, as we will see below.

Table I. Percentage of "same" answers received from musicians and nonmusicians in the no-rule case, i.e., when there was no difference between the two performances.

Rule                                  Musicians   Nonmusicians
Marking of melodic charge                 20           92
Marking of harmonic charge                70           92
Increased duration contrasts              60           50
Marking of phrase and subphrase           80           92
Stretched tuning
Leap articulation
Short notes softer                        80           83

Fig. 1 shows the responses for the various rules. Each rule is represented as one panel. The percentage of "same" responses is plotted on the ordinate, and the rule quantity on the abscissa. The quantities have been normalized such that 1 represents the quantity to which the rule is normally set in our performance rule system. Some duplicate stimuli were included in the test at quantities near 1. With one exception, the musicians responded similarly to both these stimuli. Probably, the exception was due to some effect of the order of stimulus presentation.

By and large, the musicians' scores show the expected general pattern. With few exceptions, the percentage of "same" votes decreases gradually as the quantity increases. A good percentage of the musicians reported that they heard differences even for the smallest rule quantity. This does not necessarily imply that the smallest quantity was great enough to be audible, as these subjects reported hearing differences even between identical performances. All curves reach zero percentage of "same" responses when the quantity was large, not only for the largest quantity, which was always presented first in the series, but in most cases also for the nearest smaller quantities. These observations suggest that the responses represent reliable information, at least regarding the cases of great performance differences. The quantity of 1 is the value that we have found appropriate in our performance rule system, as mentioned. In most cases, this quantity is near the smallest one that still produces a perceptible effect for most subjects.

The results obtained from the nonmusicians are quite different. For the rules shown in the upper series of panels, the nonmusicians' responses are higher but roughly parallel to those of the musicians. This suggests that the naive listeners could notice the effects of these rules only when presented at a greater quantity.

For the three rules in the lower series of panels, the overall descent of the nonmusicians' curves is less obvious, the responses remaining in the vicinity of 50% at nearly all quantities. Thus, many subjects failed to notice even the largest effects, so that, apparently, they were merely guessing throughout the series. This suggests that musicians hear aspects of music performance that remain unnoticed by many nonmusicians.
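The curves in Fig. 1 amount to, for each rule and each normalized quantity, the percentage of "same" answers across subjects. A small Python sketch of that tabulation is given below; the (rule, quantity, answer) record format for the response sheets is an assumption made only for illustration.

from collections import defaultdict

def same_percentages(responses):
    # responses: iterable of (rule, normalized_quantity, answered_same: bool)
    counts = defaultdict(lambda: [0, 0])          # (rule, quantity) -> [same count, total count]
    for rule, quantity, same in responses:
        counts[(rule, quantity)][0] += int(same)
        counts[(rule, quantity)][1] += 1
    return {key: 100.0 * s / t for key, (s, t) in counts.items()}

demo = [("melodic_charge", 0.0, True), ("melodic_charge", 0.0, False),
        ("melodic_charge", 1.0, False), ("melodic_charge", 1.0, False)]
print(same_percentages(demo))   # {('melodic_charge', 0.0): 50.0, ('melodic_charge', 1.0): 0.0}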

Discussion and conclusions
These results indicate that musicians are more skilled in noticing the performance differences induced by our rules. This suggests that music listening has a training effect in this regard. If the musicians had not shown a greater skill in this regard, one inevitably would have questioned the musical relevance of the effects. The result therefore supports the idea that the rules generate effects that are relevant to music listening. Moreover, it suggests the possibility that this computer program can be successfully used for training musical listening. Seemingly, the findings suggest that musicians should exaggerate musical expression when playing for nonmusicians. A more convincing interpretation is that the nonmusician is likely to detect more and more meaningful details in a good performance each time he listens to a recording of it.

Acknowledgments
The kind cooperation of the subjects is gratefully acknowledged. This is a revised version of the authors' paper at the 116th meeting of the Acoustical Society of America, November 1988.

References
Friberg, A., Frydén, L., Bodin, L-G., & Sundberg, J. (1987): "Performance rules for computer controlled performance of contemporary keyboard music," STL-QPSR 4/1987, pp. 79-85; a revised version will appear in Contemporary Music Review, 1989.
Krumhansl, C. (1987): "Tonal and harmonic hierarchies," pp. 13-32 in (J. Sundberg, ed.) Harmony and Tonality, Publ. No. 54, Royal Swedish Academy of Music, Stockholm.
Sundberg, J. (1988): "Computer synthesis of music performance," pp. 52-69 in (J. Sloboda, ed.) Generative Processes in Music, Clarendon Press, Oxford.
Sundberg, J., Askenfelt, A., & Frydén, L. (1983): "Musical performance: A synthesis-by-rule approach," Computer Music Journal 7, pp. 37-43.
Thompson, W.F., Friberg, A., Frydén, L., & Sundberg, J. (1986): "Evaluating rules for the synthetic performance of melodies," STL-QPSR 2-3/1986, pp. 27-44; a revised version will appear in Psychology of Music, 1989.

[Figure 1: seven panels, one per rule (Marking of melodic charge; Marking of harmonic charge; Increased duration contrasts; Marking of phrase and subphrase boundaries; Stretched tuning; Leap articulation; Short notes softer). Abscissa: normalized perturbation quantity; ordinate: percentage of "same" answers.]

Fig. 1. Percentage of "same" answers received from musicians (filled circles) and nonmusicians (open circles) in the listening test as a function of the physical difference between the two performances compared. The quantity has been normalized with respect to the quantity normally used in the performance rule system. For more details, see text.