
S EXTRACTED BY MULTIPLE PERFORMANCE DATA

T. Hoshishiba and S. Horiguchi
School of Information Science, Japan Advanced Institute of Science and Technology, Tatsunokuchi, Ishikawa, 923-12, JAPAN

ABSTRACT

In order to achieve human-like computer performances, the features that distinguish real performances from musical scores must be investigated. It is also important to construct a performance system that takes the performer's individuality into account. This paper addresses a method to derive normative performance data from multiple performances, and a strategy to extract performance rules by using each performer's data and the normative data. To confirm the normative performance rules extracted from multiple performances, automatic piano performances synthesized by the proposed method are compared with the performers' data. The possibility of extracting individual traits of performers is also discussed by comparing the synthesized performance data with each individual performance data.

1. INTRODUCTION

With the current development of electronics, the sound quality of electronic musical instruments has improved dramatically. To realize musical performance by computer, many automatic performance systems based on performance rules have been proposed. Yet research on automatic performance focusing on the performer's individuality, an important factor in expressiveness, has not been fully investigated. To create a more human-like automatic performance system, we have to take the performer's individuality into account.

This paper addresses a method for deriving averaged, or normative, performance data from the score and multiple performance data of experienced performers [1]. A method is proposed for extracting normative performance rules from the normative performance data and the score data [2]. Furthermore, a strategy is discussed for obtaining individual traits of performance rules by comparing the normative performance data with the individual performance data. Computer piano performances synthesized by the proposed strategy are compared with individual performance data to confirm the effectiveness of incorporating the performer's individuality.

2. NORMATIVE PERFORMANCE DATA

2.1 The Average Onset Time and Duration

The normative performance data is obtained from the multiple performance data of experienced performers by matching each note and averaging the attack times, dynamics and durations. Since attack time and duration change greatly depending on the tempo of the performance, the following equation is used to normalize the tempo relative to the total duration of the piece:

    \hat{t}_j = \frac{1}{n} \sum_{i=1}^{n} \frac{t_{ij} - S_i}{E_i - S_i} \cdot \frac{1}{n} \sum_{i=1}^{n} (E_i - S_i)    (1)

Here, n is the number of performance data, \hat{t}_j is the j-th averaged onset time, and S_i, E_i and t_{ij} are the start time, end time and j-th onset time of the i-th performance data, respectively.
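The normalization in Eq. (1) is straightforward to implement. The following is a minimal NumPy sketch, not code from the paper: it assumes the onset times have already been matched note-by-note across the n performances, and the function and array names are illustrative.

```python
import numpy as np

def normative_onsets(onsets, starts, ends):
    """Averaged onset times over n performances, as in Eq. (1).

    onsets: (n, J) array; onsets[i, j] is the j-th onset time t_ij of
            the i-th performance (note-matched across performances).
    starts, ends: (n,) arrays of piece start times S_i and end times E_i.
    Returns a (J,) array of normative onset times.
    """
    onsets = np.asarray(onsets, dtype=float)
    starts = np.asarray(starts, dtype=float)
    ends = np.asarray(ends, dtype=float)
    durations = ends - starts                            # E_i - S_i
    # Normalize each performance to [0, 1] of its own total duration,
    # average across performances, then rescale by the mean duration.
    normalized = (onsets - starts[:, None]) / durations[:, None]
    return normalized.mean(axis=0) * durations.mean()
```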

2.2 The Average Dynamics and Treatment of the Pedaling

Since dynamics depend on the instrument and the performer, the arithmetic mean of the standardized MIDI velocities of the piece is used. Let v_{ij} be the j-th velocity of the i-th performance data, \mu_i and \sigma_i the arithmetic mean and standard deviation of the i-th performance data, v^*_{ij} the standardized value of v_{ij}, and \hat{v}_j the velocity of the normative performance data. The j-th arithmetic mean of v^*_{ij} is given by the following expressions:

    w_j = \frac{1}{n} \sum_{i=1}^{n} v^*_{ij}, \qquad v^*_{ij} = \frac{v_{ij} - \mu_i}{\sigma_i}, \qquad \hat{v}_j = \frac{w_j - m}{s},    (2)

where m and s are the arithmetic mean and the standard deviation of w_j. To replay the normative performance, the following expression maps \hat{v}_j to a MIDI velocity, which ranges between 0 and 127:

    \hat{v}_j^{\mathrm{MIDI}} = \hat{v}_j \, \bar{\sigma} + \bar{\mu}, \qquad \bar{\mu} = \frac{1}{n} \sum_{i=1}^{n} \mu_i, \qquad \bar{\sigma} = \frac{1}{n} \sum_{i=1}^{n} \sigma_i.    (3)

Using the matching result between the onset times of the performed notes and the score notes, the pedaling positions of the performance data are marked on the score by linear interpolation. The normative pedal operation is accepted by a majority decision over the performance data mapped onto the score notes. Also, if a pedal-on time is shorter than the minimum pedal-on time among all performance data, that pedal operation is not accepted.
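Equations (2) and (3) amount to z-scoring each performer's velocities, averaging across performers, re-standardizing, and mapping back to the MIDI range with the means of the per-performer statistics. A minimal sketch under the same assumptions as above (a note-matched velocity matrix; illustrative names):

```python
import numpy as np

def normative_velocities(vel):
    """Normative MIDI velocities from n performances, as in Eqs. (2)-(3).

    vel: (n, J) array; vel[i, j] is the MIDI velocity v_ij of matched
         note j in performance i.
    """
    vel = np.asarray(vel, dtype=float)
    mu = vel.mean(axis=1, keepdims=True)       # mu_i per performance
    sigma = vel.std(axis=1, keepdims=True)     # sigma_i per performance
    z = (vel - mu) / sigma                     # standardized v*_ij
    w = z.mean(axis=0)                         # w_j, Eq. (2)
    v_hat = (w - w.mean()) / w.std()           # re-standardized, Eq. (2)
    midi = v_hat * sigma.mean() + mu.mean()    # back to MIDI scale, Eq. (3)
    return np.clip(np.round(midi), 0, 127).astype(int)
```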

2.3 Experiment on Normative Performance Data

Normative performance data was generated using five performance data of Chopin's Op. 10, No. 12 (the "Revolutionary" Etude) published by Yamaha Music Media. The score edited by Ewald Zimmermann (Henle urtext edition) was used. Table 1 shows the counts of performed notes and pedaling operations for the five performers, the normative performance and the score, together with the performance time and the pedal-on time.

Table 1: Performance features of five performers' data, the normative data and the score of Chopin Op. 10, No. 12.

    Player     Notes  Pedaling  Performance time (sec)  Pedal-on time (sec)
    A          291    151       154.0                   105.8 (68.7%)
    B          280    196       143.5                   110.9 (77.3%)
    C          273    150       133.2                    94.8 (71.1%)
    D          287    181       159.5                   115.3 (72.3%)
    E          286    209       148.7                    99.3 (66.8%)
    Normative  279    175       147.4                   107.5 (72.9%)
    Score      280    -         -                       -

The table shows that while each performer's pedaling count lies between 150 and 209, the normative performance data has 175 by majority decision. The performance time and the pedal-on time differ between performers; the normative performance data expresses the averages of the performers. Figure 1 shows the pedaling operations of the performance data by player A and the normative performance data for a part of the score. It is clear that the position of the pedaling operations differs widely between performers. The normative pedal operation based on majority decision does not express the mean of the pedaling operations, so it may be unnatural.

[Figure 1: Pedal waveforms of the performance data of Chopin Op. 10, No. 12; score excerpt (Allegro con fuoco, measures 1-5) with pedal ON/OFF traces for Player A and the normative data.]

Figure 2 shows the velocity and local tempo waveforms over the entire piece for the performance data by player A and for the normative performance data. As can be seen from these waveforms, the velocity and tempo of each performance are quite similar overall, as is the shape of the normative performance data derived from them.

[Figure 2: Velocity and tempo waveforms of the performance data of Chopin Op. 10, No. 12 (Player A and normative).]

3. RULE EXTRACTION FROM PERFORMANCE DATA

3.1 Rules Concerned with Velocity

Keyboard and pedaling operations in piano performances have their own performance rules. In order to extract the performance rules, we investigate velocity, i.e. the dynamics of keyboard operations, and tempo, i.e. the timing of those operations. Many performance rules have been proposed for velocity. We analyze three parameters: "velocity change ratio with beat", "velocity change ratio with accent" and "velocity change ratio with note". Since dynamics depend on the instrument and the performer, the velocity is standardized over the piece before extracting the velocity rules.

Figure 3 shows the velocity change ratio of the normative data with beat (a semiquaver), with the standardized velocity on the vertical axis and the beat on the horizontal axis. The solid line in Figure 3 shows the average value at each semiquaver. It is seen that all players emphasize the first beat, the fourth beat, the second beat and the third beat, in that order. The velocity decreases quickly after a beat, then slowly increases toward the next beat. This tendency is used as the velocity rule with beat.

In order to obtain the velocity change ratio with accent, the average velocity of the notes marked with an accent is calculated. To remove the side effect of the velocity rule with beat, the velocity change ratio with accent is obtained by subtracting the average of each semiquaver from the normalized velocity data. This value is used as the velocity rule with accent.

Figure 4 shows the velocity change ratio with note for the normative performance data. The standardized velocity on the vertical axis is obtained by removing the side effects of the other two velocity rules. In order to analyze the velocity change ratio with note, we obtain the relation between velocity and note pitch by principal component analysis. The solid line in Figure 4 shows this correlation; the correlation ratio between the velocity and the note pitch is used as an individual parameter of the velocity rules.

[Figure 3: Velocity change ratio with beat.]
[Figure 4: Velocity change ratio with note.]
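The velocity rule with beat is essentially the mean standardized velocity at each metrical position. A possible sketch, assuming each note's semiquaver position within its bar is available from the score matching (function, arguments and the per-bar resolution are illustrative):

```python
import numpy as np

def velocity_beat_profile(std_velocity, semiquaver_pos, positions_per_bar=16):
    """Mean standardized velocity per semiquaver position (Figure 3's line).

    std_velocity: (J,) standardized velocities of the normative data.
    semiquaver_pos: (J,) int array; each note's semiquaver index in its bar.
    """
    std_velocity = np.asarray(std_velocity, dtype=float)
    semiquaver_pos = np.asarray(semiquaver_pos)
    profile = np.zeros(positions_per_bar)
    for p in range(positions_per_bar):
        mask = semiquaver_pos == p
        if mask.any():
            profile[p] = std_velocity[mask].mean()
    return profile
```

Subtracting profile[semiquaver_pos] from the standardized velocities would then isolate the accent rule, as described above.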

3.2 Rules Concerned with Tempo

For the performance rules of tempo, we pay attention to two parameters: "tempo change ratio with beat" and "tempo change ratio with slur". The standardized values of the tempo are analyzed to obtain the individual parameters for the rules concerned with tempo.

Figure 5 shows the tempo change ratio with beat for the normative performance data. The vertical axis corresponds to the standardized tempo and the beat is on the horizontal axis. It is seen that all players play faster at the second beat and slower at the fourth beat. These values show the individuality of the performer and are used as parameters for the tempo rule with beat.

It has been reported that the tempo increases until the center of a phrase and decreases afterwards. Since phrases and slurs are closely related, we investigate the relationship between the tempo change ratio and the slur. Figure 6 shows the relation between the tempo change ratio and the difference in note pitch between the start and the end of a slur. The line in the figure is obtained by the method of least squares. This value is used as a parameter for the tempo rule with slur.

[Figure 5: Tempo change ratio with beat.]
[Figure 6: Tempo change ratio with slur.]
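The slur rule thus reduces to fitting a least-squares line relating the tempo change ratio to the pitch difference across each slur, as in Figure 6. A minimal sketch (the data layout is an assumption):

```python
import numpy as np

def tempo_slur_rule(pitch_diff, tempo_ratio):
    """Least-squares line for the tempo rule with slur (Figure 6).

    pitch_diff: (K,) pitch differences between the first and last note
                of each slur.
    tempo_ratio: (K,) tempo change ratios observed over each slur.
    Returns (slope, intercept); the slope is the rule's parameter.
    """
    slope, intercept = np.polyfit(pitch_diff, tempo_ratio, deg=1)
    return slope, intercept
```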

4. AUTOMATIC PERFORMANCE SYSTEM BY PERFORMANCE RULES

4.1 Automatic Performances

The normative performance rules obtained by analyzing the normative performance data and the score data are applied to an automatic performance system. Automatic performances by computer are generated as follows. First, the velocity data of all notes are set to 0. The velocity rules, "velocity change ratio with beat", "velocity change ratio with accent" and "velocity change ratio with note", are applied to the velocity data. Then the velocity data are standardized and converted to MIDI velocities by using the average and the standard deviation of the velocity. Second, the tempo data of all notes are set to 1. The tempo rules, "tempo change ratio with beat" and "tempo change ratio with slur", are applied to the tempo data. After that, the tempo data are standardized and converted to MIDI tempo by using the average and the standard deviation of the tempo.
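The velocity side of this procedure can be sketched as follows. The concrete encoding of the three rules (a per-position beat profile, a scalar accent offset, and a linear pitch slope) is our illustrative assumption; the paper does not specify the data structures.

```python
import numpy as np

def synthesize_velocities(beat_rule, accent_rule, note_rule,
                          semiquaver_pos, accented, pitches,
                          mu_hat, sigma_hat):
    """Apply the three velocity rules and convert to MIDI (Sec. 4.1).

    beat_rule: (positions_per_bar,) profile from the beat rule.
    accent_rule: scalar offset added to accented notes.
    note_rule: scalar slope of velocity against pitch.
    semiquaver_pos, accented, pitches: (J,) per-note score attributes.
    mu_hat, sigma_hat: normative mean and deviation for the MIDI mapping.
    """
    pos = np.asarray(semiquaver_pos)
    pitches = np.asarray(pitches, dtype=float)
    v = np.zeros(len(pitches))                        # "set to 0" initially
    v += np.asarray(beat_rule)[pos]                   # rule with beat
    v += np.where(np.asarray(accented), accent_rule, 0.0)  # rule with accent
    v += note_rule * pitches                          # rule with note
    v = (v - v.mean()) / v.std()                      # standardize
    midi = v * sigma_hat + mu_hat                     # to MIDI velocity
    return np.clip(np.round(midi), 0, 127).astype(int)
```

The tempo side would follow the same pattern, starting from a constant 1 and applying the two tempo rules before re-standardizing.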

4.2 Comparison between Normative and Automatic Performances

Figure 7 shows the velocity and tempo waveforms of the automatic performance data and the normative performance data; the solid line shows the automatic performance data and the dotted line the normative performance data. For the dynamics, the waveform of the computer performance is locally almost similar to the original performance data, but differences between the two are still observed. The small musical fluctuations are thus reproduced, but the large fluctuations are not. The large fluctuations should be reproduced by applying rules that act over a wide range, e.g. crescendo and diminuendo. For the tempo, the waveform of the computer performance is almost similar to the original performance data. Although only two performance rules concerned with tempo are used, good automatic performances are obtained by the proposed strategy.

[Figure 7: Velocity and tempo waveforms of the computer performance and the normative performance.]

5. CONCLUSIONS AND FUTURE PROBLEMS

This paper presented a method to obtain normative performance data from performance data created by multiple performers, and proposed a strategy for extracting performance rules from the performance data. The strategy includes only three rules concerned with velocity and two rules concerned with tempo. The normative performance rules obtained by analyzing the normative performance data were applied to automatic performance by computer. It was confirmed that the computer performances are almost similar to the original performance data. A method to extract individual features is one of the further subjects to be studied.

Acknowledgment: The authors thank Professor I. Fujinaga, Johns Hopkins University, for valuable comments. A part of this research was supported by a Grant-in-Aid for Scientific Research (No. 987877), Ministry of Education of Japan.

6. REFERENCES

[1] T. Hoshishiba, S. Horiguchi and I. Fujinaga, "Computer Performance of Piano Music with Normative Performance Data," JAIST Research Report, IS-RR-95-14I (1995).

[2] T. Hoshishiba, S. Horiguchi and I. Fujinaga, "Study of Expression and Individuality in Music Performance Using Normative Data Derived from MIDI Recordings of Piano Music," International Conference on Music Perception and Cognition, pp. 465-470 (1996).