
Methodologies for Expressiveness Modeling of and for Music Performance
by Giovanni De Poli
Center of Computational Sonology, Department of Information Engineering, University of Padova, Padova, Italy

About Giovanni De Poli
Director of the Center of Computational Sonology (CSC) at the University of Padova.
Research interests:
- algorithms for sound synthesis and analysis
- models for expressiveness in music
- multimedia systems and human-computer interaction
- preservation and restoration of audio documents

Introduction
Three elements in music performance:
- Composer: instills and conveys messages
- Performer: communicates expressive intentions
- Listener: receives the perceptual experience

What Is a Model?
- It evidences and abstracts relations, leaving out irrelevant details
- It predicts behavior under given constraints
- It allows observations to be compared

Development of Computational Models

GROOVE system (Mathews & Moore, 1970)
- First music application of the computer
- Real-time control
- Editing of the performer's actions
- Ran at the Bell Telephone Labs, c. 1970

KTH model
- Developed at the Royal Institute of Technology (KTH) in Stockholm
- Rule-based performance model
- Large number of varying parameters

Two kinds of models:
- Complete model: explains all of the observed performance; complex; yields poor insight
- Partial model: explains the performance at the note level; small, robust rules; suitable for categorical decisions (e.g., play faster or slower)

Information Processing Model
- A mathematical model described by variables and parameters
- The variables are divided into input and output variables
- Simulation: given the input variables, predict the output variables

Information Processing Model (continued)
Models describe the relationships between the different kinds of variables.

The layers of information:
- Physical information: timing and the performer's movements; can be measured
- Symbolic information: scores and notes represented in common music notation
- Expressive information: affective and emotional content

Expressive contents:
- The composer's messages
- The expressive intentions of the performer
- The listener's perceptual experience

Expression communication:
- Finding a correct interpretation of the composer's message
- Adding a personal interpretation to the performance
- Avoiding a mechanical performance, i.e., one without prosodic inflection

3.1 Expression Communication
- Mozart, Sonata K545: emphasized with a decrescendo at the end of bars 2, 4, 10, 12, and 16 (performer: Ingrid Haebler)
- Personal interpretation and emotional performance (Kansei)
- Expressive intentions and artistic intentions
- Bach, Goldberg Variations, Aria: Glenn Gould 1955 (0:50), Glenn Gould 1981 (1:30), Tatiana Nikolayeva (1:25)

3.1 Expressive Performance Parameters
Information for describing a performance and observing its variations, at the physical information level:
- Keyboard: timing of the musical events, tempo, dynamics, articulation, etc.
- Voice: vibrato, intonation
- Timbre
- The basic parameters of the MIDI protocol

Problem: some effects can be rendered in different ways. A note can be emphasized by increasing its loudness, by lengthening its duration, by shifting it in time, or by a particular articulation or timbre modification.
Solution: multi-level models
- First level: what should be emphasized
- Second level: how to emphasize it
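A minimal Python sketch of such a two-level model (all function names, parameters, and values here are hypothetical, not from the slides): level 1 picks which notes to emphasize, and level 2 applies one concrete rendering of the emphasis.

```python
# Minimal sketch of a two-level emphasis model (all names hypothetical).
# Level 1 decides WHAT to emphasize; level 2 decides HOW.

def level1_select(notes, phrase_ends):
    """Mark notes that should be emphasized (here: phrase-final notes)."""
    return [i for i, _ in enumerate(notes) if i in phrase_ends]

def level2_render(notes, selected, strategy="loudness"):
    """Apply one concrete rendering of the emphasis to the selected notes."""
    out = [dict(n) for n in notes]
    for i in selected:
        if strategy == "loudness":
            out[i]["velocity"] = min(127, out[i]["velocity"] + 12)
        elif strategy == "duration":
            out[i]["duration"] *= 1.15   # lengthen by 15%
    return out

notes = [{"pitch": 60, "velocity": 64, "duration": 0.5},
         {"pitch": 62, "velocity": 64, "duration": 0.5},
         {"pitch": 64, "velocity": 64, "duration": 0.5}]
emphasized = level2_render(notes, level1_select(notes, {2}), "loudness")
print(emphasized[2]["velocity"])  # 76
```

Swapping the `strategy` argument changes how the same first-level decision is realized, which is exactly the separation the multi-level model proposes.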

3.1 Expressive Performance Parameters (continued)
Needing more research:
- Intermediate parameters that let the multi-level model be used intuitively
- Automatic extraction of the musical structure from a score
- The dimensional approach, e.g., the valence-arousal space (Juslin, 2001)

3.2 Information Representation
How a model represents the information.

Time:
- Performance time: the actual time measured in the performance
- Score time: e.g., a phrase or a measure
- Models aim to describe the relation between the two

Tempo:
- Reciprocal duration as a function of score time
- Units: beats per minute (bpm)
- Mean tempo: the average tempo over the whole piece
- Main tempo: the prevailing tempo
- Local tempo: a short-time measure, the inverse of the inter-onset interval (IOI) (Repp, 1994; Gabrielsson, 1999)

Time representation:
- Discrete: articulation of the timing of individual notes; micropauses between melodic units; related to the symbolic level
- Continuous: the vibrato of a note; a crescendo curve; related to the physical level
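The local-tempo definition above can be sketched directly: with onset times in seconds and one onset per quarter-note beat (an assumption; the onset times below are invented), each beat's local tempo is 60 divided by its IOI.

```python
# Local tempo as the inverse of the inter-onset interval (IOI).
# Assumes one onset per quarter-note beat; times are in seconds (invented data).
onsets = [0.00, 0.48, 0.98, 1.52, 2.10]                   # measured performance times
iois = [b - a for a, b in zip(onsets, onsets[1:])]        # inter-onset intervals
local_tempo = [60.0 / ioi for ioi in iois]                # bpm, one value per beat
mean_tempo = 60.0 * len(iois) / (onsets[-1] - onsets[0])  # average over the span
print([round(t, 1) for t in local_tempo])                 # [125.0, 120.0, 111.1, 103.4]
print(round(mean_tempo, 1))                               # 114.3
```

Note how the local values reveal a ritardando that the single mean-tempo figure hides, which is why the slides distinguish the two measures.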

3.2 Information Representation (continued)
Granularity:
- Numerical values, e.g., a time interval or IOI
- Categorical descriptions, e.g., staccato vs. legato, shortening vs. lengthening

Conclusion: models should work at different time scales, such as the note scale (attack time or vibrato), the local scale (articulation of a melodic gesture), and the global scale (a phrase crescendo).

3.3 Expressive Deviations
- Communication between musician and listener
- Models of deviations explain where, how, and why a performer modifies what is indicated by the notation in the score
- Not directly accessible, but easily measurable

Reference:
- Score: theoretical and practical, but it affects the listener's judgment
- Intrinsic definitions of expression: defined in terms of the performance itself (Gabrielsson, 1974; Desain & Honing, 1991)
- Non-structural approaches: relating expression to motion, emotion, etc.

3.3 Expressive Deviations - Example
Expressive variations of the duration of beats:
- Using the bar duration as a reference from the score
- Using this intrinsic definition to describe expression from the performance data itself
- Then taking global measurements as a reference for local ones
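The intrinsic bar-duration reference can be sketched as follows (the measured beat durations are invented): each beat's deviation is its share of the bar's own measured duration minus its nominal share from the score.

```python
# Expressive deviation of beat durations, using the bar's own measured
# duration as an intrinsic reference (3/4 bar: nominal share = 1/3 each).
beat_durations = [0.62, 0.55, 0.68]          # measured, in seconds (invented)
bar_duration = sum(beat_durations)           # intrinsic reference
nominal_share = 1.0 / len(beat_durations)    # score-prescribed share per beat
deviations = [d / bar_duration - nominal_share for d in beat_durations]
# Positive = longer than the nominal share, negative = shorter.
print([round(dev, 3) for dev in deviations])  # [0.002, -0.036, 0.034]
```

Because the reference is the performance's own bar duration, the deviations describe expression without appealing to any external tempo, which is the point of an intrinsic definition.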

3.3 Expressive Deviations - Example (continued)
A performer plays a piece according to different expressive intentions:
- Using a neutral performance (one without any specific expressive intention) as a reference
- Using a mean performance (the mathematical mean across different performances) as a reference

4.1 Model Structures
Additivity hypothesis:
- Deviations are measured by principal component analysis (PCA) (Repp, 1992)
- PCA is a mathematical procedure that transforms correlated variables into a smaller number of uncorrelated variables called principal components
- The first principal component accounts for as much of the variability as possible, and each succeeding component accounts for as much of the remaining variability as possible
- The original data are then a linear combination of a few significant, independent variations around their mean values
- Pros: easily interpretable
- Cons: over-simplifying; the interrelation of different aspects of the performance is hidden

Other model structures:
- Multiplicative: nonlinear combination y = f(x_1, x_2, ..., x_n) (Bresin, 1998)
- Functional composition: y = f[g(x)] (Honing, 1991)
- Hierarchical models: the information is processed and combined at the proper level (KTH system; Bresin & Friberg, 2000)
- Local models: act at the note level and try to explain the observed facts in a local context (Friberg, 1991; Widmer, 2002)
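The PCA step of the additivity hypothesis can be sketched with an SVD; the deviation matrix below is invented for illustration, not taken from Repp's data.

```python
import numpy as np

# PCA of timing deviations via SVD (a sketch; the data are invented).
# Rows = performances, columns = per-note timing deviations (seconds).
X = np.array([[ 0.02, -0.01,  0.05,  0.00],
              [ 0.03, -0.02,  0.06,  0.01],
              [-0.01,  0.01, -0.04,  0.00],
              [ 0.00,  0.00, -0.03, -0.01]])
Xc = X - X.mean(axis=0)               # center each variable
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)       # variance share of each component
scores = Xc @ Vt.T                    # performances in component space
print(np.round(explained, 3))
```

With near-collinear rows like these, the first component absorbs almost all the variance: the performances differ mainly in how strongly they apply one shared deviation pattern, which is exactly the "few independent variations around the mean" reading of the additivity hypothesis.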

4.1 Model Structures (continued)
- Phrasing models: take into account higher levels of the musical structure, or more abstract expression patterns
- Composed models: built from several component models, each one for a different source of expression

4.2 Comparing Performances
Measures of distance:
- The mean of the absolute differences
- The Euclidean distance, the square root of the sum of squared differences
- The maximum distance
Conclusion: comparison is hard to achieve; there is no clear strategy for how to weight the variables.

4.3 Models for Understanding
- Analysis by measurement
- Analysis by synthesis
- Machine learning
- Case-based reasoning

4.3.1 Analysis by Measurement
- Analysis of the deviations measured in recorded human performances
- Finding the regularities in the deviation patterns and describing them by means of a mathematical model (Gabrielsson, 1999)
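The three distance measures can be computed directly; the two deviation vectors below are invented for illustration.

```python
# Three distance measures between two performances, each represented as an
# equal-length vector of per-note deviations (invented numbers).
a = [0.02, -0.01, 0.05, 0.00]
b = [0.01,  0.02, 0.01, -0.02]
diffs = [abs(x - y) for x, y in zip(a, b)]
mean_abs = sum(diffs) / len(diffs)                       # mean absolute difference
euclid = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5  # Euclidean distance
maximum = max(diffs)                                     # maximum distance
print(round(mean_abs, 4), round(euclid, 4), round(maximum, 4))
```

The three measures rank performance pairs differently (the maximum distance is dominated by a single note, the mean is not), which illustrates why the slides conclude that no single comparison strategy is clearly right.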

4.3.1 Analysis by Measurement (continued)
Steps:
1. Selection of performances
2. Measurement of the physical properties of every note
3. Reliability control and classification of the performances
4. Selection and analysis of the most relevant variables
5. Statistical analysis and development of mathematical interpretation models of the data

Approaches:
- Statistical models
- Mathematical models
- Multidimensional analysis, e.g., principal component analysis (PCA) (Repp, 1992)

Approximation of human performance:
- Neural network techniques (Bresin, 1998)
- Fuzzy logic (Bresin et al., 1995a,b)
- Multiple regression analysis (Ishikawa et al., 2000)
- Linear vector space theory (Zanon & De Poli, 2003a,b)
- Controlled experiments, manipulating one parameter in a performance (Desain et al., 2001)

4.3.2 Analysis by Synthesis
Steps:
1-5. The same as for analysis by measurement
6. Synthesis of performances with systematic variations
7. Judgment of the synthesized versions, paying particular attention to the selected experimental aspects
8. Study of the relation between the performance and the experimental variables
9. Repetition of the procedure (steps 3-9) until the results converge
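One of the approximation approaches listed above, multiple regression, can be sketched as follows; the predictors, targets, and numbers are invented, not from Ishikawa et al.

```python
import numpy as np

# Sketch of multiple regression for approximating measured deviations
# (in the spirit of the regression approach above; the data are invented).
# Predictors: note duration (s) and metrical strength; target: timing deviation (s).
X = np.array([[0.25, 1.0],
              [0.50, 0.0],
              [0.25, 0.0],
              [1.00, 1.0]])
y = np.array([-0.01, 0.02, -0.02, 0.04])
A = np.column_stack([X, np.ones(len(X))])    # add an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None) # least-squares fit
pred = A @ coef                              # model's approximation of y
print(np.round(coef, 3))
```

The fitted coefficients give a directly interpretable weight per score feature, which is what makes regression attractive for step 5 of the measurement procedure.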

4.3.2 Analysis by Synthesis (continued)
Key point: only one variable is modified while the others are held constant.
Example: the KTH rule system; rules developed by De Poli et al. (1990) and Dannenberg & Derenyi (1998).
- Every rule tries to predict some deviations of a human performance
- First, the rules are obtained from professional musicians
- The performances produced by applying the rules are evaluated by listeners
- The rules are then tuned and further developed
- Rules can be grouped into differential rules and grouping rules, e.g., the Duration Contrast rule

4.3.3 Machine Learning
Searching for and discovering complex dependencies in very large data sets, without any preliminary hypothesis (Widmer, 1995a,b, 1996, 2000, 2003, 2004).

4.3.4 Case-Based Reasoning (CBR)
- Uses the knowledge of previously solved problems, adapting their solutions to the problem at hand
- The system learns from experience
- Examples: the SaxEx system (Arcos, 1998, 2001); the Suzuki system (1999)
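A toy rule in the spirit of the KTH Duration Contrast rule can illustrate the rule-based approach; the threshold and scaling factor here are invented, not the published KTH values.

```python
# A toy rule in the spirit of the KTH Duration Contrast rule: short notes
# are made even shorter, sharpening the contrast with long notes.
# The threshold and scaling are invented, not the published KTH quantities.

def duration_contrast(durations, k=1.0, threshold=0.5):
    """Scale down notes shorter than `threshold` seconds; k sets rule strength."""
    out = []
    for d in durations:
        if d < threshold:
            out.append(d * (1.0 - 0.1 * k))  # shorten short notes by 10% * k
        else:
            out.append(d)                    # leave long notes unchanged
    return out

print(duration_contrast([0.25, 0.25, 1.0], k=1.0))
```

The strength parameter `k` mirrors how KTH-style rules expose a single quantity that can be varied systematically in analysis-by-synthesis listening tests while everything else is held constant.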

4.3.5 Expression Recognition Models
Goal: to extract and recognize expression from a performance.
Examples:
- Dannenberg (1997): classifying improvisational performance style among different alternatives
- Friberg et al. (2002): recognizing basic emotions in music performance
- Zanon & Widmer (2003, 2004): trying to identify famous pianists from their style of playing

4.4.1 Performance Synthesis Models
[figure: typical structure of a performance synthesis model]

4.4.2 Discussion of Synthesis Models
- A recording of a classical music performance is just a reproduction of an event, not an experience of the music as it was conceived at the time
- Real artistic value is necessary; no automatic performance can be acceptable except for entertainment purposes
- Performance models have applications in teaching, helping students learn performance strategies

4.4.3 Models for Multimedia Applications
- Multimodal: the user interacts freely with the machine through movements and non-verbal communication
- Most multimodal systems are bimodal
- The human senses are not well represented in multimodal interfaces

4.5 Models for Artistic Creation
Scheme of music performance with digital instruments:
- The electronic-instrument performer controls the sound synthesis with gestures and suitable processes
- A performance model lies between the symbolic level and the audio control level
- The performer receives audio feedback from the instrument, as with traditional instruments

Scheme of live electronic music performance:
- The live-electronics performer processes the sound produced by the instrument performer
- The live-electronics box, merging score processes and gestures, controls the sound processing devices via a performance model
- The performer receives audio feedback from both the instrument and the sound processing

Conclusion
- The knowledge gained in classical music performance studies has been formalized in performance models
- The practical knowledge of new music creators should be drawn upon to extract possible new performance models
- Music performance research is a joint development of art, science, and technology