Importance of Note-Level Control in Automatic Music Performance

Roberto Bresin
Department of Speech, Music and Hearing
Royal Institute of Technology - KTH, Stockholm
email: Roberto.Bresin@speech.kth.se

Abstract

A summary is presented of recent studies on articulation in piano music performance, together with the applications they originated. Emphasis is given to legato and staccato articulation. Results from measurements of performances recorded with MIDIfied grand pianos are summarized. Some rules for the simulation of articulation are presented, and their application in renderings of emotionally expressive performances is discussed. These rules can produce effects that reflect tempo indications in the score as well as the expressive intentions of the player. Also discussed is how the articulation rules can be applied to the control of sound synthesis algorithms, and why music performance research is important to a wider range of applications connected to perception and human behavior.

1 Introduction

Since the beginning of the computer era, researchers have tried to replicate human behavior with machines. Humanoids have been designed that can walk (Pandy and Anderson, 2000) and dance (Lim, Ishii and Takanishi, 1999), as well as play the piano, talk, listen and answer (Kato et al., 1987). Still, these machines lack the ability to understand and process the emotional states of real humans, and to develop and synthesize an emotional state and personality of their own. To overcome this limitation, research on music performance seems particularly promising, since music is a universal communication medium, at least within a given cultural context. Research on music performance therefore represents a promising starting point for understanding human behavior.

What scientific research on music performance mainly seeks is an understanding of the principles underlying the deviations from score indications that musicians apply in their performances. In recent years, research on piano music performance has devoted increased attention to the study of duration variation at the note level, such as in articulation (Repp, 1995; Repp, 1997; Woody, 1997; Repp, 1998). In legato and staccato articulation, these deviations are responsible for the acoustical overlap or gap between adjacent tones. It has been found that players vary their articulation strategies when rendering different expressive intentions (Bresin and Battel, 2000; Bresin and Widmer, 2000); staccato articulation, for example, will be more pronounced in an allegro tempo than in an adagio tempo. Recently it has been demonstrated that articulation is an important cue in the emotional coloring of performances, and it is therefore important to discriminate between different levels of legato and staccato articulation. Staccato articulation, in the performance of a score that did not include staccato marks, helps in communicating happiness to listeners, while a performance with exaggerated legato articulation can be perceived as sad (Gabrielsson and Juslin, 1996). The importance of articulation is also demonstrated by the care dedicated to the technique of articulating notes in the practice of musical instruments. Investigations in this direction have proven useful in a wide range of applications, including automatic music performance and models for the control of sound synthesis algorithms.
The problem of articulating sounds produced with synthesis algorithms is well known: these algorithms are usually very good at synthesizing one isolated tone, but insufficient for interconnecting sounds in an acoustically and musically realistic way (Mathews, 1975). In the following paragraphs, an overview of recent research results related to articulation in piano music performance is presented. Some applications and future directions are also discussed.

2 Legato and Staccato Articulation

Before proceeding to the presentation of recent research results on articulation, some terms need explanation. To measure the degree of legato articulation, Repp (1995) introduced the definition Key Overlap Time (KOT) for adjacent tones. It is defined as the time interval between the key depression for the following tone and the key release for the current tone (see Figure 1). Bresin later introduced the term Key Overlap Ratio (KOR), defined as the ratio between KOT and the inter-onset interval (IOI) of the n-th note. In analogy with KOT and KOR for legato articulation, Bresin introduced the terms Key Detached Time (KDT) and Key Detached Ratio (KDR) for staccato articulation (see Figure 1) (Bresin and Battel, 2000).
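These definitions translate directly into measurements on recorded note events. Below is a minimal sketch, assuming a hypothetical list of (onset, offset) key times in seconds (the function name and data layout are illustrative, not from the paper), that computes KOT/KOR for overlapping tone pairs and KDT/KDR for detached ones.

```python
# Minimal sketch of the KOT/KOR/KDT/KDR measures defined above.
# Assumes hypothetical note events as (onset, offset) key times in
# seconds; names and data layout are illustrative, not from the paper.

def articulation_measures(notes):
    """For each pair of adjacent notes, return ('legato', KOT, KOR) if the
    keys overlap, or ('staccato', KDT, KDR) if they are detached."""
    results = []
    for (on1, off1), (on2, off2) in zip(notes, notes[1:]):
        ioi = on2 - on1                 # inter-onset interval of the first note
        if off1 > on2:                  # key still down when the next key goes down
            kot = off1 - on2            # Key Overlap Time
            results.append(("legato", kot, kot / ioi))    # KOR = KOT / IOI
        else:
            kdt = on2 - off1            # Key Detached Time
            results.append(("staccato", kdt, kdt / ioi))  # KDR = KDT / IOI
    return results

# Example: an overlapping (legato) pair followed by a detached (staccato) pair.
notes = [(0.0, 0.55), (0.5, 0.9), (1.0, 1.2)]
for kind, t, ratio in articulation_measures(notes):
    print(f"{kind}: {1000 * t:.0f} ms, {ratio:.0%} of IOI")
```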

Figure 1. (a) Definition of inter-onset interval (IOI_n), duration (DR_n) and key overlap time (KOT_n) for TONE_n followed by an overlapping TONE_n+1. (b) Definition of inter-onset interval (IOI_n), duration (DR_n) and key detached time (KDT_n) for TONE_n followed by a non-overlapping TONE_n+1.

In the following paragraphs the main results from two studies on expressive articulation are presented. In both experiments the players performed on computer-controlled pianos.

In a first experiment, differences in the amount of legato and staccato articulation as performed by five pianists were analyzed (Bresin and Battel, 2000). The pianists, two female and three male, were Italian students in their diploma and pre-diploma year of piano classes at the Conservatory of Music Benedetto Marcello in Venice. They performed the first sixteen bars of the Andante movement of W. A. Mozart's Piano Sonata in G major, K 545, nine times, each time adopting a different expressive intention: natural, brilliant, dark, heavy, light, hard, soft, passionate, and flat, with natural representing the player's preferred rendering in the absence of a specific emotional coloring, i.e. musically natural. The pianists performed on a Yamaha Disklavier II grand piano, model C3. The articulation performed by the right hand was analyzed only for those notes that were marked staccato or legato in Mozart's original score.

Interesting results emerged from the analysis of the recorded data. Even if there were some large differences between the five pianists' performances, they generally used similar strategies in their different renderings. In particular, legato was played with a KOT that depended on the IOI of the first of two overlapping tones: longer notes were performed with relatively shorter KOT than shorter notes. Performances intended as flat had the lowest average KOR, while performances communicating heaviness had the largest average KOR (see Figure 2). The natural performance (white bar in Figure 2) was played by the five pianists with an average KOR of about 15%, which gave it the middle position in a rank order according to KOR.

Figure 2. Mean KOR, averaged over five pianists, for each adjective corresponding to a different expressive intention.

Staccato tones had an overall average duration of about 40% of the IOI. Staccato ranged from mezzo-staccato in the heavy version, with a tone duration of about 70% of the IOI, to staccatissimo in the brilliant and light versions (see Figure 3). The staccato in natural performances was produced with an average duration of about 66% of the IOI. The articulation of repeated notes was also analyzed: the first note in a pair of repeated notes was performed with an average duration of about 60% of the IOI, i.e. in the range of mezzo-staccato articulation.

Figure 3. Mean KDR, averaged over five pianists, for each adjective corresponding to a different expressive intention.
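The per-adjective comparisons behind Figures 2 and 3 amount to grouping note-level KOR (or KDR) measurements by expressive intention and averaging. A minimal sketch of that bookkeeping follows; the KOR values are invented placeholders, and only the adjective labels come from the experiment.

```python
# Sketch: averaging KOR over performances grouped by expressive intention,
# as in Figure 2. The KOR values are invented placeholders; only the
# adjective labels come from the experiment.
from collections import defaultdict
from statistics import mean

# (adjective, KOR in percent), one entry per analyzed legato note pair
observations = [
    ("heavy", 24.0), ("heavy", 21.5),
    ("natural", 15.2), ("natural", 14.8),
    ("flat", 6.1), ("flat", 7.3),
]

by_adjective = defaultdict(list)
for adjective, kor in observations:
    by_adjective[adjective].append(kor)

# Rank the intentions by mean KOR, mirroring the ordering reported above
for adjective, values in sorted(by_adjective.items(),
                                key=lambda item: mean(item[1]), reverse=True):
    print(f"{adjective:8s} mean KOR = {mean(values):.1f}%")
```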

A second study focused on the analysis of staccato articulation (Bresin and Widmer, 2000). In this experiment a professional Viennese pianist played 13 Mozart piano sonatas on a Bösendorfer SE290 computer-monitored concert grand piano. The performance of notes that were marked staccato in Mozart's original score was analyzed. The large-scale data analysis of 482 notes revealed that the amount of staccato varied with melody contour, tempo indications, and context. Notes were played with larger staccato in allegro tempi than in adagio tempi. The amount of staccato for the middle note in a three-note context varied if the preceding and/or following notes were also marked staccato. For instance, notes were played more staccato if they were followed by non-staccato tones (the NSN and SSN cases in Figure 4).

Figure 4. Mean KDR for staccato tones in different contexts: isolated staccato notes (NSN), and staccato notes followed (NSS), surrounded (SSS) and preceded (SSN) by other staccato notes. The letters N and S label non-staccato (N) and staccato (S) tones respectively.

Even if the two studies were based on different materials, similar results were observed. It emerged that in staccato articulation the relative amount of staccato for one tone is independent of its IOI, as observed also by Repp (1998). Another important result was that articulation also depends on melodic direction. In legato articulation, notes initiating ascending intervals were played with shorter duration and KOT than notes initiating descending intervals. In staccato articulation, notes initiating ascending intervals were played more staccato, i.e. with shorter duration, than notes initiating descending intervals. This dependence of articulation on melodic shape is in accordance with previous findings: the <Faster uphill> rule, proposed by Lars Frydén and implemented in the Director Musices (DM) program, shortens the duration of notes in ascending melodies (Friberg, 1991; Friberg, Colombo, Frydén and Sundberg, 2000). Furthermore, the piano action is faster for keys corresponding to higher notes (Askenfelt and Jansson, 1990; Goebl and Bresin, 2001).

3 Articulation Rules for Automatic Piano Performance

The results from the two studies presented above were implemented in the DM music performance grammar as a new set of articulation rules for piano music performance. They operate on notes marked legato or staccato in the score, and on repeated notes. These rules are named, respectively, the <Score legato articulation> rule, the <Score staccato articulation> rule, and the <Articulation of repetition> rule. A fourth rule, <Duration contrast articulation>, controls the articulation of notes not covered by the three previous cases, i.e. it can be used to add a legato or staccato articulation to any other note in the score. All these rules affect articulation in different ways according to expressive indications (such as brilliant or dark), tempo indications (such as adagio, andante, allegro, presto, and menuetto), and legato and staccato marks in the score. For a detailed description of these articulation rules see Bresin (2001). As an example, the equation for the <Score legato articulation> rule is (see also Figure 5):

    KOT(k) = g(k) · IOI² + f(k) · IOI    (1)

where k is an emphasis parameter used to control legato articulation, the two functions g(k) and f(k) are plotted in Figure 5, IOI is the inter-onset interval of the current note, and KOT(k) is the resulting key overlap time for the current note. Equation (1) can produce legato articulation effects for performances ranging from flat to passionate, passing through natural.

Figure 5. The functions g(k) and f(k) implementing the <Score legato articulation> rule of equation (1).
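Equation (1) is a small parametric model: a gain on IOI and a gain on IOI², both driven by the emphasis parameter k. The sketch below implements that shape with placeholder linear coefficients for g(k) and f(k), since the fitted values plotted in Figure 5 are not recoverable from this transcription.

```python
# Sketch of the <Score legato articulation> rule of equation (1):
#     KOT(k) = g(k) * IOI**2 + f(k) * IOI
# The linear coefficients below are placeholders: the published rule uses
# g(k) and f(k) fitted piecewise from performance data (Figure 5), and
# those numbers are not reproduced here.

def score_legato_kot(ioi, k,
                     g=lambda k: 0.004 * k + 0.007,  # placeholder g(k)
                     f=lambda k: 0.06 * k + 0.11):   # placeholder f(k)
    """Key overlap time (s) for a note with inter-onset interval ioi (s),
    under emphasis parameter k (larger k gives a more pronounced legato)."""
    return g(k) * ioi ** 2 + f(k) * ioi

# A larger emphasis parameter yields a longer overlap for the same IOI;
# k = 2.7 is the setting used for the sad macro-rule in Table 1.
for k in (0.0, 1.0, 2.7):
    print(f"k = {k}: KOT = {score_legato_kot(0.5, k):.3f} s")
```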

The changes of tone duration produced by the articulation rules are equal to or larger than the just noticeable quantity necessary for perceiving legato or staccato tones, according to findings by Woody (1997). Furthermore, analysis-by-synthesis experiments confirmed the importance of these rules for a qualitative improvement of computer-generated performances. In the next sections two applications of the articulation rules are presented: (1) production of emotionally expressive performances, and (2) new models for the control of sound synthesis algorithms.

4 Articulation in Emotional Expressive Music Performance

In a previous section, it was shown how articulation plays an important role in expressive performance. Gabrielsson and Juslin (1996) observed that articulation is relevant to the emotional coloring of a performance: when asked to portray sadness, solemnity or tenderness, players use legato articulation, while staccato or non-legato articulation is applied in happy, scared and angry renderings. This was observed already by Carl Philipp Emanuel Bach (1753/1949), who wrote that "...activity is expressed in general with staccato in Allegro and tenderness with portato and legato in Adagio".

The articulation rules presented in the previous section were included in the design of DM macro-rules for the emotional coloring of performances. A macro-rule is a collection of performance rules that are applied to a score in order to obtain a complete expressive performance. A macro-rule for sadness is presented in Table 1. The effect of this macro-rule is illustrated in the version of Carl Michael Bellman's song Letter 48 presented in Figure 6. The corresponding sound excerpt is available on the RENCON 2002 Proceedings CD-ROM and on the Internet (see the Links section for the URL).

The articulation rules were successfully applied in macro-rules for the production of emotionally expressive performances of music scores. These performances were classified in formal listening tests as happy or sad, thus confirming hypotheses by Juslin and collaborators (Juslin, Friberg and Bresin, in press). The use of articulation rules for the rendering of different expressive performances has a correspondence with hyper- and hypo-articulation in speech. Formants, intensity and duration of vowels, and duration of consonants can vary with the speaker's emotional state or the intended emotional communication (Lindblom, 1990). Still, as in expressive music performance, the structure of the phrases and the meaning of the speech remain unchanged. More information about emotionally colored performance can be found in a paper by Bresin and Friberg (2000).

Figure 6. Inter-onset-interval (IOI, in %), offset-to-onset duration (DRO, in %) and sound level (dB) deviations in the sad version of Carl Michael Bellman's song Letter 48. A positive DRO corresponds to KOT; a negative one to KDT.
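Operationally, a macro-rule can be read as an ordered list of rule applications, each with its own emphasis parameter k. The sketch below illustrates that idea with hypothetical stand-in rule functions; the rule names and k values follow Table 1 below, but the function bodies are not the Director Musices implementations.

```python
# Sketch of a DM-style macro-rule as an ordered list of (rule, k) pairs.
# Rule names and k values follow Table 1; the rule bodies are trivial
# stand-ins, not the actual Director Musices implementations.

def score_legato_articulation(perf, k):
    perf["kor"] = perf.get("kor", 0.0) + 5.0 * k        # stand-in effect
    return perf

def duration_contrast(perf, k):
    perf["contrast"] = perf.get("contrast", 0.0) + k    # stand-in effect
    return perf

SAD_MACRO_RULE = [
    (score_legato_articulation, 2.7),   # legato articulation, k = 2.7
    (duration_contrast, -2.0),          # softened duration contrast
]

def apply_macro_rule(perf, macro_rule):
    """Apply each performance rule in order; every rule reads and updates
    the accumulated performance parameters."""
    for rule, k in macro_rule:
        perf = rule(perf, k)
    return perf

print(apply_macro_rule({}, SAD_MACRO_RULE))
```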

Table 1. DM macro-rule description for the sad performance of Carl Michael Bellman's song Letter 48. The first column lists the expressive cues identified by Gabrielsson and Juslin, the second column their qualitative observations, and the third column the DM rule settings implementing the observations in column two.

Expressive cue | Gabrielsson & Juslin (1996) | Director Musices rules
Tempo | Slow | Tone Duration is shortened by 15%
Sound level | Moderate or Low | Sound Level is increased by 8 dB
Articulation | Legato | Score Legato Articulation rule (k = 2.7)
Time deviations & sound level deviations | Moderate; soft duration contrast; relatively large deviations in timing | Duration Contrast rule (k = -2, amp = 0); Punctuation rule (k = 2.1); Phrase Arch rule applied to three phrase levels (level 1, k = 2.7; level 2, k = 1.5; level 3, k = 1.5); High Loud rule (k = 1)

5 Novel Models for Sound Control

Recently, the articulation rules described above have been applied to the design of control models for sound synthesis (Bresin, Friberg and Dahl, 2001). The aim was to provide a more natural and realistic control of synthesis algorithms, which typically fail to allow sufficient control and to produce a natural acoustic behavior in the transitions between adjacent tones (Dannenberg and Derenyi, 1998).

The starting point was results from previous research. Many investigations have shown that human locomotion is related to timing in music performance. For example, it has been demonstrated that the final ritardando in Baroque music and stopping runners follow the same tempo curve (Friberg and Sundberg, 1999). Friberg and co-workers (Friberg, Sundberg and Frydén, 2000) also studied the relationship between music and human motion in a direct way. Two subjects simulated tired, energetic and solemn gaits on a force platform. The vertical force patterns exerted by the foot were used as sound level envelopes for tones played at different tempi. Results from listening tests indicated that each tone, corresponding to a specific gait, could clearly be classified in terms of motion.

The articulation rules have been found to show analogies with human locomotion too. In a previous work, Bresin showed that note duration in staccato and legato articulation corresponds to gait duration in running and walking, respectively (Bresin, 2000). These analogies between locomotion and music performance resulted in the design of new control models for synthesizing walking sound patterns, and it seems likely that similar sound control models, based on locomotion patterns, can be developed further. In particular, a model for humanized walking and one for stopping runners have been implemented with promising results (Bresin, Friberg and Dahl, 2001). These models control the timing of the real sound of one step on gravel: the <Score legato articulation> rule was used for controlling the step sound of a person walking on gravel, and the <Final ritard> rule for controlling the step sound of a person stopping from running on gravel. The models were validated with a listening test: subjects could discriminate between walking and running sounds, which were also classified according to the corresponding types of motion produced by the control models. More information about the control models and the experiment can be found in the paper by Bresin, Friberg and Dahl (2001).
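As a rough illustration of the stopping-runner control idea, the sketch below generates step onset times whose rate decays along a normalized deceleration curve in the spirit of the final-ritardando model of Friberg and Sundberg (1999). Treating the step IOI as the inverse of that curve is my reading, and all parameter values are placeholders rather than the published fits.

```python
# Sketch of a stopping-runner step sequencer in the spirit of the
# <Final ritard> control model. The curve v(x) below follows the general
# shape of the runner-deceleration model of Friberg and Sundberg (1999);
# mapping step IOI to 1/v is an assumption, and the parameter values
# (base_ioi, v_end, q) are placeholders, not the published fits.

def step_onsets(n_steps=8, base_ioi=0.3, v_end=0.4, q=2.0):
    """Onset times (s) for n_steps steps whose rate slows smoothly from
    the nominal tempo down to v_end times that tempo."""
    onsets, t = [], 0.0
    for i in range(n_steps):
        x = i / (n_steps - 1)                       # normalized position, 0..1
        v = (1 + (v_end ** q - 1) * x) ** (1 / q)   # normalized step rate
        onsets.append(t)
        t += base_ioi / v                           # slower rate -> longer IOI
    return onsets

# Each onset would trigger playback of a recorded step-on-gravel sample.
print([round(t, 2) for t in step_onsets()])
```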
6 Conclusions

Articulation is indeed of great importance in piano performance. Measurements of performances on MIDIfied grand pianos have shown that pianists vary the quality and quantity of articulation when coloring renderings of a piece according to different expressive adjectives. Generally, happy performances are characterized by staccato articulation, while sad ones are played with legato. The amount of legato articulation for a note was found to depend on the IOI of that note; the amount of staccato articulation, on the other hand, was independent of the IOI.

The measurements of articulation in different expressive performances have led to the design of a set of performance rules. These rules play an important role in the automatic rendering of a score, since articulation is one of the parameters used for differentiating the expressive coloring of performances.

Finally, on the basis of strong analogies between body motion and music performance, articulation rules were developed into control models for sound synthesis algorithms. This and other applications can be seen as a further indication that studies of music performance can also be used for extra-musical applications.

7 Links

The art of music performance: http://www.speech.kth.se/music/performance
The Director Musices program: http://www.speech.kth.se/music/performance/download/
Articulation rules and sound examples: http://www.speech.kth.se/music/performance/articulation
Deadpan and sad versions of Carl Michael Bellman's song Letter 48 (the <Score legato articulation> and <Score staccato articulation> rules were applied in the sad version): http://www.speech.kth.se/music/performance/germ

8 Acknowledgments

I would like to thank the organizers of RENCON 2002 for inviting me and for making this paper possible. My gratitude goes to all the colleagues involved in the works mentioned in this paper.

References

Askenfelt, A. and E. Jansson (1990). From touch to string vibrations. I: Timing in the grand piano action. Journal of the Acoustical Society of America, 88(1): 52-63.

Bach, C. P. E. (1753/1949). Essay on the true art of playing keyboard instruments. New York: Norton.

Bresin, R. (2000). Virtual Virtuosity. Studies in automatic music performance. Doctoral Dissertation, Department of Speech, Music and Hearing, KTH, Stockholm. ISBN 91-7170-643-7.

Bresin, R. (2001). Articulation rules for automatic music performance. In Proceedings of the International Computer Music Conference - ICMC2001, Havana: International Computer Music Association.

Bresin, R. and G. U. Battel (2000). Articulation strategies in expressive piano performance. Analysis of legato, staccato, and repeated notes in performances of the Andante movement of Mozart's sonata in G major (K 545). Journal of New Music Research, 29(3): 211-224.

Bresin, R. and A. Friberg (2000). Emotional Coloring of Computer-Controlled Music Performances. Computer Music Journal, 24(4): 44-63.

Bresin, R., A. Friberg and S. Dahl (2001). Toward a new model for sound control. In Proceedings of the COST-G6 Conference on Digital Audio Effects - DAFx-01, Limerick, Ireland: CSIS, University of Limerick, 45-49.

Bresin, R. and G. Widmer (2000). Production of staccato articulation in Mozart sonatas played on a grand piano. Preliminary results. Speech Music and Hearing Quarterly Progress and Status Report, 2000(4): 1-6.

Dannenberg, R. and I. Derenyi (1998). Combining Instrument and Performance Models for High-Quality Music Synthesis. Journal of New Music Research, 27(3): 211-238.

Friberg, A. (1991). Generative Rules for Music Performance: A Formal Description of a Rule System. Computer Music Journal, 15(2): 56-71.

Friberg, A., V. Colombo, L. Frydén and J. Sundberg (2000). Generating Musical Performances with Director Musices. Computer Music Journal, 24(3): 23-29.

Friberg, A. and J. Sundberg (1999). Does music performance allude to locomotion? A model of final ritardandi derived from measurements of stopping runners. Journal of the Acoustical Society of America, 105(3): 1469-1484.

Friberg, A., J. Sundberg and L. Frydén (2000). Music from Motion: Sound Level Envelopes of Tones Expressing Human Locomotion. Journal of New Music Research, 29(3): 199-210.

Gabrielsson, A. and P. N. Juslin (1996). Emotional expression in music performance: between the performer's intention and the listener's experience. Psychology of Music, 24: 68-91.

Goebl, W. and R. Bresin (2001). Are computer-controlled pianos a reliable tool in music performance research? Recording and reproduction precision of a Yamaha Disklavier grand piano. In Proceedings of the Workshop on Current Research Directions in Computer Music, Barcelona: Audiovisual Institute, Pompeu Fabra University, 45-50.

Juslin, P. N., A. Friberg and R. Bresin (in press). Toward a computational model of expression in performance: The GERM model. Musicae Scientiae.

Kato, I., S. Ohteru, K. Shirai, S. Narita, S. Sugano, T. Matsushima, T. Kobayashi and E. Fujisawa (1987). The robot musician 'WABOT-2'. Robotics, 3(2): 143-155.

Lim, H., A. Ishii and A. Takanishi (1999). Basic emotional walking using a biped humanoid robot. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, SMC '99, 954-959.

Lindblom, B. (1990). Explaining phonetic variation: a sketch of the H&H theory. In W. J. Hardcastle and A. Marchal (eds.), Speech Production and Speech Modelling. Dordrecht: Kluwer, 403-439.

Mathews, M. V. (1975). How to make a slur. Journal of the Acoustical Society of America, 58(S1): S132.

Pandy, M. G. and F. C. Anderson (2000). Dynamic Simulation of Human Movement Using Large-Scale Models of the Body. Phonetica, 57(2-4): 219-228.

Repp, B. (1995). Acoustics, perception, and production of legato articulation on a digital piano. Journal of the Acoustical Society of America, 97(6): 3862-3874.

Repp, B. (1997). Acoustics, perception, and production of legato articulation on a computer-controlled grand piano. Journal of the Acoustical Society of America, 102(3): 1878-1890.

Repp, B. (1998). Perception and Production of Staccato Articulation on the Piano. Unpublished manuscript, Haskins Laboratories. http://www.haskins.yale.edu/haskins/staff/repp.html

Woody, R. H. (1997). Perceptibility of changes in piano tone articulation. Psychomusicology, 16: 102-109.