Proceedings of ICAD 04 - Tenth Meeting of the International Conference on Auditory Display, Sydney, Australia, July 6-9, 2004

EMOTIONFACE: PROTOTYPE FACIAL EXPRESSION DISPLAY OF EMOTION IN MUSIC

Emery Schubert
School of Music and Music Education
University of New South Wales
Sydney NSW 2052 AUSTRALIA
E.Schubert@unsw.edu.au

ABSTRACT

EmotionFace is a software interface for visually displaying the self-reported emotion expressed by music. Taken in reverse, it can be viewed as a facial expression whose auditory connection or exemplar is the time-synchronized, associated music. The present instantiation of the software uses a simple schematic face with eyes and mouth moving according to a parabolic model: smiling and frowning of the mouth represents valence (happiness and sadness), and the amount of opening of the eyes represents arousal. Continuous emotional responses to music collected in previous research have been used to test and calibrate EmotionFace. The interface provides an alternative to the presentation of data on a two-dimensional emotion space, the same space used for the collection of emotional data in response to music. These synthesized facial expressions make the emotion data expressed by music easier for the human observer to process, and may be a more natural interface between the human and the computer. Future research will include optimization of EmotionFace, using more sophisticated algorithms and facial expression databases, and the examination of the lag structure between facial expression and musical structure. Eventually, with more elaborate systems, automation and greater knowledge of emotion and associated musical structure, it may be possible to compose music meaningfully from synthesized and real facial expressions.

1. INTRODUCTION

The ability of music to express emotion is one of its most fascinating and attractive characteristics. Measuring the emotion which music can express has, consequently, occupied thinkers and researchers for a long time.
One of the problems requiring consideration is how to measure emotion. There have been three broad approaches: physiological measurement (such as heart rate and skin conductance), observational measures (documenting the listener's physical postures and gestures made while listening) and cognitive self-report. Physiological measures tend to tap into changes that are reflective of the arousal dimension of emotion [1]. Few studies have shown that they can reliably differentiate, for example, between happy and sad emotional responses. Observational methods are rarely found because they are fairly complex and expensive to implement. One of the most important examples of such observational methodology is in the coding of facial expressions [e.g. 2], though this approach is yet to be applied to the analysis of the music listener's face. In both of these methodologies the measurement is restricted to an emotion experienced by the listener. It seems unlikely that physiological and observational approaches could indicate the emotion the listener identifies as being in the music (for more information on the distinction between perceived and experienced emotion in music, see [3]).

The most common way of measuring emotional responses to music has been through cognitive self-report, where the listener verbally reports the emotion perceived in the music. The self-report approach has been subdivided into three types of response format: open-ended, checklist and rating scale. Typically, with each approach, participants are asked to listen to a piece of music and make a response at the end of the piece. Since the 1980s researchers have had easier access to computer technology which allows emotional observations about unfolding music to be tracked continuously. For this process, Schubert has argued that the best approach is to use rating scales [4]. He proposed a method of collecting self-reported emotions by combining two rating scales on a visual display.
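As a concrete illustration of combining two rating scales on one visual display, the sketch below maps a pointer position inside a square widget to a (valence, arousal) pair on the -100 to +100 scales used in this paper. This is an assumption-laden illustration, not the software described here: the widget size, function name and screen conventions are all invented.

```python
# Hypothetical sketch: mapping a pixel position inside a square
# emotion-space widget to (valence, arousal) on -100..+100 scales.
# Widget size of 300 px and the name screen_to_emotion are assumptions.

def screen_to_emotion(x_px, y_px, size_px=300):
    """Convert a pixel position in a size_px x size_px emotion space
    (origin at top-left, as on most screens) to (valence, arousal)."""
    half = size_px / 2.0
    valence = (x_px - half) / half * 100.0   # left = -100, right = +100
    arousal = (half - y_px) / half * 100.0   # top = +100, bottom = -100
    return valence, arousal

# The centre of the widget is the neutral position:
# screen_to_emotion(150, 150) -> (0.0, 0.0)
```

Sampling this mapping at regular intervals while the music plays yields the kind of continuous two-dimensional response series discussed below.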
The rating scales should be reasonably independent and explain a significant proportion of variation in emotional response. Several researchers have identified the dimensions which fulfil these criteria as being valence (happiness versus sadness) and arousal (activity versus sleepiness) (e.g. [10]). The dimensions have been combined at right angles on a computer screen, with a mouse tracking system which is synchronised with the unfolding music [5, 6]. One of the applications of tracking emotional response to music in this way is that pedagogues, researchers, musicians and listeners in general can examine the two-dimensional emotion space expressed by music according to the sampled population. In the past [6], the visual interface has been the same emotion space used for collecting data from individual participants. The present paper describes a method of displaying the emotion expressed by music using continuously synthesized facial expressions.

2. FACIAL EXPRESSION LITERATURE

Since Darwin's work on emotion [7] we have had a good understanding of how facial expressions communicate emotional information. Humans are highly sensitive to nuances in such facial expressions (e.g. [8, 9]) and there is strong evidence that the emotion communicated by facial expressions can be understood universally. This corpus of available emotional expressions in the human face has been documented and decoded largely through the work of Ekman and Friesen [2]. Their taxonomy allows the meaningful reduction of emotion into six prototypical, basic emotions. These basic emotions can be translated onto a continuum using a dimensional model of emotion [10]. The eyes,
eyebrows and mouth are the main parts of the face which signal emotional messages, what Fasel and Luettin [11] refer to as intransient facial features. Further, eye shape is more important than mouth shape in activating the high-arousal emotion of fear [12], and therefore has an important connection with the arousal component of emotional expression. In simple animations the valence of emotion is easy to detect through the shape of the lips (concave up for happy expressions, and concave down for sad expressions). It should therefore be possible to synthesize a simple, schematic face with easily recognizable emotional expressions using appropriately shaped curves to represent eye size and mouth shape. Transforming two-dimensional emotion data (valence and arousal) into mouth shape and eye size respectively was viewed to be a logical starting point for providing a synthesized, visual display of emotion which a human can understand. The next section describes an algorithm used to draw such a face dynamically as music unfolds (using already-gathered subjective arousal and valence data from a previous study, based on second-by-second median responses of 67 participants with a fairly high degree of musical training and experience [13]).

3. FACIAL EXPRESSION ALGORITHM

The aim of the prototype schematic EmotionFace interface was to produce a visually and algorithmically simple schematic face able to communicate a spectrum of facial expressions along the arousal and valence dimensions. While such a model is fairly simple and more sophisticated algorithms are available for manipulating facial expressions [14], the present realization extracts some of the basic principles which exist in the literature and applies them using only two parabolic functions. One parabola represents the arousal as expressed by eye opening.
First, the lower half of one eye is calculated according to the formula:

    lower_eye(x) = k_a (x - e/2)(x + e/2) / a    (1)

where a is the median of the perceived arousal value (gathered in [13]) with the addition of 100 (the addition of 100 ensures that the parabola is always concave up, because the raw arousal value can be as negative as -100). Arousal appears in the denominator because large values of a need to make the parabola narrower and, in effect, increase the eye opening size. The roots of the parabola are fixed at the horizontal eye lines and eye widths, as shown in Figure 1. The width of an eye is, therefore, set to e, with the roots of the conjugate pair being half of e on either side of the centre. k_a is a calibration constant. In the present instantiation of the interface, the author estimated all calibration constants. k_a was set so that for small values of arousal the eyes would appear to be in a neutral (partially opened) position, but for large negative values the eyes would appear closed (or almost closed), as if sleeping. Once the lower eye is calculated within the boundary of -e/2 < x < e/2, it is copied and placed in the appropriate locations based on the eye centre grids (shown in Figure 1 as a + over each eye). The parabolas are then flipped, as indicated in the upper_eye function in Figure 1.

    lower_eye(x) = k_a (x - e/2)(x + e/2) / a
    upper_eye(x) = -lower_eye(x)
    mouth(x) = k_v x^2 / v

Figure 1. General anatomical/algorithmic structure of EmotionFace. The code was implemented in HyperTalk (the scripting language used in HyperCard for Macintosh). Arousal and valence data are read from a file which is synchronized with an audio CD track.

The mouth is represented by another parabola whose vertex is fixed at the origin according to the general form:

    mouth(x) = k_v x^2 / v    (2)

As positive valence, v, increases, the mouth function deepens in a concave-up position, giving the appearance of a growing smile. When the valence becomes negative, the function flips to concave down, giving the appearance of a frown. For the discontinuity at v = 0, the asymptotic limit is assumed, and a straight, horizontal line is displayed (i.e. neither concave up nor concave down). k_v is a constant used for calibrating the mouth shape. An additional calibration (not shown mathematically, but indicated visually in Figure 1) is the position of the x-axis, and therefore the vertex. As the length of the parabola increases for increasing values of v, more space is required to draw the parabola and make it look believable (the parabolas shown for the mouth in Figure 1 demonstrate the most extreme values of negative and positive valence, values which are rarely approached in median subjective responses to musical stimuli). Therefore, as the positive value of v rises, the x-axis is lowered by a gradual, though small, amount. Similarly, as the valence becomes more negative, the x-axis is shifted upwards in small, gradual increments. The face and eyes are drawn within a circle representing the outline of the head. The circle was placed within a square boundary of 300 by 300 pixels. From this constraint the other constants (k_a and k_v) and axis positions were calculated.
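The three curves of Figure 1 can be sketched directly. The following is a minimal Python sketch, not the original HyperTalk implementation: the eye width e and the calibration constants k_a and k_v were estimated by the author and are not reported in the paper, so the values below are assumptions, as is the clamp that avoids division by zero at arousal = -100.

```python
# Sketch of the EmotionFace curves of Figure 1. E, K_A and K_V are
# assumed values; the paper's calibration constants are not reported.
E = 60.0     # eye width in pixels (assumed)
K_A = 4.0    # eye calibration constant (assumed)
K_V = 0.01   # mouth calibration constant (assumed)

def lower_eye(x, arousal):
    """lower_eye(x) = k_a (x - e/2)(x + e/2) / a, where a = arousal + 100
    keeps the parabola concave up for arousal in [-100, +100]."""
    a = max(arousal + 100.0, 1e-6)  # assumed clamp: avoids a = 0 at arousal = -100
    return K_A * (x - E / 2) * (x + E / 2) / a

def upper_eye(x, arousal):
    """The upper eyelid is the lower curve flipped about the eye line."""
    return -lower_eye(x, arousal)

def mouth(x, valence):
    """mouth(x) = k_v x^2 / v; at the v = 0 discontinuity the asymptotic
    limit is drawn as a straight horizontal line, as in the paper."""
    if valence == 0:
        return 0.0
    return K_V * x * x / valence

# The roots of the eye parabola sit at +/- e/2, so an eye keeps a fixed
# width while arousal changes only its vertical extent.
# lower_eye(0, 0) -> -36.0 with these assumed constants
```

Sampling lower_eye and upper_eye over -e/2 < x < e/2, and mouth over a valence-dependent range, yields the point lists from which the schematic face is drawn.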
Valence and arousal values were synchronized with an audio CD playing the music corresponding to the gathered emotion data. The audio CD track time elapsed was read by an external function written by Sudderth [15].

4. SAMPLE OUTPUTS

The algorithm was applied to data from an earlier study in which arousal and valence data were already collected [13]. The samples shown here were selected to exemplify parts of the music where extreme emotional responses occurred. The first example (Figure 2) shows one of the lowest valence points occurring in the slow movement of Concierto de Aranjuez by Rodrigo, which occurs around the 263rd second of the piece in the recording used. The mouth shape is a negative parabola because the valence is negative (-32 on a scale of -100 to +100), reflecting the frown, and the eyes are in a roughly neutral position, though slightly closing because of the small, negative arousal (-7, also on a scale of -100 to +100).

Figure 2. EmotionFace display at the 263rd second of the Aranjuez concerto, where arousal was -7 and valence was -32 (each on a -100 to +100 scale).

Figure 3 shows the dynamic progression of the face at the opening of Dvorak's Slavonic Dance No. 1, Op. 46, which commences with a loud, sustained chord. EmotionFace always commences a piece in the neutral position (approximately 0 valence and arousal; the data upon which the facial expressions were calculated for the Dvorak can be seen in Table 1). While there is known to exist some time lag between musical activity and associated emotional response [4], the startle of the loud beginning of this piece (see score in Figure 3) promptly leads EmotionFace to a wide eye opening, before the valence of the music is noticeably altered. After a few seconds, when the furiant has commenced in the major key, the valence increases, as reflected in the growing, concave-up, parabolic smile, most noticeably at about the 6th second, where there is a clear visual indication of a positive valence expression.
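The second-by-second redraw described above amounts to a table lookup: a series of (arousal, valence) medians indexed by elapsed track time. A minimal sketch, with invented data values (the real medians come from [13]):

```python
# Hypothetical sketch of the synchronization step: second-by-second
# (arousal, valence) medians stored in a list indexed by elapsed time,
# with the face redrawn from the sample for the current second.
# The data values below are invented for illustration.

samples = [
    (0, 0),     # t = 0 s: neutral start
    (55, 5),    # t = 1 s: arousal jumps after an opening chord
    (60, 10),
    (58, 25),
    (57, 40),   # valence grows as the music settles
]

def sample_at(elapsed_seconds, data):
    """Return the (arousal, valence) pair for the current second,
    holding the last sample once the data runs out."""
    index = min(int(elapsed_seconds), len(data) - 1)
    return data[index]

# sample_at(1.7, samples) -> (55, 5): each second-by-second value holds
# until the next sample is reached.
```

Each redraw of the face then calls sample_at with the elapsed track time and feeds the pair into the eye and mouth functions.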
Table 1: Sample-by-sample median values of continuous ratings of subjectively determined arousal and valence expressed by Dvorak's Slavonic Dance, shown in Figure 3. Rated by 67 participants from an earlier study [13]. [Column headings: Time (seconds), Arousal (-100 to +100), Valence (-100 to +100); table values not preserved in this copy.]
[Figure 3. EmotionFace screen shots for the first 11 seconds of Slavonic Dance No. 1, Op. 46 by Dvorak, one face drawn for each second of music. The second half-dozen screen shots are shown below the musical score for ease of viewing. Musical score source: Antonin Dvorak, Slavonic Dances No. 1, Op. 46, in full score, Dover Publications, New York (1987), pp. 1-2.]
5. CONCLUSION

The EmotionFace interface provides an alternative, intuitive method of displaying emotion expressed by music. The approach provides another tool for examining dynamic and time-dependent emotional responses to music. In some respects it provides a more meaningful display than a two-dimensional plot of arousal and valence because of the human's strong affinity toward the interpretation of facial expressions. The method may have applications for pedagogues in teaching students about the kinds of emotion that music can express. On a more trivial level, it could be used to accompany music on people's audio reproduction systems. If this is to occur, a database of emotional responses to many pieces of music needs to be gathered.

More serious future work needs to address the lag structure between the emotion expressed by the music and when it is noticed by the listener. For example, in the Dvorak excerpt described, there is a fairly sudden increase in arousal response almost immediately (in about one or two seconds) after the piece commences. However, Schubert & Dunsmuir [16] demonstrated that the typical delay between music and emotion is around 3 seconds. Should the facial model reflect this dynamically varying delay between causal musical features and emotional response, or should it be tied directly (instantaneously) to the musical features? Further work will also examine alternative algorithms for displaying facial expressions, or the use of a database of standardized emotional expressions. Eventually, it may be possible to extract emotional information directly from the musical signal. This is most likely to occur when subjective measurements can be modeled with musical features alone [17], and when these musical features can be automatically extracted in real time. Alternatively, it may become possible to compose pieces of music based on facial expressions.
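As one hedged illustration of the lag question raised above, a response series could be shifted back by the roughly 3-second delay reported by Schubert & Dunsmuir [16], so that each face lines up with the musical feature that plausibly caused it. The function below is an assumption about how such an alignment might be sketched, not the paper's method:

```python
# Assumed sketch: align a second-by-second response series with its
# causal musical features by shifting it back lag_seconds, padding the
# tail with the final value so the series keeps its length.

def align_to_features(responses, lag_seconds=3):
    """Shift a per-second response series back by lag_seconds."""
    if not responses:
        return []
    shifted = responses[lag_seconds:]
    shifted += [responses[-1]] * min(lag_seconds, len(responses))
    return shifted[:len(responses)]  # keep the original length

# align_to_features([0, 0, 0, 10, 20, 30], 3) -> [10, 20, 30, 30, 30, 30]
```

A fixed shift is the crudest model; the dynamically varying delay discussed in the text would require a time-dependent lag instead of a constant.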
With our current knowledge of the relationship between arousal and valence in both facial expression and in music, the results would most likely be quite primitive. However, in years to come, the prospect of facially produced music composition may become a viable proposition.

6. ACKNOWLEDGEMENT

This research was supported by an Australian Research Council Grant ARC-DP. I am grateful to Daniel Woo from the School of Computer Science and Engineering at the University of New South Wales for his assistance in the preparation of this paper.

7. REFERENCES

[1] Radocy, R. E. & Boyle, J. D., Psychological foundations of musical behaviour (2nd ed.), Springfield, IL: Charles C. Thomas (1988).
[2] Ekman, P. & Friesen, W. V., Constants across cultures in the face and emotion, J. Personality Social Psychol., 17(2) (1971).
[3] Gabrielsson, A., Emotion perceived and emotion felt: Same or different? Musicae Scientiae, Spec. Issue (2002).
[4] Schubert, E., Continuous Measurement of Self-Report Emotional Response to Music, in P. Juslin and J. Sloboda (Eds.), Music and Emotion: Theory and Research, Oxford University Press, Oxford (2001).
[5] Madsen, C. K., Emotional response to music as measured by the two-dimensional CRDI, Journal of Music Therapy, 34 (1997).
[6] Schubert, E., Measuring Temporal Emotional Response to Music Using the Two Dimensional Emotion Space, Proceedings of the 4th International Conference for Music Perception and Cognition, Montreal, Canada (11-15 August) (1996).
[7] Darwin, C., The Expression of the Emotions in Man and Animals, University of Chicago Press, Chicago (1965/1872).
[8] Adolphs, R. et al., Cortical systems for the recognition of emotion in facial expressions, Journal of Neuroscience, 16 (1996).
[9] Davidson, R. J. & Irwin, W., The functional neuroanatomy of emotion and affective style, Trends in Cognitive Sciences, 3(1) (1999).
[10] Russell, J. A., Affective space is bipolar, Journal of Personality and Social Psychology, 37 (1979).
[11] Fasel, B. & Luettin, J., Automatic facial expression analysis: a survey, Pattern Recognition, 36 (2003).
[12] Morris, J. S., de Bonis, M. & Dolan, R. J., Human Amygdala Responses to Fearful Eyes, NeuroImage, 17(1) (September 2002).
[13] Schubert, E., Measuring Emotion Continuously: Validity and Reliability of the Two Dimensional Emotion Space, Australian Journal of Psychology, 51 (1999).
[14] Du, Y. & Lin, X., Emotional facial expression model building, Pattern Recognition Letters, 24(16) (2003).
[15] Sudderth, J., CoreCD (Version 1.4) [computer software], Core Development Group, Inc. (1995).
[16] Schubert, E. & Dunsmuir, W., Regression modelling continuous data in music psychology, in Suk Won Yi (Ed.), Music, Mind, and Science, Seoul National University Press (1999).
[17] Schubert, E., Modelling emotional response with continuously varying musical features, Music Perception, 21(4) (2004).
More informationinter.noise 2000 The 29th International Congress and Exhibition on Noise Control Engineering August 2000, Nice, FRANCE
Copyright SFA - InterNoise 2000 1 inter.noise 2000 The 29th International Congress and Exhibition on Noise Control Engineering 27-30 August 2000, Nice, FRANCE I-INCE Classification: 7.9 THE FUTURE OF SOUND
More informationReducing False Positives in Video Shot Detection
Reducing False Positives in Video Shot Detection Nithya Manickam Computer Science & Engineering Department Indian Institute of Technology, Bombay Powai, India - 400076 mnitya@cse.iitb.ac.in Sharat Chandran
More informationPerceptual dimensions of short audio clips and corresponding timbre features
Perceptual dimensions of short audio clips and corresponding timbre features Jason Musil, Budr El-Nusairi, Daniel Müllensiefen Department of Psychology, Goldsmiths, University of London Question How do
More informationThe Research of Controlling Loudness in the Timbre Subjective Perception Experiment of Sheng
The Research of Controlling Loudness in the Timbre Subjective Perception Experiment of Sheng S. Zhu, P. Ji, W. Kuang and J. Yang Institute of Acoustics, CAS, O.21, Bei-Si-huan-Xi Road, 100190 Beijing,
More informationPHGN 480 Laser Physics Lab 4: HeNe resonator mode properties 1. Observation of higher-order modes:
PHGN 480 Laser Physics Lab 4: HeNe resonator mode properties Due Thursday, 2 Nov 2017 For this lab, you will explore the properties of the working HeNe laser. 1. Observation of higher-order modes: Realign
More informationSentiment Extraction in Music
Sentiment Extraction in Music Haruhiro KATAVOSE, Hasakazu HAl and Sei ji NOKUCH Department of Control Engineering Faculty of Engineering Science Osaka University, Toyonaka, Osaka, 560, JAPAN Abstract This
More informationOff-line Handwriting Recognition by Recurrent Error Propagation Networks
Off-line Handwriting Recognition by Recurrent Error Propagation Networks A.W.Senior* F.Fallside Cambridge University Engineering Department Trumpington Street, Cambridge, CB2 1PZ. Abstract Recent years
More informationHowever, in studies of expressive timing, the aim is to investigate production rather than perception of timing, that is, independently of the listene
Beat Extraction from Expressive Musical Performances Simon Dixon, Werner Goebl and Emilios Cambouropoulos Austrian Research Institute for Artificial Intelligence, Schottengasse 3, A-1010 Vienna, Austria.
More informationEVOLVING DESIGN LAYOUT CASES TO SATISFY FENG SHUI CONSTRAINTS
EVOLVING DESIGN LAYOUT CASES TO SATISFY FENG SHUI CONSTRAINTS ANDRÉS GÓMEZ DE SILVA GARZA AND MARY LOU MAHER Key Centre of Design Computing Department of Architectural and Design Science University of
More informationMachine-learning and R in plastic surgery Classification and attractiveness of facial emotions
Machine-learning and R in plastic surgery Classification and attractiveness of facial emotions satrday Belgrade Lubomír Štěpánek 1, 2 Pavel Kasal 2 Jan Měšťák 3 1 Institute of Biophysics and Informatics
More informationWelcome to My Favorite Human Behavior Hack
Welcome to My Favorite Human Behavior Hack Are you ready to watch the world in HD? Reading someone s face is a complex skill that needs to be practiced, honed and perfected. Luckily, I have created this
More informationQuarterly Progress and Status Report. Musicians and nonmusicians sensitivity to differences in music performance
Dept. for Speech, Music and Hearing Quarterly Progress and Status Report Musicians and nonmusicians sensitivity to differences in music performance Sundberg, J. and Friberg, A. and Frydén, L. journal:
More informationVivoSense. User Manual Galvanic Skin Response (GSR) Analysis Module. VivoSense, Inc. Newport Beach, CA, USA Tel. (858) , Fax.
VivoSense User Manual Galvanic Skin Response (GSR) Analysis VivoSense Version 3.1 VivoSense, Inc. Newport Beach, CA, USA Tel. (858) 876-8486, Fax. (248) 692-0980 Email: info@vivosense.com; Web: www.vivosense.com
More informationMultidimensional analysis of interdependence in a string quartet
International Symposium on Performance Science The Author 2013 ISBN tbc All rights reserved Multidimensional analysis of interdependence in a string quartet Panos Papiotis 1, Marco Marchini 1, and Esteban
More informationLeroy Anderson Went to Harvard
Jason Freeman Leroy Anderson Went to Harvard or percussion quartet About the music Leroy Anderson (198-1975) went to Harvard, where he studied composition with Walter Piston Ater being repeatedly turned
More informationINFLUENCE OF MUSICAL CONTEXT ON THE PERCEPTION OF EMOTIONAL EXPRESSION OF MUSIC
INFLUENCE OF MUSICAL CONTEXT ON THE PERCEPTION OF EMOTIONAL EXPRESSION OF MUSIC Michal Zagrodzki Interdepartmental Chair of Music Psychology, Fryderyk Chopin University of Music, Warsaw, Poland mzagrodzki@chopin.edu.pl
More informationIntelligent Music Systems in Music Therapy
Music Therapy Today Vol. V (5) November 2004 Intelligent Music Systems in Music Therapy Erkkilä, J., Lartillot, O., Luck, G., Riikkilä, K., Toiviainen, P. {jerkkila, lartillo, luck, katariik, ptoiviai}@campus.jyu.fi
More informationCHAPTER 6 DESIGN OF HIGH SPEED COUNTER USING PIPELINING
149 CHAPTER 6 DESIGN OF HIGH SPEED COUNTER USING PIPELINING 6.1 INTRODUCTION Counters act as important building blocks of fast arithmetic circuits used for frequency division, shifting operation, digital
More informationPiotr KLECZKOWSKI, Magdalena PLEWA, Grzegorz PYDA
ARCHIVES OF ACOUSTICS 33, 4 (Supplement), 147 152 (2008) LOCALIZATION OF A SOUND SOURCE IN DOUBLE MS RECORDINGS Piotr KLECZKOWSKI, Magdalena PLEWA, Grzegorz PYDA AGH University od Science and Technology
More informationWhite Paper. Uniform Luminance Technology. What s inside? What is non-uniformity and noise in LCDs? Why is it a problem? How is it solved?
White Paper Uniform Luminance Technology What s inside? What is non-uniformity and noise in LCDs? Why is it a problem? How is it solved? Tom Kimpe Manager Technology & Innovation Group Barco Medical Imaging
More informationEffects of Using Graphic Notations. on Creativity in Composing Music. by Australian Secondary School Students. Myung-sook Auh
Effects of Using Graphic Notations on Creativity in Composing Music by Australian Secondary School Students Myung-sook Auh Centre for Research and Education in the Arts University of Technology, Sydney
More informationLian Loke and Toni Robertson (eds) ISBN:
The Body in Design Workshop at OZCHI 2011 Design, Culture and Interaction, The Australasian Computer Human Interaction Conference, November 28th, Canberra, Australia Lian Loke and Toni Robertson (eds)
More informationCase Study: Can Video Quality Testing be Scripted?
1566 La Pradera Dr Campbell, CA 95008 www.videoclarity.com 408-379-6952 Case Study: Can Video Quality Testing be Scripted? Bill Reckwerdt, CTO Video Clarity, Inc. Version 1.0 A Video Clarity Case Study
More informationPracticum 3, Fall 2010
A. F. Miller 2010 T1 Measurement 1 Practicum 3, Fall 2010 Measuring the longitudinal relaxation time: T1. Strychnine, dissolved CDCl3 The T1 is the characteristic time of relaxation of Z magnetization
More informationLab 6: Edge Detection in Image and Video
http://www.comm.utoronto.ca/~dkundur/course/real-time-digital-signal-processing/ Page 1 of 1 Lab 6: Edge Detection in Image and Video Professor Deepa Kundur Objectives of this Lab This lab introduces students
More informationWhat is Statistics? 13.1 What is Statistics? Statistics
13.1 What is Statistics? What is Statistics? The collection of all outcomes, responses, measurements, or counts that are of interest. A portion or subset of the population. Statistics Is the science of
More informationEmotions perceived and emotions experienced in response to computer-generated music
Emotions perceived and emotions experienced in response to computer-generated music Maciej Komosinski Agnieszka Mensfelt Institute of Computing Science Poznan University of Technology Piotrowo 2, 60-965
More informationAPPLICATIONS OF A SEMI-AUTOMATIC MELODY EXTRACTION INTERFACE FOR INDIAN MUSIC
APPLICATIONS OF A SEMI-AUTOMATIC MELODY EXTRACTION INTERFACE FOR INDIAN MUSIC Vishweshwara Rao, Sachin Pant, Madhumita Bhaskar and Preeti Rao Department of Electrical Engineering, IIT Bombay {vishu, sachinp,
More informationAssessment may include recording to be evaluated by students, teachers, and/or administrators in addition to live performance evaluation.
Title of Unit: Choral Concert Performance Preparation Repertoire: Simple Gifts (Shaker Song). Adapted by Aaron Copland, Transcribed for Chorus by Irving Fine. Boosey & Hawkes, 1952. Level: NYSSMA Level
More informationCSC475 Music Information Retrieval
CSC475 Music Information Retrieval Monophonic pitch extraction George Tzanetakis University of Victoria 2014 G. Tzanetakis 1 / 32 Table of Contents I 1 Motivation and Terminology 2 Psychacoustics 3 F0
More informationTV Synchronism Generation with PIC Microcontroller
TV Synchronism Generation with PIC Microcontroller With the widespread conversion of the TV transmission and coding standards, from the early analog (NTSC, PAL, SECAM) systems to the modern digital formats
More informationHybrid active noise barrier with sound masking
Hybrid active noise barrier with sound masking Xun WANG ; Yosuke KOBA ; Satoshi ISHIKAWA ; Shinya KIJIMOTO, Kyushu University, Japan ABSTRACT In this paper, a hybrid active noise barrier (ANB) with sound
More informationCommentary on the Arranging Process of the Octet in G Minor
Grand Valley State University ScholarWorks@GVSU Honors Projects Undergraduate Research and Creative Practice 2012 Commentary on the Arranging Process o the Octet in G Minor Mikay McKibbin Grand Valley
More informationWesleyan University AGAINST CONTEXT: HYBRIDITY AS A MEANS TO REDUCE ITS IMPACT.
Wesleyan University AGAINST CONTEXT: HYBRIDITY AS A MEANS TO REDUCE ITS IMPACT. By Tomasz Arnold Faculty Advisor: Ronald J Kuivila Readers: Paula Matthusen and Kate Galloway A Thesis submitted to Faculty
More informationAutomatic Detection of Emotion in Music: Interaction with Emotionally Sensitive Machines
Automatic Detection of Emotion in Music: Interaction with Emotionally Sensitive Machines Cyril Laurier, Perfecto Herrera Music Technology Group Universitat Pompeu Fabra Barcelona, Spain {cyril.laurier,perfecto.herrera}@upf.edu
More informationRelation between the overall unpleasantness of a long duration sound and the one of its events : application to a delivery truck
Relation between the overall unpleasantness of a long duration sound and the one of its events : application to a delivery truck E. Geissner a and E. Parizet b a Laboratoire Vibrations Acoustique - INSA
More informationA Framework for Segmentation of Interview Videos
A Framework for Segmentation of Interview Videos Omar Javed, Sohaib Khan, Zeeshan Rasheed, Mubarak Shah Computer Vision Lab School of Electrical Engineering and Computer Science University of Central Florida
More informationBlueline, Linefree, Accuracy Ratio, & Moving Absolute Mean Ratio Charts
INTRODUCTION This instruction manual describes for users of the Excel Standard Celeration Template(s) the features of each page or worksheet in the template, allowing the user to set up and generate charts
More informationMUSICAL MOODS: A MASS PARTICIPATION EXPERIMENT FOR AFFECTIVE CLASSIFICATION OF MUSIC
12th International Society for Music Information Retrieval Conference (ISMIR 2011) MUSICAL MOODS: A MASS PARTICIPATION EXPERIMENT FOR AFFECTIVE CLASSIFICATION OF MUSIC Sam Davies, Penelope Allen, Mark
More informationfor Digital IC's Design-for-Test and Embedded Core Systems Alfred L. Crouch Prentice Hall PTR Upper Saddle River, NJ
Design-for-Test for Digital IC's and Embedded Core Systems Alfred L. Crouch Prentice Hall PTR Upper Saddle River, NJ 07458 www.phptr.com ISBN D-13-DflMfla7-l : Ml H Contents Preface Acknowledgments Introduction
More informationEmotional Remapping of Music to Facial Animation
Preprint for ACM Siggraph 06 Video Game Symposium Proceedings, Boston, 2006 Emotional Remapping of Music to Facial Animation Steve DiPaola Simon Fraser University steve@dipaola.org Ali Arya Carleton University
More informationUNIVERSITY OF SOUTH ALABAMA PSYCHOLOGY
UNIVERSITY OF SOUTH ALABAMA PSYCHOLOGY 1 Psychology PSY 120 Introduction to Psychology 3 cr A survey of the basic theories, concepts, principles, and research findings in the field of Psychology. Core
More informationSound visualization through a swarm of fireflies
Sound visualization through a swarm of fireflies Ana Rodrigues, Penousal Machado, Pedro Martins, and Amílcar Cardoso CISUC, Deparment of Informatics Engineering, University of Coimbra, Coimbra, Portugal
More informationA prototype system for rule-based expressive modifications of audio recordings
International Symposium on Performance Science ISBN 0-00-000000-0 / 000-0-00-000000-0 The Author 2007, Published by the AEC All rights reserved A prototype system for rule-based expressive modifications
More informationSHORT TERM PITCH MEMORY IN WESTERN vs. OTHER EQUAL TEMPERAMENT TUNING SYSTEMS
SHORT TERM PITCH MEMORY IN WESTERN vs. OTHER EQUAL TEMPERAMENT TUNING SYSTEMS Areti Andreopoulou Music and Audio Research Laboratory New York University, New York, USA aa1510@nyu.edu Morwaread Farbood
More informationThe Tone Height of Multiharmonic Sounds. Introduction
Music-Perception Winter 1990, Vol. 8, No. 2, 203-214 I990 BY THE REGENTS OF THE UNIVERSITY OF CALIFORNIA The Tone Height of Multiharmonic Sounds ROY D. PATTERSON MRC Applied Psychology Unit, Cambridge,
More informationMachine Learning Term Project Write-up Creating Models of Performers of Chopin Mazurkas
Machine Learning Term Project Write-up Creating Models of Performers of Chopin Mazurkas Marcello Herreshoff In collaboration with Craig Sapp (craig@ccrma.stanford.edu) 1 Motivation We want to generative
More information