Proceedings of Meetings on Acoustics
Proceedings of Meetings on Acoustics, Volume 19, ICA 2013 Montreal, Canada, 2-7 June 2013
Musical Acoustics Session 3pMU: Perception and Orchestration Practice

3pMU4. Predicting blend between orchestral timbres using generalized spectral-envelope descriptions

Sven-Amin Lembke*, Eugene Narmour and Stephen McAdams

*Corresponding author's address: Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT), Schulich School of Music, McGill University, Montréal, H3A 1E3, Québec, Canada

Composers rely on implicit knowledge of instrument timbres to achieve certain effects in orchestration. In the context of perceptual blending between orchestral timbres, holistic acoustical descriptions of instrument-specific traits can assist in the selection of suitable instrument combinations. The chosen mode of description utilizes spectral-envelope estimates that are acquired as pitch-invariant descriptions of instruments at different dynamic markings. Prominent local spectral-envelope traits, such as spectral maxima or formants, have been shown to influence timbre blending, involving frequency relationships between local spectral features, their prominence as formants, and constraints imposed by the human auditory system. We present computational approaches to predict timbre blend that are based on these factors and explain around 85% of the variance in behavioral timbre-blend data. Multiple linear regression is employed in modeling a range of behavioral data acquired in different experimental investigations. These include parametric investigations of formant frequency and magnitude relationships as well as arbitrary combinations of recorded instrument audio samples in dyads or triads. The cataloguing of generalized acoustical descriptions of instruments and associated timbre-blend predictions for various instrument combinations could serve as a valuable aid to orchestration practice in the future.
Published by the Acoustical Society of America through the American Institute of Physics. © 2013 Acoustical Society of America [DOI: / ]. Received 28 Jan 2013; published 2 Jun 2013. Proceedings of Meetings on Acoustics, Vol. 19 (2013), Page 1
INTRODUCTION

When orchestrators seek blended timbres of simultaneously sounding instruments, they rely on experimentation, prior experience, or examples from the musical repertoire. Moreover, suitable instrument combinations for blend are widely discussed in orchestration treatises, which are themselves based on subjective observations made by their authors [1, 2, 3]. Therefore, an acoustical description of instrument-specific traits across extended pitch ranges could present a valuable tool to orchestrators, allowing objective predictions of blend between arbitrary instrument combinations. Important perceptual cues for blend are known to be based on note-onset synchrony, partial-tone harmonicity, and spectral features [4, 5]. The first two factors mainly involve requirements that must be fulfilled by the musical composition itself and demand its precise execution during musical performance. In contrast, an orchestrator's choice of blending instruments is more likely motivated by the spectral features of particular instruments.

Spectral-Envelope Description

Spectral-envelope representations aid the identification and description of prominent spectral features that characterize individual instruments and could serve as instrument-specific traits. For orchestral wind instruments, previous studies have suggested the perceptual relevance of pitch-invariant spectral traits that characterize their timbre. The existence of stable local spectral maxima across a wide pitch range has been reported for these instruments [6, 7]; these maxima are also termed formants by analogy with the human voice. Furthermore, frequency alignment of formants between instruments has been argued to contribute to the percept of blend [8]. Certain aspects of this hypothesis have been replicated in perceptual investigations, showing that the relative frequency location and magnitude difference of main formants are critical to blend [9].
Pitch-invariant spectral traits such as formants can be identified through an empirical spectral-envelope estimation method. Spectral envelopes are estimated by applying a curve-fitting procedure to composite distributions of partial tones compiled across the entire pitch range of an instrument. Figure 1 shows such an estimate for the bassoon, exhibiting a prominent main formant at 500 Hz.

FIGURE 1: Empirical spectral-envelope estimate for the bassoon at mf, derived from a composite distribution of partial tones across the instrument's entire pitch range. (The figure plots power spectral density in dB against frequency in Hz, overlaying the spectral-envelope estimate on the composite partial-tone distribution.)

In most cases, spectral-envelope shape varies as a function of the dynamic marking, and as a result, spectral-envelope descriptions should be assessed separately for different dynamics. However, as shown in Figure 2, the frequency location and shape of the bassoon's main formant still appear to be quite robust to changes in dynamics. This points to a potential utility of main formants as stable perceptual cues that remain largely unaffected by musical performance. In summary, it can reasonably be assumed that such generalized, instrument-specific spectral-envelope descriptions could represent reliable predictors of blend.

FIGURE 2: Temporal spectral-envelope evolution of a bassoon playing a crescendo-decrescendo on the pitch G3, computed with True-Envelope estimation [10].

MODELLING TIMBRE BLEND BASED ON SPECTRAL FEATURES

Computational models may be used as objective tools to predict timbre blend between arbitrary instrument combinations. Linear correlation can be employed to associate behavioral blend measures with single acoustical features [5, 11], without, however, assessing whether a combination of descriptors might model the behavioral data better. Modelling the data on multiple descriptor variables would furthermore assess the relative contributions of different acoustical features to blend. Past attempts utilizing stepwise-regression models have succeeded in explaining up to 63% of the variance in behavioral blend ratings [5]. This investigation considers the multivariate option by employing linear multiple regression with a stepwise iteration scheme. A number of spectral-envelope features are tested as potential regressors. These comprise global descriptors of spectral-envelope traits, such as spectral centroid and spectral slope, as well as local spectral-envelope descriptors characterizing formant frequency location and magnitude. For example, the formant descriptors include the frequency at which the formant maximum is located as well as the frequency bounds below and above the maximum at which the magnitude has decreased by 3 dB or 6 dB.

Modelled Data Sets

In order to attain greater generalizability, timbre-blend predictions are assessed across three independent data sets of behavioral blend ratings, denoted A, B and C. The three sets differ with regard to the behavioral rating methods and the stimuli used (see Table 1).
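Before the behavioral data sets are detailed further, the acoustic front end described above — pooling partial tones across an instrument's pitch range, fitting a smooth envelope, and reading off formant descriptors such as the 3 dB bounds — can be illustrated with a minimal Python sketch. This is an illustrative assumption, not the authors' implementation: the polynomial curve fit in log-frequency, the synthetic "bassoon-like" input, and all function names are hypothetical stand-ins for the empirical estimation procedure.

```python
import numpy as np

def composite_partial_distribution(notes):
    """Pool (frequency, level-in-dB) points of all partials across pitches."""
    freqs, levels = [], []
    for f0, amps in notes:
        for k, amp in enumerate(amps, start=1):
            freqs.append(k * f0)
            levels.append(20.0 * np.log10(amp))
    return np.asarray(freqs), np.asarray(levels)

def fit_envelope(freqs, levels_db, degree=4):
    """Fit a smooth curve (polynomial in log-frequency) to the pooled points."""
    coefs = np.polyfit(np.log10(freqs), levels_db, degree)
    return lambda f: np.polyval(coefs, np.log10(f))

def formant_descriptors(env, fmin=150.0, fmax=3000.0, drop_db=3.0):
    """Main-formant frequency plus the bounds below/above the maximum
    at which the envelope has dropped by `drop_db` dB."""
    grid = np.linspace(fmin, fmax, 4000)
    level = env(grid)
    i = int(np.argmax(level))
    target = level[i] - drop_db
    below = grid[:i][level[:i] <= target]
    above = grid[i:][level[i:] <= target]
    lower = below[-1] if below.size else grid[0]
    upper = above[0] if above.size else grid[-1]
    return grid[i], lower, upper

# Synthetic "bassoon-like" input (purely illustrative): for each fundamental
# between 100 and 400 Hz, partial amplitudes peak near 500 Hz, with jitter.
rng = np.random.default_rng(0)
notes = []
for f0 in np.linspace(100.0, 400.0, 12):
    fk = np.arange(1, int(4000 // f0) + 1) * f0
    amps = np.exp(-(np.log10(fk / 500.0)) ** 2 / 0.08)
    amps *= 10.0 ** rng.normal(0.0, 0.05, size=fk.size)  # measurement jitter
    notes.append((f0, amps))

env = fit_envelope(*composite_partial_distribution(notes))
peak_hz, lower_3db, upper_3db = formant_descriptors(env)
```

On this synthetic input, the fitted envelope recovers a main formant near 500 Hz, analogous to the bassoon estimate in Figure 1; passing `drop_db=6.0` yields the wider 6 dB bounds mentioned above.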
Set B involves a single rating per experimental trial, employing the entire range of the rating scale on a global level, i.e., across all trials. In contrast, sets A and C are based on trials involving multiple stimuli and a corresponding number of ratings, with participants asked to employ the entire scale range on a local level, i.e., relative to the stimuli presented within a given trial. In addition, set A stems from dyads between synthesized analogues of particular instruments and their audio-sample counterparts, whereas sets B and C are based on arbitrary combinations of sampled instruments.
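The multiple-regression modelling outlined earlier — fitting blend ratings on several spectral descriptors and comparing their standardized beta coefficients — can be sketched as follows. The data here are synthetic and the dominance of one regressor is built in by construction; the variable names (`formant_descr`, `centroid_diff`) are hypothetical stand-ins, not the study's actual descriptors.

```python
import numpy as np

def standardized_betas(X, y):
    """Ordinary least squares on z-scored variables; returns the
    standardized coefficients (betas) and the model's R^2."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    yz = (y - y.mean()) / y.std()
    A = np.column_stack([np.ones(len(yz)), Xz])   # intercept + regressors
    coef, *_ = np.linalg.lstsq(A, yz, rcond=None)
    resid = yz - A @ coef
    r2 = 1.0 - (resid @ resid) / (yz @ yz)
    return coef[1:], r2

# Synthetic ratings for 176 cases, driven strongly by a local formant
# descriptor and weakly by a centroid difference (illustrative only).
rng = np.random.default_rng(1)
n = 176
formant_descr = rng.normal(size=n)   # hypothetical local descriptor
centroid_diff = rng.normal(size=n)   # hypothetical global descriptor
blend = 5.0 * formant_descr + 1.0 * centroid_diff + rng.normal(0.0, 2.0, size=n)

X = np.column_stack([formant_descr, centroid_diff])
betas, r2 = standardized_betas(X, blend)
```

Because the regressors are z-scored, the betas are directly comparable: on this synthetic data the formant beta comes out several times larger than the centroid beta, which is the kind of relative-contribution comparison the regression analysis uses.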
TABLE 1: Behavioral data sets used for regression models and their differences in rating method and stimulus type.

Set  Rating scale  Rated stimuli per trial  Stimuli
A    local         4                        dyads, wind instruments
B    global        1                        dyads, wind instruments
C    local         20                       triads, wind and string instruments

Preliminary Results

Only the models for data set A have been explored so far, as data sets B and C are still being acquired; therefore, only preliminary results can be reported at this stage. Data set A originates from a parametric investigation of relative frequency and magnitude relationships between the main formants of a variable, synthesized sound and its sampled counterpart [9]. The investigated instruments are flute, oboe, B-flat clarinet, bassoon, C trumpet, and French horn. Identical multiple regression solutions are obtained for two instrument subsets, based on a total of 176 cases. Both models explain around 85% of the variance in data set A [instrument subset 1: R²adj = .86, F(3,116) = , p < .0001; subset 2: R²adj = .86, F(3,52) = , p < .0001]. The models rely on two spectral regressors: 1) a formant-based descriptor relating spectral-envelope magnitude differences at the upper 3 dB frequency bound, and 2) the absolute difference in spectral centroid. Notably, the local spectral-envelope descriptor makes a much stronger contribution than the global descriptor based on the spectral centroid, with the standardized beta coefficients for the former being about five times larger.

CONCLUSION

Generalized, instrument-specific spectral-envelope descriptions can be shown to predict behavioral timbre blend to a promisingly high degree. The exploration of regression models on the remaining data sets will clarify the preliminary trends and expand the blend-prediction scenarios to arbitrary instrument combinations in dyads and triads, in the latter case even involving string instruments.
The joint evaluation of prediction models for all three data sets will allow more generalizable prediction approaches based on spectral features to be derived. This will also involve the consideration of auditory-model representations. It is hoped that the prediction models will ultimately make a significant contribution to establishing a generalized perceptual theory of blend with respect to spectral features. At the same time, they will motivate the cataloguing of holistic acoustical descriptions of instruments that allow timbre-blend predictions to be made for arbitrary instrument combinations, serving as a valuable aid to orchestration practice.

ACKNOWLEDGMENTS

The authors would like to thank Bennett Smith for his assistance in setting up the perceptual testing hardware and for programming the software interfaces used to acquire behavioral data sets B and C. We would also like to thank Kyra Parker and Emma Kast for their assistance in running the behavioral experiments leading to data sets B and C. This work was supported by a Schulich School of Music scholarship to SAL and by grants from the Natural Sciences and Engineering Research Council of Canada and the Canada Research Chairs program to SM.
REFERENCES

[1] N. Rimsky-Korsakov, Principles of Orchestration (Dover Publications, New York) (1964).
[2] C. Koechlin, Traité de l'orchestration: en quatre volumes (M. Eschig, Paris) (1954).
[3] C. Reuter, Klangfarbe und Instrumentation: Geschichte - Ursachen - Wirkung, Systemische Musikwissenschaft (Peter Lang, Frankfurt am Main) (2002).
[4] G. J. Sandell, Concurrent timbres in orchestration: a perceptual study of factors determining blend (Northwestern University) (1991).
[5] G. J. Sandell, "Roles for Spectral Centroid and Other Factors in Determining 'Blended' Instrument Pairings in Orchestration," Music Perception 13, (1995).
[6] K. E. Schumann, Physik der Klangfarben, Vol. 2, professorial dissertation, Universität Berlin, Berlin (1929).
[7] D. Luce and J. Clark, "Physical Correlates of Brass-Instrument Tones," The Journal of the Acoustical Society of America 42, (1967).
[8] C. Reuter, Die auditive Diskrimination von Orchesterinstrumenten - Verschmelzung und Heraushörbarkeit von Instrumentalklangfarben im Ensemblespiel (Peter Lang, Frankfurt am Main) (1996).
[9] S.-A. Lembke and S. McAdams, "Timbre blending of wind instruments: acoustics and perception," in Proc. 5th International Conference of Students of Systematic Musicology / SysMus12, 1-5 (Montreal, Canada) (2012).
[10] F. Villavicencio, A. Röbel, and X. Rodet, "Improving LPC Spectral Envelope Extraction of Voiced Speech by True-Envelope Estimation," in 2006 IEEE International Conference on Acoustics, Speech and Signal Processing Proceedings, I-869-I-872 (2006).
[11] D. Tardieu and S. McAdams, "Perception of dyads of impulsive and sustained instrument sounds," Music Perception 30, (2012).
More informationLEARNING TO CONTROL A REVERBERATOR USING SUBJECTIVE PERCEPTUAL DESCRIPTORS
10 th International Society for Music Information Retrieval Conference (ISMIR 2009) October 26-30, 2009, Kobe, Japan LEARNING TO CONTROL A REVERBERATOR USING SUBJECTIVE PERCEPTUAL DESCRIPTORS Zafar Rafii
More informationPREDICTING THE PERCEIVED SPACIOUSNESS OF STEREOPHONIC MUSIC RECORDINGS
PREDICTING THE PERCEIVED SPACIOUSNESS OF STEREOPHONIC MUSIC RECORDINGS Andy M. Sarroff and Juan P. Bello New York University andy.sarroff@nyu.edu ABSTRACT In a stereophonic music production, music producers
More informationPreferred acoustical conditions for musicians on stage with orchestra shell in multi-purpose halls
Toronto, Canada International Symposium on Room Acoustics 2013 June 9-11 ISRA 2013 Preferred acoustical conditions for musicians on stage with orchestra shell in multi-purpose halls Hansol Lim (lim90128@gmail.com)
More informationAutomatic Laughter Detection
Automatic Laughter Detection Mary Knox Final Project (EECS 94) knoxm@eecs.berkeley.edu December 1, 006 1 Introduction Laughter is a powerful cue in communication. It communicates to listeners the emotional
More informationMUSICAL INSTRUMENT RECOGNITION WITH WAVELET ENVELOPES
MUSICAL INSTRUMENT RECOGNITION WITH WAVELET ENVELOPES PACS: 43.60.Lq Hacihabiboglu, Huseyin 1,2 ; Canagarajah C. Nishan 2 1 Sonic Arts Research Centre (SARC) School of Computer Science Queen s University
More informationInstrument Timbre Transformation using Gaussian Mixture Models
Instrument Timbre Transformation using Gaussian Mixture Models Panagiotis Giotis MASTER THESIS UPF / 2009 Master in Sound and Music Computing Master thesis supervisors: Jordi Janer, Fernando Villavicencio
More informationReal-time magnetic resonance imaging investigation of resonance tuning in soprano singing
E. Bresch and S. S. Narayanan: JASA Express Letters DOI: 1.1121/1.34997 Published Online 11 November 21 Real-time magnetic resonance imaging investigation of resonance tuning in soprano singing Erik Bresch
More informationHUMANS have a remarkable ability to recognize objects
IEEE TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, VOL. 21, NO. 9, SEPTEMBER 2013 1805 Musical Instrument Recognition in Polyphonic Audio Using Missing Feature Approach Dimitrios Giannoulis,
More informationOn time: the influence of tempo, structure and style on the timing of grace notes in skilled musical performance
RHYTHM IN MUSIC PERFORMANCE AND PERCEIVED STRUCTURE 1 On time: the influence of tempo, structure and style on the timing of grace notes in skilled musical performance W. Luke Windsor, Rinus Aarts, Peter
More informationAutomatic music transcription
Music transcription 1 Music transcription 2 Automatic music transcription Sources: * Klapuri, Introduction to music transcription, 2006. www.cs.tut.fi/sgn/arg/klap/amt-intro.pdf * Klapuri, Eronen, Astola:
More informationThe influence of Room Acoustic Aspects on the Noise Exposure of Symphonic Orchestra Musicians
www.akutek.info PRESENTS The influence of Room Acoustic Aspects on the Noise Exposure of Symphonic Orchestra Musicians by R. H. C. Wenmaekers, C. C. J. M. Hak and L. C. J. van Luxemburg Abstract Musicians
More informationOBJECTIVE EVALUATION OF A MELODY EXTRACTOR FOR NORTH INDIAN CLASSICAL VOCAL PERFORMANCES
OBJECTIVE EVALUATION OF A MELODY EXTRACTOR FOR NORTH INDIAN CLASSICAL VOCAL PERFORMANCES Vishweshwara Rao and Preeti Rao Digital Audio Processing Lab, Electrical Engineering Department, IIT-Bombay, Powai,
More informationUniversity of California Press is collaborating with JSTOR to digitize, preserve and extend access to Music Perception: An Interdisciplinary Journal.
Roles for Spectral Centroid and Other Factors in Determining "Blended" Instrument Pairings in Orchestration Author(s): Gregory J. Sandell Source: Music Perception: An Interdisciplinary Journal, Vol. 13,
More informationFurther Topics in MIR
Tutorial Automatisierte Methoden der Musikverarbeitung 47. Jahrestagung der Gesellschaft für Informatik Further Topics in MIR Meinard Müller, Christof Weiss, Stefan Balke International Audio Laboratories
More informationPsychophysiological measures of emotional response to Romantic orchestral music and their musical and acoustic correlates
Psychophysiological measures of emotional response to Romantic orchestral music and their musical and acoustic correlates Konstantinos Trochidis, David Sears, Dieu-Ly Tran, Stephen McAdams CIRMMT, Department
More informationRecognising Cello Performers using Timbre Models
Recognising Cello Performers using Timbre Models Chudy, Magdalena; Dixon, Simon For additional information about this publication click this link. http://qmro.qmul.ac.uk/jspui/handle/123456789/5013 Information
More informationA SEGMENTAL SPECTRO-TEMPORAL MODEL OF MUSICAL TIMBRE
A SEGMENTAL SPECTRO-TEMPORAL MODEL OF MUSICAL TIMBRE Juan José Burred, Axel Röbel Analysis/Synthesis Team, IRCAM Paris, France {burred,roebel}@ircam.fr ABSTRACT We propose a new statistical model of musical
More informationHong Kong University of Science and Technology 2 The Information Systems Technology and Design Pillar,
Musical Timbre and Emotion: The Identification of Salient Timbral Features in Sustained Musical Instrument Tones Equalized in Attack Time and Spectral Centroid Bin Wu 1, Andrew Horner 1, Chung Lee 2 1
More informationExtending Interactive Aural Analysis: Acousmatic Music
Extending Interactive Aural Analysis: Acousmatic Music Michael Clarke School of Music Humanities and Media, University of Huddersfield, Queensgate, Huddersfield England, HD1 3DH j.m.clarke@hud.ac.uk 1.
More informationSONG HUI CHON. EDUCATION McGill University Montreal, QC, Canada Doctor of Philosophy in Music Technology (Advisor: Dr.
SONG HUI CHON Electrical, Computer, and Telecommunications Engineering Technology, College of Applied Science and Technology, Rochester Institute of Technology, Rochester, NY 14623, United States EDUCATION
More informationEffect of task constraints on the perceptual. evaluation of violins
Manuscript Click here to download Manuscript: SaitisManuscriptRevised.tex Saitis et al.: Perceptual evaluation of violins 1 Effect of task constraints on the perceptual evaluation of violins Charalampos
More informationSharp as a Tack, Bright as a Button: Timbral Metamorphoses in Saariaho s Sept Papillons
Society for Music Theory Milwaukee, WI November 7 th, 2014 Sharp as a Tack, Bright as a Button: Timbral Metamorphoses in Saariaho s Sept Papillons Nate Mitchell Indiana University Jacobs School of Music
More informationMUSICAL INSTRUMENT IDENTIFICATION BASED ON HARMONIC TEMPORAL TIMBRE FEATURES
MUSICAL INSTRUMENT IDENTIFICATION BASED ON HARMONIC TEMPORAL TIMBRE FEATURES Jun Wu, Yu Kitano, Stanislaw Andrzej Raczynski, Shigeki Miyabe, Takuya Nishimoto, Nobutaka Ono and Shigeki Sagayama The Graduate
More informationTempo and Beat Analysis
Advanced Course Computer Science Music Processing Summer Term 2010 Meinard Müller, Peter Grosche Saarland University and MPI Informatik meinard@mpi-inf.mpg.de Tempo and Beat Analysis Musical Properties:
More informationPractice makes less imperfect: the effects of experience and practice on the kinetics and coordination of flutists' fingers
Proceedings of the International Symposium on Music Acoustics (Associated Meeting of the International Congress on Acoustics) 25-31 August 2010, Sydney and Katoomba, Australia Practice makes less imperfect:
More informationAnalysis, Synthesis, and Perception of Musical Sounds
Analysis, Synthesis, and Perception of Musical Sounds The Sound of Music James W. Beauchamp Editor University of Illinois at Urbana, USA 4y Springer Contents Preface Acknowledgments vii xv 1. Analysis
More informationLEARNING SPECTRAL FILTERS FOR SINGLE- AND MULTI-LABEL CLASSIFICATION OF MUSICAL INSTRUMENTS. Patrick Joseph Donnelly
LEARNING SPECTRAL FILTERS FOR SINGLE- AND MULTI-LABEL CLASSIFICATION OF MUSICAL INSTRUMENTS by Patrick Joseph Donnelly A dissertation submitted in partial fulfillment of the requirements for the degree
More informationThe Psychology of Music
The Psychology of Music Third Edition Edited by Diana Deutsch Department of Psychology University of California, San Diego La Jolla, California AMSTERDAM BOSTON HEIDELBERG LONDON NEW YORK OXFORD PARIS
More information