1 Teasing the Music out of Digital Data. Matthias Mauch, November 2012.
2 About me
I come from Unna. Diplom in maths at Uni Rostock (2005). PhD at Queen Mary: Automatic Chord Transcription from Audio Using Computational Models of Musical Context (2010). Since then: AIST, Japan, and Last.fm. Now: Research Fellow and Lecturer at Queen Mary, University of London. My website:
3 Centre for Digital Music C4DM is part of School of Electronic Engineering and Computer Science at Queen Mary, University of London ~10 years old (founded in 2003) led by Mark Plumbley; more than 50 full-time members: academics (professors, lecturers), research staff, research students, guests
5 Areas in the C4DM
Audio Engineering: auto-mixing, feedback elimination, ... (Josh Reiss)
Interactional Sound & Music: interfaces for interaction with music, ... (Nick Bryan-Kinns)
Machine Listening: sparse models of audio, object coding, non-musical/speech sound classification, ... (Mark Plumbley)
Music Informatics (my area): automatic transcription, music classification and retrieval (by genre, mood, similarity), segmentation, ... (Simon Dixon)
Music Cognition: models of music in human brains, ... (Geraint Wiggins, Marcus Pearce)
New Research Areas: performance studies and augmented musical instruments, ... (Elaine Chew, Andrew McPherson)
8 Music Informatics (my area, led by Simon Dixon)
Harmony analysis: automatic chord transcription, chord progressions, key detection.
Transcription: multiple fundamental frequency estimation, semi-automatic techniques.
Music classification: genre classification, mood classification.
Other work: analysis of violoncello timbre in recordings, automatic classification of harpsichord temperament, beat tracking, drum patterns, ...
9 My work
PhD: audio chord transcription.
Post-doc: lyrics-to-audio alignment; Songle chord/key/beat tracking.
Research Fellow / Last.fm: Driver's Seat.
Harpsichord tuning estimation; DarwinTunes analysis of musical evolution.
10 Audio Chord Transcription I
[Diagram: a dynamic Bayesian network with, per beat i, nodes for metric position M_i, key K_i, chord C_i and bass B_i, emitting bass chroma X^bs_i and treble chroma X^tr_i.]
DBN models musical context [1][2]: bass, key, metric position.
2012 state-of-the-art adaptation: Ni et al. [3].
[Table: mean overlap rank of the plain model, Weller et al., and the full-context models (full M, full MB, full MBK).]
The full-context models give a significant improvement in chord transcription.
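The decoding idea behind these slides (chord states emitting chroma, smoothed by a temporal model) can be illustrated with a much smaller cousin of the full DBN. Everything here is an illustrative assumption, not the model of [1][2]: two binary chord templates, cosine similarity as the emission score, and a single self-transition probability.

```python
import numpy as np

# Toy Viterbi chord decoder: chord states emit chroma vectors, and the
# best path through a self-biased transition matrix gives a temporally
# smoothed chord sequence. Templates and probabilities are illustrative.
TEMPLATES = {
    "C:maj": [1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0],  # pitch classes C..B
    "A:min": [1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0],
}

def viterbi_chords(chroma, self_prob=0.8):
    """chroma: (n_frames, 12) array; returns one chord label per frame."""
    labels = list(TEMPLATES)
    T = np.array([TEMPLATES[l] for l in labels], dtype=float)
    T /= np.linalg.norm(T, axis=1, keepdims=True)
    C = chroma / (np.linalg.norm(chroma, axis=1, keepdims=True) + 1e-9)
    emis = np.log(C @ T.T + 1e-9)          # log cosine similarity
    n_states = len(labels)
    trans = np.full((n_states, n_states),
                    (1 - self_prob) / (n_states - 1))
    np.fill_diagonal(trans, self_prob)     # favour staying on a chord
    log_trans = np.log(trans)
    delta = emis[0].copy()
    back = np.zeros((len(chroma), n_states), dtype=int)
    for t in range(1, len(chroma)):
        scores = delta[:, None] + log_trans
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + emis[t]
    path = [int(delta.argmax())]
    for t in range(len(chroma) - 1, 0, -1):  # backtrace
        path.append(back[t, path[-1]])
    return [labels[s] for s in reversed(path)]
```

The full model jointly tracks key, bass and metric position as extra state variables; this sketch keeps only the chord chain to show why the decoded sequence is smoother than frame-wise template matching.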
13 Audio Chord Transcription II
Averaging features across repeated song segments [4]: non-systematic noise is attenuated, giving better results.
[Diagram: automatic segmentation into parts (part A, part B, part A, ...), comparing chord-correct regions over time between the baseline method and the method using automatic segmentation.]
[Figure 6.7: song-wise improvement in RCO in percentage points, autobeat-autoseg against autobeat-noseg; the accuracy of most of the 160 songs improves.]
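The segment-averaging idea can be sketched as follows. This is my simplification, not the exact method of [4]: the function name and the (label, start, end) segment representation are invented, and real repeats would first need beat alignment so that positions correspond.

```python
import numpy as np

# If structural analysis says two segments are repeats of the same part,
# average their feature frames position by position: non-systematic noise
# cancels while the shared harmonic content remains.
def average_repeats(features, segments):
    """features: (n_beats, d) array; segments: list of (label, start, end)
    with equal-length repeats per label. Returns a denoised copy."""
    out = features.astype(float).copy()
    by_label = {}
    for label, start, end in segments:
        by_label.setdefault(label, []).append((start, end))
    for spans in by_label.values():
        if len(spans) < 2:        # nothing to average against
            continue
        stacked = np.stack([features[s:e] for s, e in spans])
        mean = stacked.mean(axis=0)          # position-wise mean
        for s, e in spans:
            out[s:e] = mean
    return out
```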
15 Chordino & NNLS Chroma
NNLS Chroma [5] is a Vamp plugin (e.g. for Sonic Visualiser); source: projects/nnls-chroma. It contains Chordino, a basic chord estimator.
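For readers without the plugin, the general idea of a chromagram can be shown in a few lines. Note this is a plain spectral folding, not NNLS Chroma, which instead solves a non-negative least squares problem against note profiles; the function and its parameters are illustrative.

```python
import numpy as np

# Fold FFT bin energy onto the 12 pitch classes (0 = C) for one frame.
def simple_chroma(magnitudes, sr, n_fft, fmin=55.0, fmax=2000.0):
    """magnitudes: (n_fft // 2 + 1,) spectrum magnitudes for one frame."""
    chroma = np.zeros(12)
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / sr)
    for f, m in zip(freqs, magnitudes):
        if fmin <= f <= fmax:
            # frequency -> MIDI pitch -> pitch class
            pc = int(round(69 + 12 * np.log2(f / 440.0))) % 12
            chroma[pc] += m
    return chroma / (chroma.max() + 1e-9)   # peak-normalise
```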
16 SongPrompter
Input format: in the first verse, all lyrics and chords are given; in subsequent verses only the lyrics, and the chords are omitted. A blank line separates song segments; a heading defines the segment type.

Verse:
Bm G D A
Once you were my love, now just a friend,
Bm G D A
What a cruel thing to pretend.
Bm G D A
A mistake I made, there was a price to pay.
Bm G D A
In tears you walked away,

Verse:
When I see you hand in hand with some other
I slowly go insane.
Memories of the way we used to be...
Oh God, please stop the pain.

Chorus:
D G Em A
Oh, once in a life time
D/F# G A
Nothing can last forever
D/F# G
I know it's not too late
A7 F#/A#
Would you let this be our fate?
Bm G Asus4
I know you'd be right but please stay
A
Don't walk away

Instrumental:
Bm G D A Bm G A

Chorus:
Oh, once in a life time
Nothing can last forever
I know it's not too late
Would you let this be our fate?
I know you'd be right but please stay.
Don't walk away.
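A parser for the conventions just described might look like this. The helper names are hypothetical and the chord-symbol regex is a rough heuristic, not SongPrompter's actual code; a bare lyric word like "A" would be misread as a chord line.

```python
import re

# Blank lines separate segments, a trailing-colon heading names the
# segment type, and a line counts as a chord line if every token looks
# like a chord symbol (root, optional quality, optional slash bass).
CHORD = re.compile(r"^[A-G][#b]?(m|maj|min|sus|dim|aug)?\d*(/[A-G][#b]?)?$")

def is_chord_line(line):
    tokens = line.split()
    return bool(tokens) and all(CHORD.match(t) for t in tokens)

def parse_segments(text):
    segments, current = [], {"type": None, "lines": []}
    for line in text.splitlines():
        line = line.strip()
        if not line:                       # blank line closes the segment
            if current["lines"] or current["type"]:
                segments.append(current)
            current = {"type": None, "lines": []}
        elif line.endswith(":"):           # heading defines segment type
            current["type"] = line[:-1]
        else:
            kind = "chords" if is_chord_line(line) else "lyrics"
            current["lines"].append((kind, line))
    if current["lines"] or current["type"]:
        segments.append(current)
    return segments
```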
17 [Excerpt from the paper behind [6]: Hiromasa Fujihara, Masataka Goto, National Institute of Advanced Industrial Science and Technology (AIST), Japan. Figure 1: integrating chord information in the lyrics-to-audio alignment process (schematic illustration); chords printed black represent chord changes, grey chords are continued from a prior chord change.]
18 SongPrompter
Automatic alignment works best with speech and chord features [6].
Visual display from automatic alignment: lyrics, segmentation and chords.
Audio playback: original audio; auto-extracted bass and drum track.
Karaoke for guitarists!
20 SongPrompter demo
21 Songle Web Service
22 Songle.jp web service [7] (live and online)
Adding interaction: engaging user experience; insights through automatic annotations.
Anyone can contribute: it's social!
Use for MIR research: crowd-sourcing more training data; exposure to a broader audience.
25 Driver's Seat
Last.fm already has genre tags and similarity; we want a complement: intuitively understandable audio features such as harmonic creativity (structural change [8]), noisiness, energy, rhythmic regularity, ...
A Spotify app based on the Last.fm audio API.
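Two of the listed descriptors can be given toy definitions in a few lines. These are my stand-ins, not Last.fm's feature extractors: energy as frame RMS, and noisiness as spectral flatness (near 1 for white noise, near 0 for a pure tone).

```python
import numpy as np

def energy(frame):
    """Root-mean-square amplitude of one audio frame."""
    return float(np.sqrt(np.mean(frame ** 2)))

def noisiness(frame):
    """Spectral flatness: geometric / arithmetic mean of the magnitudes."""
    mag = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) + 1e-12
    return float(np.exp(np.mean(np.log(mag))) / np.mean(mag))
```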
26 [Driver's Seat architecture diagram, with components: Audio, Feature Extraction, Audio Feature API, Spotify ID API, Spotify Apps API.]
27 Driver's Seat
28 DarwinTunes
A project by Bob MacCallum and Armand Leroi at Imperial College; paper: [9].
Genetic algorithms evolve short musical loops: phenotype production & rating (selection by mean rating), then reproduction, recombination & mutation.
The selection process is web-based and crowd-sourced (>6000 unique voters).
Evolutionary analysis is based on fitness (votes) and phenotype (the sound surface).
Sound surface analysis: a scientific application of music informatics.
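The evolutionary loop can be sketched as below. This is a toy stand-in, not the DarwinTunes engine: genomes are lists of pitch classes, the fitness function replaces the crowd-sourced listener ratings of rendered audio, and all parameters are illustrative.

```python
import random

# Minimal generational loop: truncation selection by "rating", one-point
# recombination, and point mutation, with the parents kept (elitism).
def evolve(fitness, pop_size=20, genome_len=8, generations=30, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 11) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)       # selection by rating
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_len)    # recombination
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:                # mutation
                child[rng.randrange(genome_len)] = rng.randint(0, 11)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Stand-in "rating": prefer loops built from a C major triad.
best = evolve(lambda g: sum(1 for n in g if n in (0, 4, 7)))
```

The rise-then-plateau dynamic discussed on the next slide falls out of loops like this when fitness gains become fragile under recombination and mutation.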
29 DarwinTunes
[Plot: Chordino log-likelihood and Rhythmic Complexity over generations.]
Both measures indicate a drastic rise and subsequent stagnation.
The plateau is best explained by fragile features: despite the existence of better tunes, transmission imposes a limit.
30 Zukunftsmusik (future work)
Drum transcription: improve drum transcription by language modelling from a large corpus of symbolic drum patterns.
Singing research: make a user interface (Tony) for the quick and simple annotation of pitches in monophonic audio. How do singers correct pitch errors? Do we have a background tuning process in our heads?
Collaborate with ethnomusicologists, musicians, psychologists, ...
31 References
[1] Mauch, M., & Dixon, S. (2010). Simultaneous Estimation of Chords and Musical Context from Audio. IEEE Transactions on Audio, Speech, and Language Processing, 18(6).
[2] Mauch, M. (2010). Automatic Chord Transcription from Audio Using Computational Models of Musical Context. PhD thesis, Queen Mary University of London.
[3] Ni, Y., McVicar, M., Santos-Rodriguez, R., & De Bie, T. (2012). An End-to-End Machine Learning System for Harmonic Analysis of Music. IEEE Transactions on Audio, Speech, and Language Processing, in print.
[4] Mauch, M., Noland, K. C., & Dixon, S. (2009). Using Musical Structure to Enhance Automatic Chord Transcription. Proceedings of the 10th International Conference on Music Information Retrieval (ISMIR 2009).
[5] Mauch, M., & Dixon, S. (2010). Approximate Note Transcription for the Improved Identification of Difficult Chords. Proceedings of the 11th International Society for Music Information Retrieval Conference (ISMIR 2010).
[6] Mauch, M., Fujihara, H., & Goto, M. (2012). Integrating Additional Chord Information into HMM-Based Lyrics-to-Audio Alignment. IEEE Transactions on Audio, Speech, and Language Processing, 20(1).
[7] Goto, M., Yoshii, K., Fujihara, H., Mauch, M., & Nakano, T. (2011). Songle: A Web Service for Active Music Listening Improved by User Contributions. Proceedings of the 12th International Society for Music Information Retrieval Conference (ISMIR 2011).
[8] Mauch, M., & Levy, M. (2011). Structural Change on Multiple Time Scales as a Correlate of Musical Complexity. Proceedings of the 12th International Society for Music Information Retrieval Conference (ISMIR 2011).
[9] MacCallum, B., Mauch, M., Burt, A., & Leroi, A. M. (2012). Evolution of Music by Public Choice. Proceedings of the National Academy of Sciences of the United States of America, 109(30).
More informationRhythm related MIR tasks
Rhythm related MIR tasks Ajay Srinivasamurthy 1, André Holzapfel 1 1 MTG, Universitat Pompeu Fabra, Barcelona, Spain 10 July, 2012 Srinivasamurthy et al. (UPF) MIR tasks 10 July, 2012 1 / 23 1 Rhythm 2
More informationPiano Transcription MUMT611 Presentation III 1 March, Hankinson, 1/15
Piano Transcription MUMT611 Presentation III 1 March, 2007 Hankinson, 1/15 Outline Introduction Techniques Comb Filtering & Autocorrelation HMMs Blackboard Systems & Fuzzy Logic Neural Networks Examples
More informationµtunes: A Study of Musicality Perception in an Evolutionary Context
µtunes: A Study of Musicality Perception in an Evolutionary Context Kirill Sidorov Robin Hawkins Andrew Jones David Marshall Cardiff University, UK K.Sidorov@cs.cardiff.ac.uk ontario.cs.cf.ac.uk/mutunes
More informationMusical Examination to Bridge Audio Data and Sheet Music
Musical Examination to Bridge Audio Data and Sheet Music Xunyu Pan, Timothy J. Cross, Liangliang Xiao, and Xiali Hei Department of Computer Science and Information Technologies Frostburg State University
More informationOBJECTIVE EVALUATION OF A MELODY EXTRACTOR FOR NORTH INDIAN CLASSICAL VOCAL PERFORMANCES
OBJECTIVE EVALUATION OF A MELODY EXTRACTOR FOR NORTH INDIAN CLASSICAL VOCAL PERFORMANCES Vishweshwara Rao and Preeti Rao Digital Audio Processing Lab, Electrical Engineering Department, IIT-Bombay, Powai,
More informationMusCat: A Music Browser Featuring Abstract Pictures and Zooming User Interface
MusCat: A Music Browser Featuring Abstract Pictures and Zooming User Interface 1st Author 1st author's affiliation 1st line of address 2nd line of address Telephone number, incl. country code 1st author's
More informationDrumix: An Audio Player with Real-time Drum-part Rearrangement Functions for Active Music Listening
Vol. 48 No. 3 IPSJ Journal Mar. 2007 Regular Paper Drumix: An Audio Player with Real-time Drum-part Rearrangement Functions for Active Music Listening Kazuyoshi Yoshii, Masataka Goto, Kazunori Komatani,
More informationmir_eval: A TRANSPARENT IMPLEMENTATION OF COMMON MIR METRICS
mir_eval: A TRANSPARENT IMPLEMENTATION OF COMMON MIR METRICS Colin Raffel 1,*, Brian McFee 1,2, Eric J. Humphrey 3, Justin Salamon 3,4, Oriol Nieto 3, Dawen Liang 1, and Daniel P. W. Ellis 1 1 LabROSA,
More informationCharacteristics of Polyphonic Music Style and Markov Model of Pitch-Class Intervals
Characteristics of Polyphonic Music Style and Markov Model of Pitch-Class Intervals Eita Nakamura and Shinji Takaki National Institute of Informatics, Tokyo 101-8430, Japan eita.nakamura@gmail.com, takaki@nii.ac.jp
More informationPOST-PROCESSING FIDDLE : A REAL-TIME MULTI-PITCH TRACKING TECHNIQUE USING HARMONIC PARTIAL SUBTRACTION FOR USE WITHIN LIVE PERFORMANCE SYSTEMS
POST-PROCESSING FIDDLE : A REAL-TIME MULTI-PITCH TRACKING TECHNIQUE USING HARMONIC PARTIAL SUBTRACTION FOR USE WITHIN LIVE PERFORMANCE SYSTEMS Andrew N. Robertson, Mark D. Plumbley Centre for Digital Music
More informationNon-chord Tone Identification
Non-chord Tone Identification Yaolong Ju Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT) Schulich School of Music McGill University SIMSSA XII Workshop 2017 Aug. 7 th, 2017
More informationVoice & Music Pattern Extraction: A Review
Voice & Music Pattern Extraction: A Review 1 Pooja Gautam 1 and B S Kaushik 2 Electronics & Telecommunication Department RCET, Bhilai, Bhilai (C.G.) India pooja0309pari@gmail.com 2 Electrical & Instrumentation
More informationPaulo V. K. Borges. Flat 1, 50A, Cephas Av. London, UK, E1 4AR (+44) PRESENTATION
Paulo V. K. Borges Flat 1, 50A, Cephas Av. London, UK, E1 4AR (+44) 07942084331 vini@ieee.org PRESENTATION Electronic engineer working as researcher at University of London. Doctorate in digital image/video
More informationMusic Genre Classification and Variance Comparison on Number of Genres
Music Genre Classification and Variance Comparison on Number of Genres Miguel Francisco, miguelf@stanford.edu Dong Myung Kim, dmk8265@stanford.edu 1 Abstract In this project we apply machine learning techniques
More informationAN EVALUATION FRAMEWORK AND CASE STUDY FOR RHYTHMIC CONCATENATIVE SYNTHESIS
AN EVALUATION FRAMEWORK AND CASE STUDY FOR RHYTHMIC CONCATENATIVE SYNTHESIS Cárthach Ó Nuanáin, Perfecto Herrera, Sergi Jordà Music Technology Group Universitat Pompeu Fabra Barcelona {carthach.onuanain,
More informationLecture 9 Source Separation
10420CS 573100 音樂資訊檢索 Music Information Retrieval Lecture 9 Source Separation Yi-Hsuan Yang Ph.D. http://www.citi.sinica.edu.tw/pages/yang/ yang@citi.sinica.edu.tw Music & Audio Computing Lab, Research
More informationMusical Instrument Recognizer Instrogram and Its Application to Music Retrieval based on Instrumentation Similarity
Musical Instrument Recognizer Instrogram and Its Application to Music Retrieval based on Instrumentation Similarity Tetsuro Kitahara, Masataka Goto, Kazunori Komatani, Tetsuya Ogata and Hiroshi G. Okuno
More informationMusic Structure Analysis
Lecture Music Processing Music Structure Analysis Meinard Müller International Audio Laboratories Erlangen meinard.mueller@audiolabs-erlangen.de Book: Fundamentals of Music Processing Meinard Müller Fundamentals
More informationABSOLUTE OR RELATIVE? A NEW APPROACH TO BUILDING FEATURE VECTORS FOR EMOTION TRACKING IN MUSIC
ABSOLUTE OR RELATIVE? A NEW APPROACH TO BUILDING FEATURE VECTORS FOR EMOTION TRACKING IN MUSIC Vaiva Imbrasaitė, Peter Robinson Computer Laboratory, University of Cambridge, UK Vaiva.Imbrasaite@cl.cam.ac.uk
More informationTIMBRE AND MELODY FEATURES FOR THE RECOGNITION OF VOCAL ACTIVITY AND INSTRUMENTAL SOLOS IN POLYPHONIC MUSIC
TIBE AND ELODY EATUES O TE ECOGNITION O VOCAL ACTIVITY AND INSTUENTAL SOLOS IN POLYPONIC USIC atthias auch iromasa ujihara Kazuyoshi Yoshii asataka Goto National Institute of Advanced Industrial Science
More informationth International Conference on Information Visualisation
2014 18th International Conference on Information Visualisation GRAPE: A Gradation Based Portable Visual Playlist Tomomi Uota Ochanomizu University Tokyo, Japan Email: water@itolab.is.ocha.ac.jp Takayuki
More information