A probabilistic framework for audio-based tonal key and chord recognition
Benoit Catteau (1), Jean-Pierre Martens (1), and Marc Leman (2)

(1) ELIS - Electronics & Information Systems, Ghent University, Gent (Belgium), Benoit.Catteau@elis.UGent.be
(2) IPEM - Department of Musicology, Ghent University, Gent (Belgium), Marc.Leman@UGent.be

Abstract. A unified probabilistic framework for audio-based chord and tonal key recognition is described and evaluated. The proposed framework embodies an acoustic observation likelihood model and key & chord transition models. It is shown how to conceive these models and how to use music theory to link key/chord transition probabilities to perceptual similarities between keys/chords. The advantage of a theory-based model is that it does not require any training, and consequently, that its performance is not affected by the quality of the available training data.

1 Introduction

Tonal key and chord recognition from audio are important steps towards the construction of a mid-level representation of Western tonal music for e.g. Music Information Retrieval (MIR) applications. A straightforward approach to key recognition (e.g. Pauws (2004), Leman (2000)) is to represent the acoustic observations and the keys by chroma vectors and chroma profiles respectively, and to use an ad hoc distance measure to assess how well the observations match a suggested key profile. Well-known profiles are the Krumhansl and Kessler (1982) and Temperley (1999) profiles, and a popular distance measure is the cosine distance. The classical approach to chord recognition performs key detection before chord recognition. Recently however, Shenoy and Wang (2005) proposed a 3-step algorithm that performs chord detection first, then key detection, and finally chord enhancement on the basis of high-level knowledge and key information.
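The profile-matching scheme sketched above can be made concrete as follows. This is a minimal sketch: the Krumhansl-Kessler and Temperley profile values are not reproduced here, so the usage example below substitutes placeholder one-hot "profiles".

```python
import numpy as np

def cosine_key_score(chroma, profile):
    """Cosine similarity between an observed chroma vector and a key profile
    (the ad hoc matching scheme described above)."""
    return float(np.dot(chroma, profile) /
                 (np.linalg.norm(chroma) * np.linalg.norm(profile)))

def best_key(chroma, profiles):
    """Return the key whose profile matches the observation best.
    `profiles` maps key names to 12-dimensional profile vectors."""
    return max(profiles, key=lambda k: cosine_key_score(chroma, profiles[k]))
```

In a real key finder, `profiles` would hold one rotated copy of the major and minor profile per tonic (24 candidates in total).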
Our point of departure is that tonal key and chord recognition should preferably be accomplished simultaneously on the basis of a unified probabilistic framework. We propose a segment-based framework that extends frame-based HMM approaches to chord detection such as the one of Bello and Pickens (2005).
In the subsequent sections we provide a general outline (Section 2) and a detailed description (Sections 3 and 4) of our approach, as well as an experimental evaluation (Section 5) of our current implementation.

2 General outline of the approach

Before introducing our probabilistic framework, we recall some basics about the links between notes, chords and keys in Western tonal music. The pitch of a periodic sound is usually mapped to a pitch class (a chroma) collecting all the pitches that are in an octave relation to each other. Chromas are represented on a log-frequency scale of 1 octave long, and this chromatic scale is divided into 12 equal intervals, the borders of which are labeled as notes: A, As, B, ..., Gs. A tonal key is represented by 7 eligible notes selected from the set of 12. Characteristics of a key are its tonic (the note with the lowest chroma) and the mode (major, minor harmonic, ...) that was used to select the 7 notes starting from the tonic. A chord refers to a stack of three (= triad) or more notes sounding together during some time. It can be represented by a 12-bit binary chromatic vector with ones on the chord note positions and zeroes on the remaining positions. This vector leads to a unique chord label as soon as the key is available.

Having explained the links between keys, chords and notes, we can now present our probabilistic framework. We suppose that an acoustic front-end has converted the audio into a sequence of N events which are presumed to represent individual chords. Each event is characterized by an acoustic observation vector x_n, and the whole observation sequence is denoted as X = {x_1, ..., x_N}. The aim is now to assign key labels k_n and chord chroma vectors c_n to these events. More precisely, we seek the sequence pair (\hat{K}, \hat{C}) that maximizes the posterior probability P(K,C|X).
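The 12-bit chord representation just described can be sketched as follows. The note ordering and the triad interval patterns used here are standard conventions, not details taken from the paper's implementation.

```python
# Representing a chord as a 12-bit binary chroma vector, as described above.
# Positions follow the chromatic scale A, As, B, C, ..., Gs (one fixed
# convention among several possible ones).
NOTES = ["A", "As", "B", "C", "Cs", "D", "Ds", "E", "F", "Fs", "G", "Gs"]

def triad_vector(root, intervals):
    """Return a 12-bit chroma vector with ones on the chord note positions."""
    vec = [0] * 12
    r = NOTES.index(root)
    for semitones in intervals:
        vec[(r + semitones) % 12] = 1
    return vec

# A major triad stacks 0, 4 and 7 semitones above the root; a minor triad 0, 3, 7.
c_major = triad_vector("C", (0, 4, 7))   # ones at C, E, G
a_minor = triad_vector("A", (0, 3, 7))   # ones at A, C, E
```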
By applying Bayes' law, and by noting that the prior probability P(X) is independent of (K,C), one comes to the conclusion that the problem can also be formulated as

  \hat{K}, \hat{C} = \arg\max_{K,C} P(K,C,X) = \arg\max_{K,C} P(K,C) P(X|K,C)    (1)

By sticking to two key modes, namely the major and the minor harmonic mode, and by only examining the 4 most important triads (major, minor, augmented and diminished) per tonic, we achieve that only 48 chord vectors and 24 keys per event have to be tested. If we can then assume that the acoustic likelihood P(X|K,C) can be factorized as

  P(X|K,C) = \prod_{n=1}^{N} P(x_n|k_n, c_n)    (2)

and if P(K,C) can be modeled by the following bigram music model
  P(K,C) = \prod_{n=1}^{N} P(k_n, c_n|k_{n-1}, c_{n-1})    (3)

then it is straightforward to show that the problem can be reformulated as

  \hat{K}, \hat{C} = \arg\max_{K,C} \prod_{n=1}^{N} P(k_n|k_{n-1}, c_{n-1}) P(c_n|k_{n-1}, c_{n-1}, k_n) P(x_n|k_n, c_n)    (4)

The solution can be found by means of a Dynamic Programming search. In the subsequent two sections we describe the front-end that was used to construct the acoustic observations and the models that were developed to compute the probabilities involved.

3 The acoustic front-end

The objective of the acoustic front-end is to segment the audio into chord and rest intervals and to create a chroma vector for each chord interval.

3.1 Frame-by-frame analysis

The front-end first performs a frame-by-frame short-time power spectrum (STPS) analysis. The frames are 150 ms long and two subsequent frames overlap by 130 ms. The frames are Hamming windowed and the STPS is computed in 1024 points equidistantly spaced on a linear frequency scale. The STPS is then mapped to a log-frequency spectrum comprising 84 samples: 7 octaves (between the MIDI notes C1 and C8) and 12 samples per octave. By convolving this spectrum with a Hamming window of 1 octave wide, one obtains a so-called background spectrum. Subtracting this background from the original spectrum leads to an enhanced log-frequency spectrum. By means of sub-harmonic summation (Terhardt et al. (1982)), the latter is converted to a sub-harmonic sum spectrum T(i), i = 0, ..., 83, which is finally folded into one octave to yield the components of the chroma vector x of the analyzed frame:

  x_m = \sum_{j=0}^{6} T(12j + m),  m = 0, ..., 11    (5)

3.2 Segmentation

The chroma vectors of the individual frames are used to perform a segmentation of the audio signal. A frame can either be appended to a previously started event or it can be assigned to a new event.
The latter happens if the absolute value of the correlation between consecutive chroma vectors drops below a certain threshold.
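The octave folding of Equation (5) and the correlation-based segmentation criterion can be sketched as follows. The threshold value is illustrative; the paper does not state the one actually used.

```python
import numpy as np

def fold_to_chroma(T):
    """Fold an 84-sample sub-harmonic sum spectrum (7 octaves x 12 bins per
    octave) into a 12-dimensional chroma vector, as in Equation (5)."""
    return T.reshape(7, 12).sum(axis=0)

def segment(chromas, threshold=0.8):
    """Return the indices of frames that start a new event: a boundary is
    placed wherever |correlation| between consecutive frame chroma vectors
    drops below the threshold (threshold value illustrative)."""
    boundaries = [0]
    for i in range(1, len(chromas)):
        r = np.corrcoef(chromas[i - 1], chromas[i])[0, 1]
        if abs(r) < threshold:
            boundaries.append(i)
    return boundaries
```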
On the basis of its mean frame energy, each event is labeled as chord or rest. For each chord event, a chroma vector is computed by first taking the mean chroma vector over its frames, and by then normalizing this mean vector so that its elements sum to 1.

4 Modeling the probabilities

Solving Equation (4) requires good models for the observation likelihoods P(x_n|k_n, c_n), the key transition probabilities P(k_n|k_{n-1}, c_{n-1}) and the chord transition probabilities P(c_n|k_{n-1}, c_{n-1}, k_n).

4.1 Modeling the observation likelihoods

The observation likelihood expresses how well the observations support a proposed chord hypothesis. Although the vector components sum to one, we assume only weak dependencies among them and propose the following model:

  P(x_n|k_n, c_n) = \prod_{m=0}^{11} P(x_{nm}|c_{nm}),  with \sum_{m=0}^{11} x_{nm} = 1    (6)

In its most simple form this model requires two statistical distributions: P(x|c = 1) and P(x|c = 0) (x and c denote individual note components here). We have chosen

  P(x|0) = G_0 (e^{-x^2 / 2\sigma^2} + P_o),  x \in (0, 1)    (7)
  P(x|1) = G_1 (e^{-(x - X)^2 / 2\sigma^2} + P_o),  x \in (0, X)    (8)
         = G_1 (1 + P_o),  x \in (X, 1)    (9)

(see Figure 1), with G_0 and G_1 being normalization factors. The offset P_o must preserve some evidence in case an expected large x_{nm} is missing or an unexpected large x_{nm} (e.g. caused by an odd harmonic of the pitch) is present. In our experiments X and \sigma were kept fixed at 0.33 and 0.13 respectively (these values seem to explain the observation statistics).

4.2 Modeling the key transition probabilities

Normally it would take a large chord- and key-annotated music corpus to determine appropriate key and chord transition probabilities. However, we argue that (1) transitions between similar keys/chords are more likely to occur than transitions between less similar keys/chords, and (2) chords comprising the key tonic or fifth are more likely to appear than others. We therefore
propose to retrieve the requested probabilities from music theory and to avoid the need for a labeled training database.

Fig. 1. Distributions (without normalization factors) used to model the observation likelihoods of x given that the chord chroma vector contains a c = 1 or a c = 0 at the note position.

Lerdahl (2001) has proposed a three-dimensional representation of the tonal space and a scheme for quantifying the perceptual differences between chords as well as keys. Lerdahl distinguishes five note levels, namely the chromatic, diatonic, triadic, fifth and tonic levels, and he accumulates the differences observed at all these levels in a distance metric. If we assume that in the case of a key modulation the probability of k_n is dominated by the distance d(k_n, k_{n-1}) emerging from Lerdahl's theory, then we can propose the following model:

  P(k_n|k_{n-1}, c_{n-1}) = P_{os},  k_n = k_{n-1}    (10)
                          = \beta_s e^{-d(k_n, k_{n-1}) / d_s},  k_n \neq k_{n-1}    (11)

with \beta_s a normalization factor and d_s = 15 the mean distance between keys. By changing P_{os} we can control the chance of hypothesizing a key modulation.

4.3 Modeling the chord transition probabilities

For computing these probabilities we rely on the distances between diatonic chords (= chords solely composed of notes that fit into the key) as they follow from Lerdahl's theory, and on the tonicity of the chord. Reserving some probability mass for transitions to non-diatonic chords, we obtain the model

  P(c_n|c_{n-1}, k_n, k_{n-1}) = P_{oc},  c_n non-diatonic in k_n    (12)
                               = \beta_c e^{-d(c_n, c_{n-1}) / d_c} g(c_n, k_n),  c_n diatonic in k_n    (13)

Here \beta_c is a normalization factor, d_c = 6 (the mean distance between chord vectors) and g(c_n, k_n) is a factor that favors chords comprising the key tonic (g = 1.5) or fifth (g = 1.25) over others (g = 1). By changing P_{oc} we can control the chance of hypothesizing a non-diatonic chord.
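Equations (6)-(13) supply the three factors of Equation (4), and the Dynamic Programming search can then be carried out Viterbi-style over joint (key, chord) states. The following is a minimal sketch, not the paper's implementation: the normalization factors G_0, G_1, beta_s, beta_c are omitted, the Lerdahl distances, the diatonic test and the g weighting enter as placeholder callables, and the conditioning of the chord model on k_{n-1} is dropped since Equations (12)-(13) do not use it.

```python
import math
from itertools import product

# Constants from the text (P_o from the tuning in Section 5.1).
X_PEAK, SIGMA = 0.33, 0.13          # observation model constants (Sec. 4.1)
P_O = 0.1                           # offset P_o
P_OS, D_S = 0.4, 15.0               # key self-transition mass, mean key distance
P_OC, D_C = 0.15, 6.0               # non-diatonic chord mass, mean chord distance

def log_obs(x_vec, chord_vec):
    """log P(x_n | k_n, c_n), Equations (6)-(9); G_0, G_1 omitted here."""
    total = 0.0
    for x, c in zip(x_vec, chord_vec):
        if c:   # note expected: Gaussian bump up to X_PEAK, plateau above it
            p = 1 + P_O if x >= X_PEAK else \
                math.exp(-(x - X_PEAK) ** 2 / (2 * SIGMA ** 2)) + P_O
        else:   # note not expected: mass concentrated near x = 0
            p = math.exp(-x ** 2 / (2 * SIGMA ** 2)) + P_O
        total += math.log(p)
    return total

def log_key_trans(k, k_prev, d_key):
    """log P(k_n | k_{n-1}), Equations (10)-(11); d_key is a placeholder
    for Lerdahl's key distance, normalization omitted."""
    return math.log(P_OS) if k == k_prev else -d_key(k, k_prev) / D_S

def log_chord_trans(c, c_prev, k, diatonic, d_chord, g):
    """log P(c_n | c_{n-1}, k_n), Equations (12)-(13), normalization omitted."""
    if not diatonic(c, k):
        return math.log(P_OC)
    return -d_chord(c, c_prev) / D_C + math.log(g(c, k))

def decode(X, keys, chords, log_obs_fn, log_key_fn, log_chord_fn):
    """Viterbi-style DP search for the (key, chord) sequence maximizing Eq. (4)."""
    states = list(product(keys, chords))
    delta = {s: log_obs_fn(X[0], *s) for s in states}   # flat prior assumed
    back = []
    for x in X[1:]:
        new_delta, ptr = {}, {}
        for k, c in states:
            prev, score = max(
                ((p, delta[p] + log_key_fn(k, p[0]) + log_chord_fn(c, p[1], k))
                 for p in states), key=lambda t: t[1])
            new_delta[(k, c)] = score + log_obs_fn(x, k, c)
            ptr[(k, c)] = prev
        delta, back = new_delta, back + [ptr]
    s = max(delta, key=delta.get)       # best final state, then backtrack
    path = [s]
    for ptr in reversed(back):
        s = ptr[s]
        path.append(s)
    return path[::-1]
```

In a full system `keys` would hold the 24 keys, `chords` the 48 triads, and the placeholder callables would be instantiated from the tonal-space theory.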
5 Experimental results

For parameter tuning and system evaluation we have used four databases.

Cadences. A set of 144 files: 3 classical cadences times 24 keys (12 major and 12 minor) times 2 synthesis methods (Shepard tones and MIDI-to-wave).

Modulations. A set of 20 files: 10 chord sequences of length 9 (copied from Krumhansl and Kessler (1982)) times 2 synthesis methods. All sequences start in C major or C minor, and on music-theoretical grounds a unique key can be assigned to each chord. Eight sequences show a key modulation at position 5; the other two do not, but they explore chords on various degrees.

Real audio. A set of 10 polyphonic audio fragments (60 seconds each) from 10 different songs (see Table 1). Each fragment was chord and key labeled.

MIREX. A set of 96 MIDI-to-wave synthesized fragments, compiled as a training database for the systems participating in the MIREX-2005 key detection contest. Each fragment was supplied with one key label; in case of modulation it is supposed to represent the dominant key for that fragment.

      Artist          Title                  Key
   1  CCR             Proud Mary             D Major
   2  CCR             Who'll stop the rain   G Major
   3  CCR             Bad moon rising        D Major
   4  America         Horse with no name     E Minor
   5  Dolly Parton    Jolene                 Cs Minor
   6  Toto Cutugno    L'Italiano             A Minor
   7  Iggy Pop        The passenger          A Minor
   8  Marco Borsato   Dromen zijn bedrog     C Minor
   9  Live            I Alone                Gb Major / Eb Major
  10  Ian McCulloch   Sliding                C Major

  Table 1. The test songs and their keys.

5.1 Free parameter tuning

In order to tune the free parameters (P_o, P_{os}, P_{oc}) we worked on all the cadences and modulation sequences and one song from the real audio database. Since P_{os} and P_{oc} were anticipated to be the most critical parameters, we explored them first in combination with P_o = 0.1. There is a reasonably large area in the (P_{os}, P_{oc})-plane where the performances on all the tuning data are good and stable (0.3 < P_{os} < 0.5 and 0 ≤ P_{oc} < 0.2).
We chose P_{os} = 0.4 and P_{oc} = 0.15 to get a fair chance of selecting key modulations and non-diatonic chords when they are present in the audio. For these values we obtained 100%, 96.7% and 92.1% correct key labels for the cadences, the modulation sequences and the song respectively. The corresponding correct chord label percentages were 100%, 93.8% and 73.7%. Changing P_o did not yield any further improvement.
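The tuning procedure described above amounts to a small grid search over the two most critical free parameters. A hypothetical sketch, where `score_fn` stands in for an evaluation of the tuning data at a given setting:

```python
import itertools

def tune(score_fn, pos_grid, poc_grid):
    """Return the (P_os, P_oc) pair with the best tuning-set score.
    score_fn is a placeholder: it should run the recognizer on the tuning
    data with the given parameters and return an accuracy-like score."""
    return max(itertools.product(pos_grid, poc_grid),
               key=lambda p: score_fn(*p))
```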
5.2 System evaluation

Real audio. For real audio we have measured the percentages of deleted reference chords (D), inserted chords (I), frames with the correct key label (C_k) and frames with the correct chord label (C_c). We obtained D = 4.3%, I = 82%, C_k = 51.2% and C_c = 75.7%. An illustration of the reference and computed labels for song 1 is shown in Figure 2.

Fig. 2. Annotated (left) and computed (right) chords (top) and keys (bottom) for song 1. The grey zones refer to major and the black ones to minor labels.

A first observation is that our system produces a lot of chord insertions. This must be investigated in more detail, but possibly the annotator discarded some of the short chord changes. A second observation is that the key accuracy is rather low. However, a closer analysis showed that more than 60% of the key errors were confusions between a minor key and its relative major. Another 15% were confusions between keys whose tonics differ by a fifth. Applying the weighted error measure recommended by MIREX (weights of 0.3 for a minor-to-relative-major confusion, 0.5 for a tonic difference of a fifth, and 1 otherwise) yields a key accuracy of 75.5%.

Our chord recognition results seem to be very good. Without chord enhancement on the basis of high-level musical knowledge (knowledge that could also be applied to our system outputs), Shenoy and Wang (2005) report a chord accuracy of 48%. Although there are differences in the data set, the assumptions made by the system (e.g. a fixed key) and the evaluation procedure, we believe that this figure supports our claim that simultaneous chord and key labeling can outperform a cascaded approach.

MIREX data.
Since we did not participate in the MIREX contest, we only had access to the MIREX training set and not to the evaluation set. However
since we did not perform any parameter tuning on this set, we believe that the results of our system on the MIREX training set are representative of those we would attain on the MIREX evaluation set. Using the recommended MIREX evaluation approach we obtained a key accuracy of 83%. The best result reported in the MIREX contest, by İzmirli (2005), was 89.5%. We hope that by further refining our models we will soon be able to bridge the gap with that performance.

6 Summary and conclusion

We have proposed a segment-based probabilistic framework for the simultaneous recognition of chords and keys. The framework incorporates a novel observation likelihood model and key & chord transition models that were not trained but derived from the tonal space theory of Lerdahl. Our system was evaluated on real audio fragments and on MIDI-to-wave synthesized chord sequences (MIREX-2005 contest data). Real audio proves hard to process correctly, but our system nevertheless appears to outperform advanced chord labeling systems that have recently been developed by others. The key labeling results for the MIREX data are also very good and already close to the best results previously reported for these data.

References

BELLO, J.P. and PICKENS, J. (2005): A Robust Mid-level Representation for Harmonic Content in Music Signals. In: Procs 6th Int. Conference on Music Information Retrieval (ISMIR 2005), London.
İZMİRLİ, Ö. (2005): Tonal Similarity from Audio Using a Template Based Attractor Model. In: Procs 6th Int. Conference on Music Information Retrieval (ISMIR 2005), London.
KRUMHANSL, C. and KESSLER, E. (1982): Tracing the Dynamic Changes in Perceived Tonal Organization in a Spatial Representation of Musical Keys. Psychological Review, 89.
LEMAN, M. (2000): An Auditory Model of the Role of Short-term Memory in Probe-tone Ratings. Music Perception, 17.
LERDAHL, F. (2001): Tonal Pitch Space.
Oxford University Press, New York.
PAUWS, S. (2004): Musical Key Extraction from Audio. In: Procs 5th Int. Conference on Music Information Retrieval (ISMIR 2004), Barcelona.
SHENOY, A. and WANG, Y. (2005): Key, Chord, and Rhythm Tracking of Popular Music Recordings. Computer Music Journal, 29(3).
TEMPERLEY, D. (1999): What's Key for Key? The Krumhansl-Schmuckler Key-Finding Algorithm Reconsidered. Music Perception, 17(1).
TERHARDT, E., STOLL, G. and SEEWANN, M. (1982): Algorithm for Extraction of Pitch and Pitch Salience from Complex Tonal Signals. J. Acoust. Soc. Am., 71.
Evaluating Melodic Encodings for Use in Cover Song Identification David D. Wickland wickland@uoguelph.ca David A. Calvert dcalvert@uoguelph.ca James Harley jharley@uoguelph.ca ABSTRACT Cover song identification
More informationFigured Bass and Tonality Recognition Jerome Barthélemy Ircam 1 Place Igor Stravinsky Paris France
Figured Bass and Tonality Recognition Jerome Barthélemy Ircam 1 Place Igor Stravinsky 75004 Paris France 33 01 44 78 48 43 jerome.barthelemy@ircam.fr Alain Bonardi Ircam 1 Place Igor Stravinsky 75004 Paris
More informationAPPLICATIONS OF A SEMI-AUTOMATIC MELODY EXTRACTION INTERFACE FOR INDIAN MUSIC
APPLICATIONS OF A SEMI-AUTOMATIC MELODY EXTRACTION INTERFACE FOR INDIAN MUSIC Vishweshwara Rao, Sachin Pant, Madhumita Bhaskar and Preeti Rao Department of Electrical Engineering, IIT Bombay {vishu, sachinp,
More informationA TEXT RETRIEVAL APPROACH TO CONTENT-BASED AUDIO RETRIEVAL
A TEXT RETRIEVAL APPROACH TO CONTENT-BASED AUDIO RETRIEVAL Matthew Riley University of Texas at Austin mriley@gmail.com Eric Heinen University of Texas at Austin eheinen@mail.utexas.edu Joydeep Ghosh University
More informationCPU Bach: An Automatic Chorale Harmonization System
CPU Bach: An Automatic Chorale Harmonization System Matt Hanlon mhanlon@fas Tim Ledlie ledlie@fas January 15, 2002 Abstract We present an automated system for the harmonization of fourpart chorales in
More informationMusic Information Retrieval with Temporal Features and Timbre
Music Information Retrieval with Temporal Features and Timbre Angelina A. Tzacheva and Keith J. Bell University of South Carolina Upstate, Department of Informatics 800 University Way, Spartanburg, SC
More informationA CHROMA-BASED SALIENCE FUNCTION FOR MELODY AND BASS LINE ESTIMATION FROM MUSIC AUDIO SIGNALS
A CHROMA-BASED SALIENCE FUNCTION FOR MELODY AND BASS LINE ESTIMATION FROM MUSIC AUDIO SIGNALS Justin Salamon Music Technology Group Universitat Pompeu Fabra, Barcelona, Spain justin.salamon@upf.edu Emilia
More informationSinger Traits Identification using Deep Neural Network
Singer Traits Identification using Deep Neural Network Zhengshan Shi Center for Computer Research in Music and Acoustics Stanford University kittyshi@stanford.edu Abstract The author investigates automatic
More informationAutomatic Key Detection of Musical Excerpts from Audio
Automatic Key Detection of Musical Excerpts from Audio Spencer Campbell Music Technology Area, Department of Music Research Schulich School of Music McGill University Montreal, Canada August 2010 A thesis
More informationDETECTION OF SLOW-MOTION REPLAY SEGMENTS IN SPORTS VIDEO FOR HIGHLIGHTS GENERATION
DETECTION OF SLOW-MOTION REPLAY SEGMENTS IN SPORTS VIDEO FOR HIGHLIGHTS GENERATION H. Pan P. van Beek M. I. Sezan Electrical & Computer Engineering University of Illinois Urbana, IL 6182 Sharp Laboratories
More informationMusic Emotion Recognition. Jaesung Lee. Chung-Ang University
Music Emotion Recognition Jaesung Lee Chung-Ang University Introduction Searching Music in Music Information Retrieval Some information about target music is available Query by Text: Title, Artist, or
More informationDrum Sound Identification for Polyphonic Music Using Template Adaptation and Matching Methods
Drum Sound Identification for Polyphonic Music Using Template Adaptation and Matching Methods Kazuyoshi Yoshii, Masataka Goto and Hiroshi G. Okuno Department of Intelligence Science and Technology National
More informationSubjective Similarity of Music: Data Collection for Individuality Analysis
Subjective Similarity of Music: Data Collection for Individuality Analysis Shota Kawabuchi and Chiyomi Miyajima and Norihide Kitaoka and Kazuya Takeda Nagoya University, Nagoya, Japan E-mail: shota.kawabuchi@g.sp.m.is.nagoya-u.ac.jp
More informationDAY 1. Intelligent Audio Systems: A review of the foundations and applications of semantic audio analysis and music information retrieval
DAY 1 Intelligent Audio Systems: A review of the foundations and applications of semantic audio analysis and music information retrieval Jay LeBoeuf Imagine Research jay{at}imagine-research.com Rebecca
More informationPerceptual Evaluation of Automatically Extracted Musical Motives
Perceptual Evaluation of Automatically Extracted Musical Motives Oriol Nieto 1, Morwaread M. Farbood 2 Dept. of Music and Performing Arts Professions, New York University, USA 1 oriol@nyu.edu, 2 mfarbood@nyu.edu
More informationBayesian Model Selection for Harmonic Labelling
Bayesian Model Selection for Harmonic Labelling Christophe Rhodes, David Lewis, Daniel Müllensiefen Department of Computing Goldsmiths, University of London SE14 6NW, United Kingdom April 29, 2008 Abstract
More informationCreating a Feature Vector to Identify Similarity between MIDI Files
Creating a Feature Vector to Identify Similarity between MIDI Files Joseph Stroud 2017 Honors Thesis Advised by Sergio Alvarez Computer Science Department, Boston College 1 Abstract Today there are many
More informationInfluence of timbre, presence/absence of tonal hierarchy and musical training on the perception of musical tension and relaxation schemas
Influence of timbre, presence/absence of tonal hierarchy and musical training on the perception of musical and schemas Stella Paraskeva (,) Stephen McAdams (,) () Institut de Recherche et de Coordination
More informationEffects of acoustic degradations on cover song recognition
Signal Processing in Acoustics: Paper 68 Effects of acoustic degradations on cover song recognition Julien Osmalskyj (a), Jean-Jacques Embrechts (b) (a) University of Liège, Belgium, josmalsky@ulg.ac.be
More informationA MULTI-PARAMETRIC AND REDUNDANCY-FILTERING APPROACH TO PATTERN IDENTIFICATION
A MULTI-PARAMETRIC AND REDUNDANCY-FILTERING APPROACH TO PATTERN IDENTIFICATION Olivier Lartillot University of Jyväskylä Department of Music PL 35(A) 40014 University of Jyväskylä, Finland ABSTRACT This
More informationThe MAMI Query-By-Voice Experiment Collecting and annotating vocal queries for music information retrieval
The MAMI Query-By-Voice Experiment Collecting and annotating vocal queries for music information retrieval IPEM, Dept. of musicology, Ghent University, Belgium Outline About the MAMI project Aim of the
More informationA CLASSIFICATION APPROACH TO MELODY TRANSCRIPTION
A CLASSIFICATION APPROACH TO MELODY TRANSCRIPTION Graham E. Poliner and Daniel P.W. Ellis LabROSA, Dept. of Electrical Engineering Columbia University, New York NY 127 USA {graham,dpwe}@ee.columbia.edu
More informationAutomatic characterization of ornamentation from bassoon recordings for expressive synthesis
Automatic characterization of ornamentation from bassoon recordings for expressive synthesis Montserrat Puiggròs, Emilia Gómez, Rafael Ramírez, Xavier Serra Music technology Group Universitat Pompeu Fabra
More informationClassification of Timbre Similarity
Classification of Timbre Similarity Corey Kereliuk McGill University March 15, 2007 1 / 16 1 Definition of Timbre What Timbre is Not What Timbre is A 2-dimensional Timbre Space 2 3 Considerations Common
More informationAP MUSIC THEORY 2016 SCORING GUIDELINES
2016 SCORING GUIDELINES Question 7 0---9 points A. ARRIVING AT A SCORE FOR THE ENTIRE QUESTION 1. Score each phrase separately and then add the phrase scores together to arrive at a preliminary tally for
More informationAutomatic Laughter Detection
Automatic Laughter Detection Mary Knox Final Project (EECS 94) knoxm@eecs.berkeley.edu December 1, 006 1 Introduction Laughter is a powerful cue in communication. It communicates to listeners the emotional
More informationSequential Association Rules in Atonal Music
Sequential Association Rules in Atonal Music Aline Honingh, Tillman Weyde and Darrell Conklin Music Informatics research group Department of Computing City University London Abstract. This paper describes
More informationA System for Acoustic Chord Transcription and Key Extraction from Audio Using Hidden Markov models Trained on Synthesized Audio
Curriculum Vitae Kyogu Lee Advanced Technology Center, Gracenote Inc. 2000 Powell Street, Suite 1380 Emeryville, CA 94608 USA Tel) 1-510-428-7296 Fax) 1-510-547-9681 klee@gracenote.com kglee@ccrma.stanford.edu
More informationA Geometrical Distance Measure for Determining the Similarity of Musical Harmony
A Geometrical Distance Measure for Determining the Similarity of Musical Harmony W. Bas De Haas Frans Wiering and Remco C. Veltkamp Technical Report UU-CS-2011-015 May 2011 Department of Information and
More informationCSC475 Music Information Retrieval
CSC475 Music Information Retrieval Symbolic Music Representations George Tzanetakis University of Victoria 2014 G. Tzanetakis 1 / 30 Table of Contents I 1 Western Common Music Notation 2 Digital Formats
More informationINTER GENRE SIMILARITY MODELLING FOR AUTOMATIC MUSIC GENRE CLASSIFICATION
INTER GENRE SIMILARITY MODELLING FOR AUTOMATIC MUSIC GENRE CLASSIFICATION ULAŞ BAĞCI AND ENGIN ERZIN arxiv:0907.3220v1 [cs.sd] 18 Jul 2009 ABSTRACT. Music genre classification is an essential tool for
More informationAn Integrated Music Chromaticism Model
An Integrated Music Chromaticism Model DIONYSIOS POLITIS and DIMITRIOS MARGOUNAKIS Dept. of Informatics, School of Sciences Aristotle University of Thessaloniki University Campus, Thessaloniki, GR-541
More informationMelody Extraction from Generic Audio Clips Thaminda Edirisooriya, Hansohl Kim, Connie Zeng
Melody Extraction from Generic Audio Clips Thaminda Edirisooriya, Hansohl Kim, Connie Zeng Introduction In this project we were interested in extracting the melody from generic audio files. Due to the
More informationIMPROVING GENRE CLASSIFICATION BY COMBINATION OF AUDIO AND SYMBOLIC DESCRIPTORS USING A TRANSCRIPTION SYSTEM
IMPROVING GENRE CLASSIFICATION BY COMBINATION OF AUDIO AND SYMBOLIC DESCRIPTORS USING A TRANSCRIPTION SYSTEM Thomas Lidy, Andreas Rauber Vienna University of Technology, Austria Department of Software
More informationMeasurement of overtone frequencies of a toy piano and perception of its pitch
Measurement of overtone frequencies of a toy piano and perception of its pitch PACS: 43.75.Mn ABSTRACT Akira Nishimura Department of Media and Cultural Studies, Tokyo University of Information Sciences,
More informationPiano Transcription MUMT611 Presentation III 1 March, Hankinson, 1/15
Piano Transcription MUMT611 Presentation III 1 March, 2007 Hankinson, 1/15 Outline Introduction Techniques Comb Filtering & Autocorrelation HMMs Blackboard Systems & Fuzzy Logic Neural Networks Examples
More informationSINGING PITCH EXTRACTION BY VOICE VIBRATO/TREMOLO ESTIMATION AND INSTRUMENT PARTIAL DELETION
th International Society for Music Information Retrieval Conference (ISMIR ) SINGING PITCH EXTRACTION BY VOICE VIBRATO/TREMOLO ESTIMATION AND INSTRUMENT PARTIAL DELETION Chao-Ling Hsu Jyh-Shing Roger Jang
More informationIntroductions to Music Information Retrieval
Introductions to Music Information Retrieval ECE 272/472 Audio Signal Processing Bochen Li University of Rochester Wish List For music learners/performers While I play the piano, turn the page for me Tell
More informationUnsupervised Bayesian Musical Key and Chord Recognition
Unsupervised Bayesian Musical Key and Chord Recognition A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy at George Mason University by Yun-Sheng
More informationAUDIO-BASED COVER SONG RETRIEVAL USING APPROXIMATE CHORD SEQUENCES: TESTING SHIFTS, GAPS, SWAPS AND BEATS
AUDIO-BASED COVER SONG RETRIEVAL USING APPROXIMATE CHORD SEQUENCES: TESTING SHIFTS, GAPS, SWAPS AND BEATS Juan Pablo Bello Music Technology, New York University jpbello@nyu.edu ABSTRACT This paper presents
More informationChord Label Personalization through Deep Learning of Integrated Harmonic Interval-based Representations
Chord Label Personalization through Deep Learning of Integrated Harmonic Interval-based Representations Hendrik Vincent Koops 1, W. Bas de Haas 2, Jeroen Bransen 2, and Anja Volk 1 arxiv:1706.09552v1 [cs.sd]
More informationAP MUSIC THEORY 2011 SCORING GUIDELINES
2011 SCORING GUIDELINES Question 7 SCORING: 9 points A. ARRIVING AT A SCORE FOR THE ENTIRE QUESTION 1. Score each phrase separately and then add these phrase scores together to arrive at a preliminary
More informationA Novel System for Music Learning using Low Complexity Algorithms
International Journal of Applied Information Systems (IJAIS) ISSN : 9-0868 Volume 6 No., September 013 www.ijais.org A Novel System for Music Learning using Low Complexity Algorithms Amr Hesham Faculty
More informationAP MUSIC THEORY 2015 SCORING GUIDELINES
2015 SCORING GUIDELINES Question 7 0 9 points A. ARRIVING AT A SCORE FOR THE ENTIRE QUESTION 1. Score each phrase separately and then add the phrase scores together to arrive at a preliminary tally for
More informationA Psychoacoustically Motivated Technique for the Automatic Transcription of Chords from Musical Audio
A Psychoacoustically Motivated Technique for the Automatic Transcription of Chords from Musical Audio Daniel Throssell School of Electrical, Electronic & Computer Engineering The University of Western
More information