
Meter and Autocorrelation

Douglas Eck
University of Montreal
Department of Computer Science
CP 6128, Succ. Centre-Ville
Montreal, Quebec H3C 3J7 CANADA
eckdoug@iro.umontreal.ca

Abstract

This paper introduces a novel way to detect metrical structure in music. We introduce a way to compute autocorrelation such that the distribution of energy in phase space is preserved in a matrix. The resulting autocorrelation phase matrix is useful for several tasks involving metrical structure. First we can use the matrix to enhance standard autocorrelation by calculating the Shannon entropy at each lag. This approach yields improved results for autocorrelation-based tempo induction. Second, we can efficiently search the matrix for combinations of lags that suggest particular metrical hierarchies. This approach yields a good model for predicting the meter of a piece of music. Finally we can use the phase information in the matrix to align a candidate meter with music, making it possible to perform beat induction with an autocorrelation-based model. We argue that the autocorrelation phase matrix is a good, relatively efficient representation of temporal structure that is useful for a variety of applications. We present results for several relatively large meter prediction and tempo induction datasets, demonstrating that the approach is competitive with models designed specifically for these tasks. We also present preliminary beat induction results on a small set of artificial patterns.

Presented at Rhythm Perception Production Workshop (RPPW) 2005. Draft. Do Not Cite.

1 Introduction

In this paper we introduce an autocorrelation phase matrix, a two-dimensional structure (computed from MIDI or digital audio) that provides the necessary information for estimating the lags and phases of the music's metrical hierarchy. We use this matrix as the core data structure to estimate the meter of a piece (meter prediction), to estimate the tempo of a piece (tempo induction) and to align the piece of music with the predicted metrical structure (beat induction). We provide algorithm details and experimental results for meter prediction and tempo induction, along with some details concerning the alignment of the metrical structure with a piece of music and alignment results for a small dataset of artificial patterns. However, the details of computing this alignment online (for beat induction) are the topic of another paper.

The structure of this paper is as follows. In Section 2 we discuss other approaches to finding meter and beat in music. In Section 3 we describe our model, consisting of the creation of an autocorrelation phase matrix, the computation of the entropy for each lag in this matrix, the selection of a metrical hierarchy and the alignment of the hierarchy with music. Finally, in Section 4 we present simulation results.

2 Meter and Autocorrelation

Meter is the sense of strong and weak beats that arises from the interaction among hierarchical levels of sequences having nested periodic components. Such a hierarchy is implied in Western music notation, where different levels are indicated by kinds of notes (whole notes, half notes, quarter notes, etc.) and where bars establish measures of an equal number of beats (Handel, 1993). For instance, most contemporary pop songs are built on four-beat meters. In such songs, the first and third beats are usually emphasized. Knowing the meter of a piece of music helps in predicting other components of musical structure such as the location of chord changes and repetition boundaries (Cooper and Meyer, 1960).

Autocorrelation transforms a signal from the time domain into the lag domain. Autocorrelation provides a high-resolution picture of the relative salience of different periodicities, thus motivating its use in tempo and meter related music tasks.

However, the autocorrelation transform discards all phase information, making it impossible to align salient periodicities with the music. Thus autocorrelation can be used to predict, for example, that music has something that repeats every 1000ms, but it cannot say when the repetition takes place relative to the start of the music. One primary goal of our work here is to compute autocorrelation efficiently while at the same time preserving the phase information necessary to perform such an alignment. Our solution is the autocorrelation phase matrix.

Autocorrelation is certainly not the only way to perform meter prediction and related tasks like tempo induction. Adaptive oscillator models (Large and Kolen, 1994; Eck, 2002) can be thought of as a time-domain correlate to autocorrelation-based methods and have shown promise, especially in cognitive modeling. Multi-agent systems such as those by Dixon (2001) have been applied with success, as have Monte Carlo sampling (Cemgil and Kappen, 2003) and Kalman filtering methods (Cemgil et al., 2001). However, due to space constraints we omit details of these approaches and focus here solely on autocorrelation methods.

Brown (1993) used autocorrelation to find meter in musical scores represented as note onsets weighted by their duration. The durational accent she used is applicable for musical score analysis but is impractical for digital audio due to difficulties in computing note durations. However, it was one of the first reported uses of autocorrelation for meter prediction. Brown reported that the model was able to provide a reliable estimate of meter using relatively little computational power.

Vos et al. (1994) proposed a similar autocorrelation method. The primary difference between their work and that of Brown was their use of melodic intervals in computing accents. They applied their model to compositions by Bach, demonstrating the usefulness of melodic accent in detecting meter in these examples.

Scheirer (1998) provided a model of beat tracking that treats audio files directly and performs relatively well over a wide range of musical styles (41 correct of 60 examples). Though he does not use autocorrelation, he uses related comb filtering techniques applied to the signal itself rather than to discrete elements such as note onset times. His model required extensive multi-band preprocessing: he filtered the audio signal into several bands and then downsampled, differentiated and rectified each band. He then passed these signals into a bank of 150 comb filters, selecting the maximum output to recover the tempo and phase. Tempo changes were handled by repeatedly changing the choice of filter.

Volk (2004) explored the influence of interactions between levels in the metrical hierarchy on metrical accenting. Her model compared a metric interpretation gained by analyzing note onsets to an interpretation gained by analyzing the time signature of the musical score. Her method included the computation of a weighted score of a particular candidate meter as extended through the entire piece of music.

Toiviainen and Eerola (2004) also investigated an autocorrelation-based meter induction model. Their focus was on the relative usefulness of durational accent and melodic accent in predicting meter. The authors observed that durational and melodic accents provide a modest boost in performance when used in conjunction with unaccented data, but that unaccented data was the most useful single factor for successful meter classification. Central to their model was stepwise discriminant function analysis, a powerful tool for analyzing data. However, as this method does not control against overfitting, further tests are necessary to know how well the model will generalize.

Klapuri et al. (2005) incorporate the signal processing approaches of Goto (2001) and Scheirer in a model that analyzes the period and phase of three levels of the metrical hierarchy: the fastest-changing level, or tatum; the most prominent level, or tactus (usually the same level as the foot tapping rate); and the level at which musical measures are grouped. A probabilistic model aided by hand-encoded musical prior knowledge is used for joint estimation of pulses at the three levels. A fixed three-level approach to meter may pose difficulties for processing rhythmically simple music, where the tactus and tatum may be identical (imagine a fugue consisting entirely of eighth notes). It may also pose difficulties for processing rhythmically complex music, where there can exist multiple stable metrical levels separating the tatum and tactus. Despite this, the model in fact performs very well at tempo induction, as seen in the ISMIR 2004 Tempo Induction contest (Gouyon et al., 2005). We return to this in Section 4.

3 Model Details

We describe a model that uses autocorrelation as its core, but that takes advantage of the distribution of energy in phase space as a method to overcome weaknesses in standard autocorrelation. The model is described in Sections 3.1 through 3.6.

3.1 Preprocessing

For MIDI files, the onsets can be transformed into spikes with amplitude proportional to their MIDI note onset volume. Alternately, MIDI files can simply be rendered as audio and written to wave files. Stereo audio files are converted to mono by taking the mean of the two channels. Then files are downsampled to some rate near 1000Hz. The actual rate is kept variable because it depends on the original sampling rate. For CD audio (44.1kHz), we used a sampling rate of 1050Hz, allowing us to downsample by a factor of 42 from the original file. Best results were achieved by computing a sum-of-squares envelope over windows of size 42 with 5 points of overlap. However, for most audio sources a simple decimation and rectification works as well. The model was not very sensitive to changes in sampling rate nor to minor adjustments in the envelope computation, such as substituting RMS (root mean square) for the sum-of-squares computation.

One of our goals was to avoid complicated preprocessing, and we succeeded in doing so. However, there is no reason that our model could not be adapted to work with multi-band filtering approaches as used by, e.g., Klapuri et al. (2005) and Goto (2001). This did not seem necessary for our current experiments but it may be necessary for future work in online beat tracking.

3.2 Autocorrelation Phase Matrix

The method of cross-correlation is commonly used to evaluate whether two signals exhibit common features and are therefore correlated (Ifeachor and Jervis, 1993). To perform cross-correlation one computes the sum of the products of corresponding pairs of two signals. A range of lags is considered, accounting for potential time delays between correlated information in the two signals. The formula for the lag-k cross-correlation C_k between signals x_1 and x_2 (having length N) is:

    C_k(x_1, x_2) = \frac{1}{N} \sum_{n=0}^{N-k-1} x_1(n) \, x_2(n+k)    (1)

Autocorrelation is the special case of cross-correlation where x_1 = x_2. There is a strong and somewhat surprising link between autocorrelation and the Fourier transform. Namely, the autocorrelation A of a signal X (having length N) is:

    A(X) = \mathrm{ifft}(\,|\mathrm{fft}(X)|\,)    (2)

where fft is the (fast) Fourier transform, ifft is the inverse (fast) Fourier transform and |.| is the complex modulus.
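For concreteness, the following numpy sketch implements the envelope preprocessing and Equation (2). The exact window/hop arithmetic (a 47-point window hopped by 42 samples, our reading of "windows of size 42 with 5 points of overlap") and all function names are our assumptions, not the paper's code; Equation (2) is implemented literally, with the complex modulus rather than the more common squared modulus.

```python
import numpy as np

def envelope(x, factor=42, overlap=5):
    # Sum-of-squares envelope: one output point per `factor` input
    # samples, each summed over a window that overlaps the next by
    # `overlap` points (our reading of the description above).
    win = factor + overlap
    n_out = (len(x) - win) // factor + 1
    return np.array([np.sum(x[i * factor : i * factor + win] ** 2)
                     for i in range(n_out)])

def autocorrelation(x):
    # Equation (2): A(X) = ifft(|fft(X)|), keeping the real part.
    return np.real(np.fft.ifft(np.abs(np.fft.fft(x))))

# Example: a 44.1 kHz stereo signal reduced to a ~1050 Hz mono
# envelope, then autocorrelated.
stereo = np.random.randn(44100 * 5, 2)   # stand-in for real audio
mono = stereo.mean(axis=1)
env = envelope(mono)
acorr = autocorrelation(env)
```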

One advantage of autocorrelation for our purposes is that it is defined over periods rather than frequencies (note the application of the IFFT in Equation 2), yielding a better representation of low-frequency information than is possible with the FFT.

Autocorrelation values for a random signal should be roughly equal across lags. Spikes in an autocorrelation indicate temporal order in a signal, making it possible to use autocorrelation to find the periods at which high correlation exists in a signal. As a musical example, consider the autocorrelation of a ChaChaCha from the ISMIR 2004 Tempo Induction contest, shown in Figure 1. The peaks of the autocorrelation align with the tempo and with integer multiples of the tempo.

[Figure 1: Autocorrelation of a ChaChaCha from the ISMIR 2004 Tempo Induction contest (Albums-Cafe_Paradiso-08.wav). The dotted vertical line marks the actual tempo of the song (484 msec, 124 bpm).]

Unfortunately, autocorrelation has been shown in practice not to work well for many kinds of music. For example, when a signal lacks strong onset energy, as it might for voice or smoothly changing musical instruments like strings, the autocorrelation tends to be flat. See for example a song by Manos Xatzidakis from the ISMIR 2004 Tempo Induction contest in Figure 2. Here the peaks are less sharp and are not well aligned with the target tempo. Note that the y-axis scale of this graph is identical to that in Figure 1. One way to address this is to apply the autocorrelation to a number of band-pass filtered versions of the signal, as discussed in Section 3.1.

In place of multi-band processing we compute the distribution of autocorrelation energy in phase space. This has a sharpening effect, allowing autocorrelation to be applied to a wider range of signals than autocorrelation alone, without extensive preprocessing. The autocorrelation phase information for lag l is a vector A_l:

    A_l(\phi) = \sum_{i=0}^{\lfloor (N-l)/l \rfloor} x(li+\phi) \, x(l(i+1)+\phi), \qquad \phi = 0, \ldots, l-1    (3)

We compute an autocorrelation phase vector A_l for each lag of interest. In our case the minimum lag of interest was 200ms and the maximum lag of interest was 3999ms. Lags were sampled at 1ms intervals, yielding L = 3800 lags. Equation 3 effectively wraps the signal modulo the lag l in question, yielding vectors of differing lengths (|A_l| = l). To simplify later computations we normalized the length of all vectors by computing a histogram estimate. This was achieved by fixing the number of phase points for all lags at K (K = 50 for all simulations; larger values were tried and yielded similar results, but significantly smaller values resulted in a loss of temporal resolution) and resampling the variable-length vectors to this fixed length. This process yielded a rectangular autocorrelation phase matrix P where |P| = [L, K].
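A direct, if unoptimized, sketch of the matrix construction follows. Variable names are ours, and the signal is assumed to be sampled near 1000Hz so that one sample corresponds to roughly one millisecond of lag; the summation bound is our reading of Equation (3).

```python
import numpy as np

def autocorr_phase_matrix(x, lags, K=50):
    # Build the autocorrelation phase matrix P with |P| = [L, K].
    # Row r holds Equation (3) for lag lags[r], evaluated at each
    # phase phi in [0, l) and then histogram-resampled to K bins.
    P = np.zeros((len(lags), K))
    for r, l in enumerate(lags):
        n = len(x) // l - 1                  # number of summed products
        i = np.arange(n)
        A_l = np.array([np.sum(x[l * i + phi] * x[l * (i + 1) + phi])
                        for phi in range(l)])
        # Histogram-style resampling of the length-l phase vector
        # down to K fixed phase bins.
        bins = (np.arange(l) * K) // l
        np.add.at(P[r], bins, A_l)
    return P

# Lags from 200 to 3999 samples (~ms at a ~1 kHz envelope rate).
# Summing each row of P recovers the standard (lag-limited)
# autocorrelation, as noted in the text.
# P = autocorr_phase_matrix(env, np.arange(200, 4000))
```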

As an example of an autocorrelation phase matrix, consider Figure 3, which shows the rectified, normalized signal from a piano rendition of one of the rhythmic patterns from Povel and Essens (1985). The pattern was rendered with a base inter-onset interval of 300ms. On the left in Figure 4 the autocorrelation phase matrix is shown. On the right, the sum of the matrix by row is shown: it is the standard autocorrelation.

3.3 Autocorrelation Phase Entropy

As already discussed, it is possible to improve significantly on the performance of autocorrelation by taking advantage of the distribution of energy in the autocorrelation phase matrix. The idea is that metrically-salient lags will tend to have a more spike-like distribution than non-metrical lags. Thus even if the autocorrelation is evenly distributed by lag, the distribution of autocorrelation energy in phase space should not be so evenly distributed. There are at least two possible measures of spikiness in a signal, variance and entropy. We focus here on entropy, although experiments using variance yielded very similar results.

Entropy is the amount of disorder in a system. The Shannon entropy H of a probability density X is:

    H(X) = -\sum_{i=1}^{N} X(i) \log_2 [X(i)]    (4)

We compute the entropy for lag l in the autocorrelation phase matrix as follows:

    A_{\mathrm{sum}} = \sum_{i=0}^{N} A_l(i)    (5)

    H_l = -\sum_{i=0}^{N} \frac{A_l(i)}{A_{\mathrm{sum}}} \log_2 \left[ \frac{A_l(i)}{A_{\mathrm{sum}}} \right]    (6)

This entropy value, when multiplied into the autocorrelation, significantly improves tempo induction. For example, in Figure 5 we show the autocorrelation along with the autocorrelation multiplied by the entropy for the same Manos Xatzidakis song shown in Figure 2. On the bottom, observe how the detrended (1 - entropy) information aligns well with the target lag and its multiples. (Detrending was done to remove a linear trend that favors short lags. Simulations revealed that performance is only slightly degraded when detrending is omitted.) The most robust performance was achieved when autocorrelation and entropy were multiplied together. This was done by detrending both the autocorrelation and entropy vectors, scaling them both between 0 and 1 and then multiplying them together.
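The per-lag entropy and its combination with the autocorrelation can be sketched as follows. The detrending and scaling follow the description above; the guard against zero-sum rows and the function names are our additions.

```python
import numpy as np

def lag_entropy(P):
    # Shannon entropy of each row of the phase matrix (Eqs. 4-6),
    # treating each normalized row as a probability density.
    prob = P / np.maximum(P.sum(axis=1, keepdims=True), 1e-12)
    logp = np.where(prob > 0, np.log2(np.where(prob > 0, prob, 1.0)), 0.0)
    return -(prob * logp).sum(axis=1)

def detrend_scale(v):
    # Remove a linear trend (which favors short lags), then scale
    # the result to [0, 1].
    t = np.arange(len(v))
    v = v - np.polyval(np.polyfit(t, v, 1), t)
    return (v - v.min()) / max(v.max() - v.min(), 1e-12)

def acorr_times_entropy(P):
    # AE: detrended, scaled autocorrelation (row sums of P)
    # multiplied by detrended, scaled (1 - entropy).
    return detrend_scale(P.sum(axis=1)) * detrend_scale(1.0 - lag_entropy(P))
```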

3.4 Metrical hierarchy selection

We now move away from the autocorrelation phase matrix for the moment and address the task of selecting a winning metrical hierarchy. A rough estimate of meter can be had by simply summing hierarchical combinations of autocorrelation lags. In place of standard autocorrelation we use the product AE of autocorrelation and (1 - entropy), as described above. The likelihood of a duple meter existing at lag l can be estimated using the following sum:

    M_l^{\mathrm{duple}} = AE(l) + AE(2l) + AE(4l) + AE(8l)    (7)

The likelihood of a triple meter is estimated using the following sum:

    M_l^{\mathrm{triple}} = AE(l) + AE(3l) + AE(6l) + AE(12l)    (8)

Other candidate meters can be constructed using similar combinations of lags. A winning meter can be chosen by sampling all reasonable lags (e.g. 200ms <= l <= 2000ms) and comparing the resulting M_l values. Provided that the same number of points is used for all candidate meters, these M_l values can be compared directly, allowing a single winning meter to be selected from among all possible lags and all possible meters. Furthermore, this search is efficient, given that each lag/candidate-meter combination requires only a few additions. For the meter prediction simulations in Section 4 this was the process used to select the meter.

3.5 Prediction of tempo

Once a metrical hierarchy is chosen, there are several simple methods for selecting a winning tempo from among the winning lags. One option is to pick the lag closest to a comfortable tapping rate, say 600ms. A second, better option is to multiply the autocorrelation lags by a window such that more accent is placed on lags near a preferred tapping rate. The window can be applied either before or after choosing the hierarchy. If it is applied before selecting the metrical hierarchy, then the selection process is biased towards lags in the tapping range. We tried both approaches; applying the window before selection yields better results, but only marginally so (on the order of 1% better performance on the tempo prediction tasks described below). To avoid adding more parameters to our model we did not construct our own windowing function. Instead we used the function (with no changes to parameters) described in Parncutt (1994): a Gaussian window centered at 600ms and symmetrical in log-scale frequency.
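Combining Equations (7) and (8) with the tapping-rate window gives the following sketch of the selection procedure. The window width (one octave) and the zero-padding of AE beyond the computed lag range are our assumptions, not values from the paper.

```python
import numpy as np

HIERARCHIES = {"duple": (1, 2, 4, 8), "triple": (1, 3, 6, 12)}

def select_meter(AE_ms, min_lag=200, max_lag=2000):
    # AE_ms[l] holds autocorrelation * (1 - entropy) at lag l (ms),
    # zero outside the computed 200-3999 ms range (our handling).
    # Tapping-rate bias after Parncutt (1994): a Gaussian centered
    # at 600 ms, symmetric in log lag; the width is our guess.
    lag = np.arange(len(AE_ms))
    window = np.exp(-0.5 * np.log2(np.maximum(lag, 1) / 600.0) ** 2)
    AEw = np.pad(AE_ms * window, (0, 12 * max_lag))  # so l*12 stays in range
    best = (-np.inf, None, None)
    for meter, mults in HIERARCHIES.items():
        for l in range(min_lag, max_lag + 1):
            s = sum(AEw[l * m] for m in mults)       # Eq. (7) / Eq. (8)
            if s > best[0]:
                best = (s, meter, l)
    return best[1], best[2]   # winning meter and its base lag (ms)
```

Because both candidate hierarchies sum the same number of points (four), the duple and triple scores can be compared directly, as the text notes.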

3.6 Alignment of predicted hierarchy with signal

The autocorrelation phase matrix provides the necessary information for aligning the selected metrical hierarchy with a score. Such an alignment is useful for tasks like downbeat induction. Our strategy for alignment is to integrate information from the autocorrelation phase matrix at all levels of the selected metrical hierarchy.

As an example of this process, consider again the first Povel & Essens pattern shown in Figure 3. The autocorrelation phase matrix is shown in Figure 4. Given that the rows represent relative phase, it is illustrative to distort the matrix into a disk, as seen in Figure 6. Here progressively slower (longer) lags are shown further from the origin. The metrical hierarchy selection algorithm described above in Section 3.4 selects a duple meter at lags 300, 600, 1200, 2400 and 4800 ms. (This is the correct set of lags; recall that the pattern was rendered with 300ms inter-onset intervals.) If we display only these lags on the disk, the metrical structure of the pattern begins to emerge; see Figure 7.

The metrical hierarchy selection algorithm chooses a small set of rows from the autocorrelation phase matrix. We interpret these as defining a genuine metrical hierarchy. Recall that a metrical hierarchy is a set of nested, aligned periodicities. This suggests that slower lags must align with faster lags, which provides us with a strong constraint on how to generate an alignment. In a bottom-up fashion (from short lags to long lags) we select a winner and then constrain subsequent winners to align with previous winners. Our constraint is not a hard one. Instead we simply multiply slower lags by the phase-aligned value at the closest faster level. (It is important not to have a hard constraint here in order to allow effects like syncopation to be seen. This topic is unfortunately out of the scope of this paper.) This bottom-up, level-by-level multiplication yields a new set of autocorrelation phase values that are accented based on the selected meter (Figure 8). Observe that without the bottom-up propagation of metrical information, the autocorrelation phase matrix reveals no preference for which of the nine events cycled at 4800ms should be selected as a downbeat. After the bottom-up propagation, the correct downbeat is properly accented.
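A sketch of this bottom-up propagation follows, assuming integer lag ratios between adjacent levels. The phase-bin mapping is our reading of "the phase-aligned value at the closest faster level"; it is not code from the paper.

```python
import numpy as np

def align_hierarchy(P, level_rows, ratios):
    # Bottom-up propagation over the selected hierarchy levels,
    # fastest lag first.  ratios[i] is the integer lag ratio between
    # level i and level i-1 (e.g. 2 at every step of a duple
    # hierarchy); ratios[0] is unused.
    K = P.shape[1]
    accented = [P[level_rows[0]].astype(float).copy()]
    for i in range(1, len(level_rows)):
        faster = accented[-1]
        slower = P[level_rows[i]].astype(float).copy()
        m = ratios[i]
        for b in range(K):
            # Phase bin b of the slower lag coincides in absolute
            # time with phase bin (b * m) % K of the next faster lag.
            slower[b] *= faster[(b * m) % K]
        accented.append(slower)
    return accented

# The downbeat phase is then the argmax of the slowest level, e.g.:
# phase_bin = np.argmax(align_hierarchy(P, rows, [1, 2, 2, 2, 2])[-1])
```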

This example only considers a short repeating pattern having no acceleration or deceleration. To apply the model to online tasks like beat induction, it is necessary to compute the model online on windowed audio and to cope with tempo changes. Our approach is to apply a standard slow exponential decay to the autocorrelation phase matrix and to incorporate new evidence from the signal into the matrix such that there is some spreading of energy to nearby tempos. With this approach it is not necessary to rebuild the matrix, but simply to update it for each lag, making an efficient implementation possible. Another approach would be to use a Hidden Markov Model to smooth the window-by-window predictions. This is similar to the approach taken by Klapuri et al. (2005) to incorporate evidence in their three-level model.

4 Simulations

We have run the model on several datasets. To test tempo induction we used the Ballroom and Song Excerpts databases from the ISMIR 2004 Tempo Induction contest. For testing the ability of the model to perform meter prediction we used the Essen European Folksong database and the Finnish Folk Song database. We also include preliminary simulations on alignment using the 35 artificial patterns from Povel and Essens (1985).

4.1 ISMIR 2004 Tempo Induction

We used two datasets from the ISMIR 2004 Tempo Induction contest (Gouyon et al., 2005). The first dataset was the Ballroom dataset, consisting of 698 wav files, each approximately 30 seconds in duration, encompassing eight musical styles. See Table 1 for a breakdown of song styles along with the performance of our model on the dataset. In the table, Acc. A is Accuracy A from the contest: the number of correct predictions within 4% of the target tempo. Acc. B is Accuracy B from the contest; it also takes into account misses due to predicting the wrong level of the metrical hierarchy, so answers are treated as correct if they are within 4% of the target tempo multiplied by 2, 3, 1/2 or 1/3. Acc. C is our own measure, which also treats answers as correct if they are within 4% of the target tempo multiplied by 2/3 or 3/2. This gives us a measure of model failure due to predicting the wrong meter.
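Expressed in code, the three measures might look as follows. We state the 4% tolerance in terms of lag rather than tempo in bpm, which is approximately equivalent at this tolerance; this simplification and the function name are ours.

```python
def tempo_accuracies(pred_lag_ms, true_lag_ms, tol=0.04):
    # Accuracy A: within 4% of the target.
    # Accuracy B: also allow the target times 2, 3, 1/2 or 1/3
    #             (wrong metrical level).
    # Accuracy C (ours): additionally allow 2/3 or 3/2 (wrong meter).
    def hit(factor):
        target = true_lag_ms * factor
        return abs(pred_lag_ms - target) <= tol * target
    acc_a = hit(1)
    acc_b = acc_a or any(hit(f) for f in (2, 3, 0.5, 1 / 3))
    acc_c = acc_b or any(hit(f) for f in (2 / 3, 3 / 2))
    return acc_a, acc_b, acc_c
```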

Table 1: Performance of the model by genre on the Ballroom dataset. See text for details.

    Style         Count  Acc. A  Acc. B  Acc. C
    ChaChaCha       111     106     107     109
    Jive             60       6      60      60
    Quickstep        82       0      77      80
    Rumba            98      84      85      92
    Samba            86      78      79      83
    Tango            86      81      82      83
    Vienn. Waltz     65       0      57      64
    Waltz           110      86      86      93
    Global          698     441     633     664

We computed several baseline models for the Ballroom dataset. These results are shown along with our best results and those of the contest winner, Klapuri et al. (2005), in Table 2. The "Acorr Only" model uses simple autocorrelation. The "Acorr+Meter" model incorporates the strategy described in this paper for using multiple hierarchically-related lags in prediction. The "Acorr+Entropy" model uses autocorrelation plus entropy as computed on the autocorrelation phase matrix (but no meter). The full model could also be called "Acorr+Entropy+Meter" and is the one described in this paper. "Klapuri" shows the results for the contest winner. Two things are important to note. First, it is clear that both of our two main ideas, meter reinforcement ("Meter") and entropy calculation ("Entropy"), aid in computing tempo. Second, the model seems to work well, returning results that compete with the contest winner.

We also used the Song Excerpts dataset from the ISMIR 2004 contest. This dataset consisted of 465 songs of roughly 20sec duration spanning nine genres. Due to space constraints, we do not report model performance on individual genres. In Table 3 the results are summarized in a format identical to Table 2. Here it can be seen that our model performed slightly better than the winning model on Accuracy A but performed considerably worse on Accuracy B. In our view, Accuracy B is a more important measure because it reflects that the model has correctly predicted the metrical hierarchy but has simply failed to report the appropriate level in the hierarchy.

Table 2: Summary of models on the Ballroom dataset. See text for details.

    Model           Acc. A  Acc. B  Acc. C
    Acorr Only         49%     77%     77%
    Acorr+Meter        58%     80%     85%
    Acorr+Entropy      41%     85%     85%
    Full Model         63%     91%     95%
    Klapuri            63%     91%     93%

Table 3: Summary of models on the Song Excerpts dataset. See text for details.

    Model           Acc. A  Acc. B  Acc. C
    Acorr Only         49%     64%     64%
    Acorr+Meter        50%     80%     85%
    Acorr+Entropy      53%     74%     74%
    Full Model         60%     79%     88%
    Klapuri            58%     91%     94%

4.2 Essen Database

We computed our model on a subset of the Essen collection (Schaffrath, 1995) of European folk melodies. We selected all melodies in either duple meter (i.e. having 2^n eighth notes per measure; e.g. 2/4 and 4/4) or triple/compound meter (i.e. having 3n eighth notes per measure; e.g. 3/4 and 6/8). This resulted in a total of 5507 melodies, of which 57% (3121) were in duple meter and 43% (2386) were in triple/compound meter. The task was to predict the meter of the piece as being either duple or triple/compound. This is exactly the same dataset and task studied in Toiviainen and Eerola (2004).

Our results were promising. We classified 90% of the examples correctly (4935 of 5507). Our model performed better on duples than on triple/compounds, classifying 94% of the duple examples correctly (2912 of 3121) and 85% of the triple/compound examples correctly (2023 of 2386).

These success rates are similar to those in Toiviainen and Eerola (2004). However, it is difficult to compare our approaches because their data analysis technique (stepwise discriminant function analysis) does not control for in-sample versus out-of-sample errors. Functions are combined using the target value (the meter) as a dependent variable. This is suitable for weighing the relative predictive power of each function, but not for predicting how well the ensemble of functions would perform on unseen data unless separate training and testing sets or cross-validation are used. Our approach used no supervised learning.

4.3 Finnish Folk Songs Database

We performed the same meter prediction task on a subset of the Finnish Folksong database (Eerola and Toiviainen, 2004). This dataset was also treated by Toiviainen and Eerola (2004), and the selection criteria were the same. For this dataset we used 7139 melodies, of which 80% (5720) were in duple meter and 20% (1419) were in triple/compound meter. (For the Toiviainen and Eerola study, 6861 melodies were used due to slightly more stringent selection criteria. However, the ratio of duples to triple/compounds is almost identical.) Note that the dataset is seriously imbalanced: a classifier which always guesses duple will have a success rate of 80%. However, given the relative popularity of duple over triple, this imbalance seems unavoidable.

Our results were promising. We classified 93% of the examples correctly (6635 of 7139). Again, our model performed better on duples than on triple/compounds, classifying 95% of the duple examples correctly (5461 of 5720) and 83% of the triple/compound examples correctly (1174 of 1419).

4.4 Povel & Essens Patterns

To test alignment (beat induction) we used a set of rhythms from Experiment 1 of Povel and Essens (1985). These rhythms are generated by permuting the interval sequence 1 1 1 1 1 2 2 3 and terminating it with the interval 4. These length-16 patterns all contain nine notes and seven rests, and are cycled repeatedly. The Povel & Essens model works by applying a set of rules that force the accentuation of (a) singleton isolated events, (b) the second of two isolated events and (c) the first and last of a longer group of isolated events.
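As an aside, the conversion from such an interval sequence to a model input signal can be sketched as follows. The particular permutation shown and the rendering as a unit spike train (rather than the piano rendering used in our experiments) are illustrative assumptions.

```python
import numpy as np

def render_pattern(intervals, base_ioi_ms=300, reps=2, fs=1000):
    # Spike train with a unit-amplitude spike at each onset.
    # Intervals are multiples of the base inter-onset interval.
    period_ms = sum(intervals) * base_ioi_ms
    onset_ms = np.cumsum([0] + list(intervals[:-1])) * base_ioi_ms
    x = np.zeros(int(period_ms * reps * fs / 1000))
    for r in range(reps):
        x[((onset_ms + r * period_ms) * fs // 1000).astype(int)] = 1.0
    return x

# One permutation of 1 1 1 1 1 2 2 3 terminated by 4 (illustrative;
# not necessarily the ordering of Pattern 1 itself):
x = render_pattern([1, 2, 1, 1, 2, 1, 1, 3, 4])
```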

Of particular importance is that they validated their model using a set of psychological experiments with human subjects. Our model predicted the correct downbeat (correct with respect to the Povel & Essens model) 97% of the time (34 of 35 patterns). The pattern where the model failed was pattern 27. Our interest in this dataset lies less in the error rate and more in the fact that we can make good predictions for these patterns without resorting to perceptual accentuation rules.

5 Discussion

Though the model does not perform as well as Klapuri et al. on Accuracy B of the Song Excerpts dataset, it still performs quite well on tempo extraction in general. It achieves this without complex multi-band preprocessing and without supervised learning. While we must compute the autocorrelation phase matrix, which is time consuming, there are other motivations for computing this matrix, such as performing an alignment. Thus the time spent computing the matrix may be offset by the ability to reuse the data structure in several ways. Finally, we had the Ballroom and Song Excerpts datasets for nearly a month. Though our model does not use supervised learning and thus cannot explicitly cheat, we admit that it is also possible to improve a nonparametric model by tuning it on the same dataset for which one is reporting results.

The model seems to perform basic meter categorization relatively well. It performed at competitive levels on both the Essen and the Finnish simulations. Furthermore, it achieved good performance without risk of undergeneralizing due to overfitting from supervised learning. One area of current research is to see how well the model does at alignment (identifying the location of downbeats) in the Essen and Finnish databases. As evidenced by the Povel & Essens results, the model has potential for aligning an induced metrical hierarchy with a musical sequence. Though we have many other examples of this ability, including some entertaining automatic drumming to Mozart compositions, we have yet to undertake a methodical study of the limitations of our model on alignment. This, and related tasks like online beat induction, are areas of ongoing research.

6 Conclusions

This paper introduced a novel way to detect metrical structure in music and to use meter as an aid in detecting tempo. Two main ideas were explored. First, we discussed an improvement to using autocorrelation for musical feature extraction via the computation of an autocorrelation phase matrix, along with the computation of the Shannon entropy for each lag in this matrix as a means of sharpening the standard autocorrelation. Second, we discussed ways to use the autocorrelation phase matrix to compute an alignment of a metrical hierarchy with music. We applied the model to the tasks of meter prediction and tempo induction on large datasets. We also provided preliminary results for aligning the metrical hierarchy with a piece (downbeat induction). Though much of this work is preliminary, we believe the results in this paper suggest that the approach warrants further investigation.

7 Acknowledgements

We would like to thank Fabien Gouyon, Petri Toiviainen and Tuomas Eerola for many helpful email correspondences.

References

Brown, J. (1993). Determination of meter of musical scores by autocorrelation. Journal of the Acoustical Society of America, 94:1953-1957.

Cemgil, A. T. and Kappen, H. J. (2003). Monte Carlo methods for tempo tracking and rhythm quantization. Journal of Artificial Intelligence Research, 18:45-81.

Cemgil, A. T., Kappen, H. J., Desain, P., and Honing, H. (2001). On tempo tracking: Tempogram representation and Kalman filtering. Journal of New Music Research, 28(4):259-273.

Cooper, G. and Meyer, L. B. (1960). The Rhythmic Structure of Music. The Univ. of Chicago Press.

Dixon, S. E. (2001). Automatic extraction of tempo and beat from expressive performances. Journal of New Music Research, 30(1):39-58.

Eck, D. (2002). Finding downbeats with a relaxation oscillator. Psychological Research, 66(1):18-25.

Eerola, T. and Toiviainen, P. (2004). Digital Archive of Finnish Folktunes [computer database]. University of Jyvaskyla. http://www.jyu.fi/musica/sks.

Goto, M. (2001). An audio-based real-time beat tracking system for music with or without drum-sounds. Journal of New Music Research, 30(2):159-171.

Gouyon, F., Klapuri, A., Dixon, S., Alonso, M., Tzanetakis, G., Uhle, C., and Cano, P. (2005). An experimental comparison of audio tempo induction algorithms. Submitted.

Handel, S. (1993). Listening: An Introduction to the Perception of Auditory Events. MIT Press, Cambridge, Mass.

Ifeachor, E. C. and Jervis, B. W. (1993). Digital Signal Processing: A Practical Approach. Addison-Wesley Publishing Company.

Klapuri, A., Eronen, A., and Astola, J. (2005). Analysis of the meter of acoustic musical signals. IEEE Transactions on Speech and Audio Processing. To appear.

Large, E. W. and Kolen, J. F. (1994). Resonance and the perception of musical meter. Connection Science, 6:177-208.

Parncutt, R. (1994). A perceptual model of pulse salience and metrical accent in musical rhythms. Music Perception, 11:409-464.

Povel, D. and Essens, P. (1985). Perception of temporal patterns. Music Perception, 2:411-440.

Schaffrath, H. (1995). The Essen Folksong Collection in Kern Format [computer database]. Center for Computer Assisted Research in the Humanities.

Scheirer, E. (1998). Tempo and beat analysis of acoustic musical signals. Journal of the Acoustical Society of America, 103(1):588-601.

Toiviainen, P. and Eerola, T. (2004). The role of accent periodicities in meter induction: a classification study. In Lipscomb, S., Ashley, R., Gjerdingen, R., and Webster, P., editors, Proceedings of the Eighth International Conference on Music Perception and Cognition (ICMPC8), Adelaide, Australia. Causal Productions.

Volk, A. (2004). Exploring the interaction of pulse layers regarding their influence on metrical accents. In Lipscomb, S., Ashley, R., Gjerdingen, R., and Webster, P., editors, Proceedings of the Eighth International Conference on Music Perception and Cognition (ICMPC8), Adelaide, Australia. Causal Productions.

Vos, P., van Dijk, A., and Schomaker, L. (1994). Melodic cues for metre. Perception, 23:965-976.

[Figure 2: Autocorrelation of a song by Manos Xatzidakis from the ISMIR 2004 Tempo Induction contest (15-AudioTrack 15.wav). The dotted vertical line marks the actual tempo of the song (563 msec, 106.6 bpm). Compare the flatness of the autocorrelation and the lack of alignment between peaks and the target.]

[Figure 3: The rectified, normalized signal generated by creating a piano rendering from a MIDI version of Povel & Essens Pattern 1. Two repetitions of the length-16 nine-event pattern are shown. See Povel and Essens (1985) for details.]

[Figure 4: The autocorrelation phase matrix for Povel & Essens Pattern 1, computed for lags 250ms through 500ms. The phase points are shown in terms of relative phase (0, 2π). On the right it is shown that taking the sum of the matrix by row yields exactly the autocorrelation.]

[Figure 5: Autocorrelation and entropy calculations for the same Manos Xatzidakis song shown in Figure 2. The top is the autocorrelation and is identical to Figure 2 except that it is scaled to [0, 1]. On the bottom is (1 - entropy), scaled to [0, 1] and detrended. Observe how the entropy spikes align well with the correct tempo lag of 563ms and with its integer multiples (shown as vertical dotted lines in both plots).]

[Figure 6: The autocorrelation phase matrix for Povel & Essens Pattern 1, shown as a disk with progressively slower (longer) lags shown further from the origin.]

[Figure 7: The autocorrelation phase matrix for Povel & Essens Pattern 1. Only those lags chosen by the metrical hierarchy selection algorithm (300, 600, 1200, 2400 and 4800 ms) are shown. The outermost ring shows the entire 9-element repeating pattern.]

[Figure 8: The autocorrelation phase matrix for Povel & Essens Pattern 1 after bottom-up propagation of metrical information. Progressively slower lags (further out on the disk) are multiplied by the phase-adjusted values at the next faster (closer) level. This biases slower lags to be phase-aligned with faster lags. Notice that the outermost ring containing the 9-element repeating pattern now reflects metrical accenting, making it easy to select the correct downbeat.]