Time Signature Detection by Using a Multi-Resolution Audio Similarity Matrix


Dublin Institute of Technology, ARROW@DIT, Audio Research Group conference papers, 2007. Recommended citation: Gainza, M. and Coyle, E., "Time signature detection by using a multi resolution audio similarity matrix", 122nd Audio Engineering Society Convention, May 5-8, 2007, Vienna, Austria. This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License.



Audio Engineering Society Convention Paper

Presented at the 122nd Convention, 2007 May 5-8, Vienna, Austria

The papers at this Convention have been selected on the basis of a submitted abstract and extended precis that have been peer reviewed by at least two qualified anonymous reviewers. This convention paper has been reproduced from the author's advance manuscript, without editing, corrections, or consideration by the Review Board. The AES takes no responsibility for the contents. Additional papers may be obtained by sending request and remittance to Audio Engineering Society, 60 East 42nd Street, New York, New York 10165-2520, USA; also see www.aes.org. All rights reserved. Reproduction of this paper, or any portion thereof, is not permitted without direct permission from the Journal of the Audio Engineering Society.

Time Signature Detection by Using a Multi-Resolution Audio Similarity Matrix

Mikel Gainza (1), Eugene Coyle (2)

(1) Audio Research Group, Dublin Institute of Technology, Kevin St, Dublin 2, Ireland; mikel.gainza@dit.ie
(2) Audio Research Group, Dublin Institute of Technology, Kevin St, Dublin 2, Ireland; eugene.coyle@dit.ie

ABSTRACT

A method that estimates the time signature of a piece of music is presented. The approach exploits the repetitive structure of most music, where the same musical bar is repeated in different parts of a piece. The method utilises a multi-resolution audio similarity matrix, which allows comparisons between longer audio segments (bars) by combining comparisons of shorter segments (a fraction of a note). The method depends only on musical structure, and not on the presence of percussive instruments or strong musical accents.

1. INTRODUCTION

Standard staff music notation utilises symbols to indicate note durations (onset and offset times). The pitch of the notes is derived from the key signature and the position of the note symbols on the staff. In addition, information regarding the tempo, the commencement and end of the bars, and the time signature is also included in the staff [1]. Western music describes the time signature as the ratio between two integers, where the numerator indicates how many beats are in a bar and the denominator specifies the reference note value; in 6/8, for example, there are six eighth-note beats per bar.

There are numerous algorithms that perform pitch detection [2, 3], onset detection [4, 5], key signature estimation [6, 7] and tempo extraction [8, 9]. However, the detection of the metrical structure or the time signature remains a relatively unexplored area. In [10], Brown obtains the meter by using the autocorrelation function, under the assumption that the frequency of repetition of notes is greater on the downbeat of the musical bar. Gouyon estimates the meter (duple or triple) by tracking periodicities of low-level features around beat segments [11]. Even though the title of [12] refers to music meter, the approach focuses on detecting the time signature of Greek traditional music by using an audio similarity matrix (ASM) [13, 14], which compares all possible combinations of two frames of the domain utilised to represent the audio file (e.g., time domain, spectrogram, cepstrum).

The method described in [12] calculates the numerator and denominator of the time signature independently. The denominator is obtained by tracking similarities in the audio signal between instants separated by the beat duration; thus, it is assumed that successive notes will be similar. In a similar manner to [10], the time signature numerator is estimated by analysing the similarities between successive bars. However, both methods [10, 12] discard similarities between bars located at different points in the music.

In this paper, a time signature detection algorithm is presented which estimates the number of beats in a musical bar. The method is based on the use of an audio similarity matrix (ASM) [13]. The ASM exploits the repetitive structure of music, where the same musical bars, choruses or phrases frequently repeat in different parts of a piece. The presented approach seeks repetitions between any two possible musical bars, without requiring the periodic repetition of any musical event or the repetition of successive musical bars. Thus, the limitations of previous approaches are overcome.

Section 2 describes the components that comprise the time signature detector. Section 3 introduces a set of results that evaluate the detector. Finally, a discussion of the results and some future work are presented in Section 4.

2. PROPOSED APPROACH

The parts of the time signature detection system are described in this section. First, using prior knowledge of the tempo of the song, a spectrogram is generated with a frame length equal to a fraction of the beat duration. Next, the first note of the song is detected. A reference ASM is then produced by using Euclidean distance measures between the frames starting at the first note. Such a fine representation allows the approach to capture similarities between small musical events such as short notes. A multi-resolution ASM approach is then undertaken in order to form further audio similarity matrices representing a variety of bar length candidates. Having formed all the new ASMs within a certain range, the ASM which provides the highest similarity between its components will correspond to the bar length. Following this, a method to detect the anacrusis of the song is introduced; an anacrusis is an anticipatory note or notes occurring before the first bar of a piece [15]. Finally, the time signature is obtained and a more accurate tempo estimate is also provided.

2.1. Spectrogram

In order to provide a more accurate input to the problem of interest here (time signature detection), the tempo is semi-automatically estimated in the same manner as [10] and [11], where the tempo and the beat locations respectively were known. Using the tempo information, a spectrogram is generated from windowed frames of length L equal to a fraction (1/32) of the beat duration. The hop size H is equal to half the frame length L (1/64 of the beat duration):

$$X(m,k) = \left|\, \sum_{n=0}^{L-1} x(n + mH)\, w(n)\, e^{-j(2\pi/N)kn} \right| \qquad (1)$$

where w(n) is a Hanning window that selects an L-length block from the input signal x(n), and where m, N and k are the frame index, FFT length and bin number respectively. It should be noted that k ∈ {1 : N/2}.
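To make the framing of Section 2.1 concrete, the following is a minimal NumPy sketch of the beat-relative spectrogram of equation (1). The function name, parameter defaults and power-of-two FFT length are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def beat_spectrogram(x, sr, bpm, beat_fraction=32):
    """Magnitude spectrogram with frame length L = 1/32 of a beat and
    hop size H = L/2 (1/64 of a beat), following equation (1)."""
    beat_samples = sr * 60.0 / bpm                 # samples per beat
    L = max(2, int(round(beat_samples / beat_fraction)))
    H = max(1, L // 2)
    N = int(2 ** np.ceil(np.log2(L)))              # FFT length >= L (assumed)
    w = np.hanning(L)
    n_frames = 1 + (len(x) - L) // H
    X = np.empty((N // 2, n_frames))
    for m in range(n_frames):
        frame = x[m * H : m * H + L] * w
        # rfft yields N/2 + 1 bins; keep k in {1 : N/2} as in the paper
        X[:, m] = np.abs(np.fft.rfft(frame, N))[1 : N // 2 + 1]
    return X
```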
Following this, the first note of the song is detected by obtaining the energy in the frequency ranges E1 = [1 : 3000] Hz and E2 = [5000 : 12000] Hz [16]; this disables the columns of the spectrogram that contain no useful information. If a note has been played, E1 is expected to be much larger than E2; otherwise, the energy will be spread over the frequency axis, and it is assumed that the signal does not contain musical notes. Thus, using a high threshold Tn, the first note played in the song is estimated as the first frame for which:

$$\frac{E_1}{E_2} > T_n \qquad (2)$$

2.2. Reference Audio Similarity Matrix

An audio similarity matrix [13] is built by comparing all possible combinations of two spectrogram frames by utilising the Euclidean distance measure. Thus, the measure of similarity between two frames m = a and m = b is given by:

$$\mathrm{ASM}(a,b) = \sum_{k=1}^{N/2} \left[ X(a,k) - X(b,k) \right]^2 \qquad (3)$$
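Equation (2)'s first-note gate and equation (3)'s reference ASM could be sketched as follows; the threshold value, the bin-to-frequency mapping and all names are again our own assumptions.

```python
import numpy as np

def first_note_frame(X, sr, N, t_n=10.0):
    """First frame whose low/high band energy ratio exceeds Tn (eq. 2).

    X is the spectrogram from beat_spectrogram(); row i holds bin i + 1,
    i.e. frequency roughly (i + 1) * sr / N."""
    bins = lambda f: int(f * N / sr)
    e1 = np.sum(X[: bins(3000), :] ** 2, axis=0)              # E1: 1-3000 Hz
    e2 = np.sum(X[bins(5000) : bins(12000), :] ** 2, axis=0)  # E2: 5-12 kHz
    hits = np.flatnonzero(e1 > t_n * (e2 + 1e-12))            # eq. (2) gate
    return int(hits[0]) if hits.size else 0

def similarity_matrix(X):
    """Reference ASM (eq. 3): squared Euclidean distance between every
    pair of spectrogram frames."""
    sq = np.sum(X ** 2, axis=0)
    asm = sq[:, None] + sq[None, :] - 2.0 * (X.T @ X)
    return np.maximum(asm, 0.0)        # clip small negatives from rounding
```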

As an example, Figure 1 depicts the spectrogram of an excerpt of a MIDI-generated song played in a 6/8 time signature. The bar lines are also depicted in white; it can be seen that the excerpt comprises five complete bars and one incomplete bar.

[Figure 1: Spectrogram of a song played in 6/8; frequency (Hz) against time (s), with bar lines overlaid in white.]

The ASM of Figure 1's spectrogram is depicted in Figure 2, where the brightness of each matrix cell provides a measure of the similarity between two frames: a bright cell represents a dissimilar comparison and a dark cell a similar one. It should be noted that the presented time signature detector is designed to work with real audio signals; a MIDI example has been utilised for illustration purposes, since this format provides steady signals with constant tempo, which generates clearer figures.

[Figure 2: ASM of Figure 1's example; time (s) against time (s), with diagonals D1-D5 and ellipses marking the bar-pair groups.]

2.2.1. Multi-resolution matrices

The ellipses depicted in Figure 2 show the groups of cells in the audio similarity matrix that contain the comparisons between the frames of each possible combination of two musical bars. As an example, the group 1-2 denotes the comparison between the frames of bars 1 and 2, where the first frame of bar 1 is compared against the first frame of bar 2, the second frame of bar 1 against the second frame of bar 2, and so on. From Figure 2, it can be appreciated that the groups of cells denoted 2-3, 4-5, 1-4 and 1-5 show high similarity. This indicates that bars 1, 4 and 5, and bars 2 and 3, respectively contain similar notes, as can be appreciated from a visual inspection of Figure 1. Consequently, the components of an ASM with a resolution equal to the length of the musical bars will show a high degree of similarity.

The existence of any time signature within the 2/2 to 12/8 range is investigated, including complex time signatures such as 5/4, 7/8 and 11/8. Thus, the number of beats in a bar considered by this method lies in the range {2 : 12}. In addition, the maximum length of the bar is restricted to 3.5 s, which corresponds to a musical bar of 12 beats played at 205 bpm.

In order to obtain the time signature of the piece, the method successively combines integer numbers of components of the ASM to form groups of components of length Bar. Considering that the spectrogram frames are spaced at 1/64 of the beat duration, the range of Bar lies within {2·64 : 12·64}; each value of Bar corresponds to a bar length candidate. As an example, the combination of 6·64 = 384 components corresponds to a duration of 6 beats which, as can be seen in Figure 1, is the length of the musical bar of the song. For each bar length candidate Bar, the generation of a new ASM is simulated, as follows.

First, the diagonals of one side of the symmetric ASM (see Figure 2) which are integer multiples of Bar are extracted. Each diagonal provides information about the similarities between components of musical bar candidates separated by a different number of bars. As an example, the diagonals depicted as D1 and D2 in Figure 2 provide information about components separated by one bar and two bars respectively.

Next, each diagonal is partitioned into non-overlapping data segments of length equal to the bar length candidate Bar, which we denote as G, and an incomplete segment, which we denote as P. As an example, the components inside the ellipses located at the end of the x-axis side of Figure 2 (5-6, 4-6, 3-6, 2-6 and 1-6) correspond to the incomplete segments P. The remaining ellipses of Figure 2 group the components of each of the complete segments G (e.g., the components inside ellipse 1-2). Then, similarity measures of the complete and incomplete segments, which we denote SCS and SIS, provide the measure of similarity between two bars:

$$\mathrm{SCS} = \frac{1}{Bar}\sum_{i=1}^{Bar} G_i^2 \;\;(\text{complete bars}), \qquad \mathrm{SIS} = \frac{1}{r}\sum_{i=1}^{r} P_i^2 \;\;(\text{incomplete bars}) \qquad (4)$$

where G_i and P_i are the i-th components of the complete and incomplete segments respectively, and where r is the length of the incomplete segment.

Each SCS and SIS measure corresponds to a component of the new audio similarity matrix. The combination of these measures simulates the generation of an ASM from a spectrogram with a frame length equal to a multiple of the beat subdivision. Considering the example of Figure 2, the generation of a new ASM by grouping the components contained in the white ellipses is simulated as in Figure 3. As an example, SCS(1,2) and SIS(5,6) correspond to the similarity measures between bars 1 and 2, and bars 5 and 6, respectively. It should be noted that only one of the symmetric sides of the ASM is considered; the main diagonal, which does not provide any additional useful information, is also discarded.

[Figure 3: New ASM of Figure 2's example, whose cells are the SCS(a,b) and SIS(a,b) measures between bars a and b.]

In order to measure the similarity of each new ASM, the following measure SM is utilised:

$$\mathrm{SM} = \frac{\sum_{i=1}^{s_c} Bar \cdot \mathrm{SCS}_i + \sum_{i=1}^{s_i} r \cdot \mathrm{SIS}_i}{Bar \cdot s_c + r \cdot s_i} \qquad (5)$$

where s_c and s_i correspond to the number of SCS and SIS segments respectively. This equation weights the segments according to their lengths. Having obtained the SM of all the new ASMs, the bar length associated with the highest SM is deemed to be the bar length of the entire piece.

[Figure 4: Beats/bar detection of Figure 1's example; similarity measure (SM) against beats/bar, with the highest value at 6.]
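The grouping of equations (4) and (5) can be sketched as below: each diagonal at a multiple of the candidate Bar is cut into complete segments G and an incomplete remainder P, and their squared values are pooled with length weights, exactly the numerator and denominator of equation (5). One caveat: the ASM sketched earlier stores distances, so the best candidate minimises this pooled measure, whereas the paper plots a similarity that peaks; the orientation depends on how the ASM is normalised.

```python
import numpy as np

def bar_sm(asm, bar):
    """Pooled measure SM of equation (5) for one candidate.

    asm : reference ASM from similarity_matrix(); bar : length in frames."""
    num = den = 0.0
    for offset in range(bar, asm.shape[0], bar):   # diagonals at multiples of bar
        diag = np.diagonal(asm, offset=offset)
        full = len(diag) // bar
        for s in range(full):                      # complete segments G -> SCS
            num += np.sum(diag[s * bar : (s + 1) * bar] ** 2)  # Bar * SCS
            den += bar
        tail = diag[full * bar :]                  # incomplete segment P -> SIS
        if tail.size:
            num += np.sum(tail ** 2)               # r * SIS
            den += tail.size
    return num / den if den else np.inf

def detect_beats_per_bar(asm, frames_per_beat=64):
    """Try every bar length from 2 to 12 beats; return B (may be fractional)."""
    lo, hi = 2 * frames_per_beat, 12 * frames_per_beat
    best = min(range(lo, hi + 1), key=lambda bar: bar_sm(asm, bar))
    return best / frames_per_beat
```

detect_beats_per_bar mirrors the search over the {2·64 : 12·64} frame range, so the returned B can be fractional, as in the Sliabh example of Section 3.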

The multi-resolution audio similarity matrix approach allows comparisons between longer segments (bars) by combining shorter segments (1/32 of the reference note). The method avoids having to generate a new spectrogram and a new audio similarity matrix for each frame length considered in the analysis. In addition, the use of short segments provides good time resolution, which is required in order to compare individual notes located in different bars. In Figure 4, the SM of Figure 2's example is displayed for the full range of bar length candidates; the highest SM value corresponds to 6 beats in the bar.

2.3. Anacrusis Detection

The first note of the song displayed in Figure 2 corresponds to the first note of the first musical bar. However, this is not always the case, since other notes can be played before the first bar. In that case, the boundaries of the groups segmented from the diagonals of the ASM will not fully correspond to the start and end of the musical bars. This problem is addressed in [17], where the location of the first beat of the first bar is obtained for dance music songs played in 4/4: the songs are successively segmented into bars, covering each possible case of groups of eighth notes before the first bar, and an ASM is generated for each case in order to find the ASM with the most similar components.

In order to detect the anacrusis of the song, a method similar to [17] is implemented by adding a sliding offset to the origin of the ASM, the offset also being a multiple of the beat subdivision. Thus, an anacrusis of 2 beats corresponds to an offset of 2·64 = 128 frames, which shifts the origin from ASM(1,1) to ASM(129,129). The anacrusis range is equal to the Bar range minus one full beat; for a grouping of 3 beats, the maximum anacrusis value is therefore 2 beats. As an example, an anacrusis of 2 eighth notes is added to the example of Figure 2. The result of the detection is shown in Figure 5, where the most similar measure was obtained when the ASM was shifted by approximately 2 beats.

[Figure 5: Anacrusis detection example; best similarity measure (SM) against the delay in beats (anacrusis).]

2.4. Time Signature Estimation

Having obtained the number of beats B that provides the most similar measure SM over the entire beat and anacrusis range, the time signature is estimated. The numerator is obtained by rounding B to the nearest integer. The time signature is then assigned as follows: if the estimated number of beats is 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 or 12, the estimated time signature is detected as 2/2, 3/4, 4/4, 5/4, 6/8, 7/8, 8/8, 9/8, 10/8, 11/8 or 12/8 respectively. The 2/2 and 8/8 cases are then folded into 4/4 by doubling and halving the tempo respectively.

Since the tempo does not remain constant through an entire tune, B will rarely be an integer. Thus, in order to provide a more accurate average tempo, the following equation is applied:

$$newtempo = \frac{\mathrm{round}(B)}{B}\, tempo \qquad (6)$$

where tempo is the semi-automatically extracted tempo estimate.
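Finally, a sketch of the anacrusis search of Section 2.3 and the signature and tempo mapping of Section 2.4, reusing bar_sm from the previous sketch. The offset step size and the tempo-folding directions follow our reading of the text and are flagged as assumptions in the comments.

```python
import numpy as np

SIGNATURES = {2: "2/2", 3: "3/4", 4: "4/4", 5: "5/4", 6: "6/8", 7: "7/8",
              8: "8/8", 9: "9/8", 10: "10/8", 11: "11/8", 12: "12/8"}

def detect_anacrusis(asm, bar, frames_per_beat=64):
    """Slide the ASM origin (Section 2.3) and keep the best offset.

    The eighth-of-a-beat step is an assumption; the paper only says the
    offset is a multiple of the beat subdivision."""
    best_off, best_sm = 0, np.inf
    for off in range(0, bar - frames_per_beat + 1, frames_per_beat // 8):
        sm = bar_sm(asm[off:, off:], bar)     # crop: ASM(1,1) -> ASM(off+1, off+1)
        if sm < best_sm:
            best_off, best_sm = off, sm
    return best_off / frames_per_beat         # anacrusis in beats

def time_signature_and_tempo(B, tempo):
    """Map B to a signature and refine the tempo as in equation (6)."""
    sig = SIGNATURES[int(round(B))]
    new_tempo = round(B) / B * tempo           # equation (6)
    if sig == "2/2":
        sig, new_tempo = "4/4", new_tempo * 2  # half-note beat -> quarter note
    elif sig == "8/8":
        sig, new_tempo = "4/4", new_tempo / 2  # eighth-note beat -> quarter note
    return sig, new_tempo
```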
3. RESULTS

In order to evaluate the presented approach, a set of audio signals selected from commercial CD recordings is utilised. The songs are listed in Table 1, where a large variety of time signatures and genres is represented in the testbed. An excerpt of approximately 12 seconds was extracted from each song in order to obtain its time signature. In Table 1, BPM and ana denote the semi-automatically estimated tempo and the anacrusis respectively.

Num  Song                       Artist                  Time Sig  BPM  ana
 1   Eleven                     Primus                  11/8      230   0
 2   Windows To The Soul        Steve Vai               11/8      243   0
 3   Watermelon In Easter Hay   Frank Zappa             9/4        55   0
 4   Scatterbrain               Jeff Beck               9/8       250   0
 5   Take It To The Limit       The Eagles              3/4        90   0
 6   Doing It All For My Baby   Huey Lewis & The News   12/8      275   6
 7   Forces... Darling          Koop                    4/4-8/8   200   0
 8   Sliabh                     Danu                    6/8       190   2
 9   Money                      Pink Floyd              7/8       120   0
10   Whirl                      The Jesus Lizard        5/4       150   0

Table 1: Testbed content.

The results can be seen in Table 2, where newBPM, CTS and Cana denote the newly estimated tempo value, correct time signature detection and correct anacrusis detection respectively.

Num  ana  Time Sig  newBPM  CTS  Cana
 1    1   11/8      228     YES  NO
 2    1   11/8      242     YES  NO
 3    0   2/2       153     NO   YES
 4   10   11/8      248     NO   NO
 5    1   3/4        90     YES  NO
 6    9   12/8      276     YES  NO
 7    0   8/8       208     YES  YES
 8    2   6/8       200     YES  YES
 9    0   7/8       121     YES  YES
10    0   5/4       153     YES  YES

Table 2: Results.

In Figure 6, the similarity detection function of song 8, Sliabh, is depicted. The song consists of a solo pipe performance in which the tempo is not kept constant. This is apparent in Figure 6, where the most similar measure was obtained for a grouping of 5.6 beats. However, since the nearest integer is 6 beats, the time signature is correctly estimated.

[Figure 6: Beats/bar detection of Sliabh; similarity measure against beats/bar.]

Figure 7 depicts the similarity detection function of Eleven, which is played in the infrequent time signature 11/8. It can be seen that a very distinctive peak in the function arises at 11 beats.

[Figure 7: Beats/bar detection of Eleven; similarity measure against beats/bar.]

4. DISCUSSION AND FUTURE WORK

A system that detects the time signature of a piece of music has been presented, together with a method to detect the anacrusis of a song. The system depends only on musical structure, and not on the presence of percussive instruments, strong musical accents or a particular metrical structure.

The system can detect simple time signatures such as 4/4 as well as complex time signatures such as 11/8. The results show the robustness of the time signature detector for a variety of time signatures, where only songs 3 and 4 are detected incorrectly. It should be noted that the bar length of song 3 is longer than the maximum of 3.5 s allowed by the approach. However, by allowing a maximum bar length of 11 s and by increasing the length of the excerpt to 1 minute, the correct number of beats is detected. This can be seen in Figure 8, where a clear peak arises at the 9-beat location. By applying the estimation method described in Section 2.4, Figure 8's detection is reported as 9/8, since it is assumed that a bar of 9 beats is divided into eighth notes. However, a further classification based on the tempo could be incorporated to select the denominator of the time signature.

[Figure 8: Beats/bar detection of Watermelon in Easter Hay; similarity measure (SM) against beats/bar, peaking at 9.]

Only two of the excerpts were played with an anacrusis. The system anticipated the correct number of notes preceding the first barline in one of the two cases. However, an anacrusis of one beat was also detected in songs where there were no notes before the first bar. This can be due to deviations of the tempo within a song, which can generate musical bars of different lengths. Consequently, improving the accuracy of the anacrusis detection should be considered as further work.

The system assumes that there is no time signature change through the tune. A modification of the algorithm to adapt it to bar length deviations, tempo changes and time signature changes warrants future work.

5. ACKNOWLEDGEMENTS

This work is supported by the European Community under the Information Society Technologies (IST) programme of the 6th FP for RTD, project EASAIER, contract IST-033902. We would like to thank Dan Barry and David Dorran for the relevant discussions regarding the topic of this paper and for proof-reading it.

6. REFERENCES

[1] Bent, I. D. and Hughes, D. W., "Notation". Grove Music Online, 2006. http://www.grovemusic.com. Ed. L. Macy.

[2] Klapuri, A., Signal Processing Methods for the Automatic Transcription of Music. PhD thesis, 2004.

[3] Martin, K., Automatic Transcription of Simple Polyphonic Music: Robust Front End Processing. MIT Media Laboratory, 1996.

[4] Duxbury, C., et al., "Complex Domain Onset Detection for Musical Signals". In Proc. of the 6th Int. Conference on Digital Audio Effects (DAFx-03), London, UK, 2003.

[5] Gainza, M., Lawlor, B. and Coyle, E., "Onset Detection Using Comb Filters". In Proc. of the IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, 2005.

[6] Chai, W. and Vercoe, B., "Detection of Key Change in Classical Piano Music". In Proc. of ISMIR, London, 2005.

[7] Pauws, S., "Musical Key Extraction from Audio". In Proc. of the International Symposium on Music Information Retrieval, Barcelona, 2004.

[8] Scheirer, E., "Tempo and Beat Analysis of Acoustic Musical Signals". J. Acoust. Soc. Am., 103(1): 588-601, 1998.

[9] Davies, M. E. P. and Plumbley, M. D., "Causal Tempo Tracking of Audio". In Proc. of the Int. Conference on Music Information Retrieval, Barcelona, Spain, 2004.

[10] Brown, J. C., "Determination of the Meter of Musical Scores by Autocorrelation". Journal of the Acoustical Society of America, 94(4): 1953-1957, 1993.

[11] Gouyon, F. and Herrera, P., "Determination of the Meter of Musical Audio Signals: Seeking Recurrences in Beat Segment Descriptors". In Proc. of the AES 114th Convention, 2003.

[12] Pikrakis, A., Antonopoulos, I. and Theodoridis, S., "Music Meter and Tempo Tracking from Raw Polyphonic Audio". In Proc. of the 5th International Conference on Music Information Retrieval (ISMIR), 2004.

[13] Foote, J., "Visualizing Music and Audio Using Self-Similarity". In Proc. of ACM Multimedia, Orlando, 1999.

[14] Foote, J. and Uchihashi, S., "The Beat Spectrum: A New Approach to Rhythm Analysis". 2001.

[15] Dogantan, M., "Anacrusis". Grove Music Online, 2007. http://www.grovemusic.com. Ed. L. Macy.

[16] Amatriain, X., et al., "Spectral Processing", in DAFX: Digital Audio Effects, Chapter 10. John Wiley & Sons, 2002.

[17] O'Keeffe, K., Dancing Monkeys (Automated Creation of Step Files for Dance Dance Revolution). MEng thesis, 2003.