From: Proceedings of the Second International Conference on Multiagent Systems. Copyright 1996, AAAI (www.aaai.org). All rights reserved.

Beat Tracking based on Multiple-agent Architecture: A Real-time Beat Tracking System for Audio Signals

Masataka Goto and Yoichi Muraoka
School of Science and Engineering, Waseda University
3-4-1 Ohkubo, Shinjuku-ku, Tokyo 169, JAPAN
{goto, muraoka}@muraoka.info.waseda.ac.jp

Abstract

This paper presents an application of multiple-agent architecture to beat tracking for musical acoustic signals. Beat tracking is an important initial step in computer understanding of music and is useful in various multimedia applications. Most previous beat-tracking systems dealt with MIDI signals and were not based on a multiple-agent architecture. Our system can recognize, in real time, temporal positions of beats in real-world audio signals that contain sounds of various instruments. Our application of multiple-agent architecture enables the system to handle ambiguous situations in interpreting real-world input signals and to examine multiple hypotheses of beat positions in parallel. Even if some agents lose track of the beat, other agents will maintain the correct hypothesis. Each agent is able to interact with other agents to track beats cooperatively, to self-evaluate the reliability of its hypothesis on the basis of the current input situation, and to adapt to the current situation in order to maintain the correct hypothesis. These agents have been implemented on different processing elements in a parallel computer. Our experimental results show that the system is robust enough to handle audio signals sampled from commercially distributed compact discs of popular songs.

Introduction

Multiple-agent architectures have recently been applied in various domains. This paper describes our application of multiple-agent architecture to beat tracking for musical acoustic signals. In our formulation, beat tracking means tracking the temporal positions of quarter notes, just as people keep time to music by hand-clapping or foot-tapping. There are various ambiguous situations that occur when a system interprets real-world input audio signals like those sampled from compact discs. Multiple-agent architecture has the advantage of interpreting those signals and tracking beats in various ways, because different agents can examine multiple hypotheses of beat positions in parallel according to different strategies. The main contribution of this paper is to show that such a multiple-agent architecture is actually useful and effective for a practical real-world application, namely, beat tracking.

Beat tracking is an important initial step in computer emulation of human music understanding, since beats are fundamental to the perception of Western music. A person who cannot completely segregate and identify every sound component can nevertheless track musical beats. It is almost impossible to understand music without perceiving beats, since the beat is the basic unit of the temporal structure of music. Moreover, musical beat tracking is itself useful in various applications, such as video editing, audio editing, stage lighting control, and music-synchronized CG animation (Goto & Muraoka 1994). We therefore first build a computational model of beat perception and then extend the model, just as a person recognizes higher-level musical events on the basis of beats.
Various beat-tracking related systems have been undertaken in recent years (Dannenberg & Mont-Reynaud 1987; Desain & Honing 1989; Allen & Dannenberg 1990; Driesse 1991; Rosenthal 1992; Desain & Honing 1994; Vercoe 1994; Large 1995). Some previous systems (Allen & Dannenberg 1990; Rosenthal 1992) have maintained multiple hypotheses to track beats, and an earlier paper (Rosenthal, Goto, & Muraoka 1994) has presented the advantages of the strategy of pursuing multiple hypotheses. Most of the systems maintaining multiple hypotheses, however, were not based on a multiple-agent architecture. The one described in (Allen & Dannenberg 1990) examined two or three hypotheses by beam search and tracked beats in real time; it dealt only with MIDI signals as its input, however, and was not able to deal with audio signals played on several musical instruments. Another MIDI-based system (Rosenthal 1992) maintained a number of hypotheses that were periodically ranked and selected. Those hypotheses were examined sequentially, and the system did not work in real time.

We built a beat tracking system that processes real-world audio signals containing the sounds of various instruments and that recognizes the temporal positions of beats in real time. Our system is based on a multiple-agent architecture in which multiple hypotheses are maintained by programmatic agents using different strategies for beat tracking. Because the input signals are examined from the viewpoints of these various agents, various hypotheses can emerge. Agents that pay attention to different frequency ranges, for example, may track different beat positions. This multiple-agent architecture enables the system to cope with difficult beat-tracking situations: even if some agents lose track of beats, the system will track beats correctly as long as other agents maintain the correct hypothesis.

Each agent is capable of interaction, self-evaluation, and adaptation. In making a hypothesis, the agent interacts with other agents to track beats cooperatively. Each agent then evaluates the reliability of its own hypothesis on the basis of the current input situation, and the most reliable hypothesis is considered the final output.

If the reliability of a hypothesis becomes high enough, the agent tries to adapt to the current situation by adjusting a parameter that controls its strategy in order to maintain the correct hypothesis. To perform this computationally intensive task in real time, the system has been implemented on a parallel computer, the Fujitsu AP1000. Each agent and frequency-analysis module has been implemented on a different processing element. In our experiment with audio signals sampled from compact discs, the system correctly tracked beats in 34 out of 40 popular songs that did not include drum-sounds. This result shows that our beat-tracking model based on multiple-agent architecture is robust enough to handle real-world audio signals.

Multiple-agent Architecture for Beat Tracking

In this section we specify the beat tracking problem that we are dealing with and present the main difficulties of tracking beats: ambiguity of interpretation and the need for context-dependent decisions, difficulties which are common to other real-world perceptual problems. We then describe the multiple-agent architecture that addresses the beat tracking problem, defining our agents and outlining their interaction.

Beat Tracking Problem

In our formulation, beat tracking is defined as a process that organizes music into almost regularly spaced beats corresponding to quarter notes. Our beat tracking problem is thus to obtain an appropriate sequence of beat times (temporal positions of beats) that corresponds to input musical audio signals (Figure 1). This sequence of beat times is called the quarter-note level. We also address the higher-level beat tracking problem of determining whether a beat is strong or weak (beat type) under the assumption that the time-signature of an input song is 4/4. This is the problem of tracking beats at the half-note level. (Footnote 1: In this paper, a strong beat is either the first or third quarter note in a measure; a weak beat is the second or fourth.)

[Figure 1: Beat tracking problem — beat times at the quarter-note level and beat types (strong/weak) at the half-note level for musical acoustic signals.]

There are various difficulties in tracking the beats in real-world musical acoustic signals. The simple technique of peak-finding with a threshold is not sufficient, since there are many energy peaks that are not directly related to beats. Multiple interpretations of beats are possible at any given point because there is not necessarily a single specific sound that directly indicates the beat position; the beat is a perceptual concept that a human feels in music. There are various ambiguous situations, such as ones where several events obtained by frequency analysis may correspond to a beat and where different inter-beat intervals (the temporal difference between two successive beats) seem plausible. In addition, higher-level processing using musical knowledge is necessary for making context-dependent decisions, such as determining whether a beat is strong or weak and evaluating which is the best interpretation in an ambiguous situation.

Our solution to the problem of handling ambiguous situations is to maintain multiple hypotheses, each of which corresponds to a provisional or hypothetical interpretation of the input. A real-time system using only a single hypothesis is subject to garden-path errors. A multiple-hypothesis system can pursue several paths simultaneously and later decide which one was correct. In other words, in real-time beat tracking these hypotheses represent the results of predicting the next beat in different ways, and it is impossible to know in advance which one will be correct (because the future events are not available).

Multiple-agent Architecture

To examine multiple hypotheses in parallel, we use a multiple-agent architecture in which agents with different strategies interact through cooperation and competition to track beats (Figure 2). Several definitions of the term agent have been proposed (Minsky 1986; Maes 1990; Shoham 1993; CACM 1994; Nakatani, Okuno, & Kawabata 1994; ICMAS 1995). In our terminology, the term agent means a software component that satisfies the following three requirements:

1. the agent interacts with other agents to perform a given task.
2. the agent evaluates its own behavior on the basis of the input.
3. the agent adapts to the input by adjusting its own behavior.

[Figure 2: Multiple hypotheses maintained by multiple agents, each with its own predicted next-beat time.]

Each agent maintains a beat-position hypothesis, which consists of a predicted next-beat time, its beat type (strong or weak), and the current inter-beat interval.
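As a concrete illustration, such a hypothesis can be summarized in a small record. The sketch below is ours, not the authors' code; the field names are our own labels for the components just listed, plus the reliability that each agent attaches to its hypothesis (introduced in the next paragraphs).

```python
from dataclasses import dataclass

@dataclass
class BeatHypothesis:
    """One agent's beat-position hypothesis (illustrative field names)."""
    next_beat_time: int        # predicted next-beat time, in the system's time unit
    beat_type: str             # "strong" or "weak"
    inter_beat_interval: int   # current inter-beat interval
    reliability: float = 0.0   # self-evaluated reliability of this hypothesis
```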

"u.ntl-1 I I I lp"r h,bitt lime prediction How field Figure 3: Interaction between agents through a prediction field. A.._._t~i~ O ]~ Higher-level" compact disc ~;~ ~\,J ~,~ checkers audio, ignals - ~~---" " Manager Onset-time finders Agents Onset-time vectorizere...,, From: Proceedings of the Second International Conference on Multiagent Systems. Copyright 1996, AAAI (www.aaai.org). All rights reserved..,,,,,,,, or weak), and the current inter-beat interval. In making the hypothesis, the agent interacts with other agents to perform the beat-tracking task (the lirst requirement). All agents are grouped into pairs that have different strategies for beat tracking. Each agent in the pair examines the same interbeat interval using the same frequency-analysis results. To predict the next beat times cooperatively, one agent interacts with the other agent in the same pair through a prediction field. The prediction field is an expectancy curve- that represents when the next beat is expected to occur (Figure 3). The height of each local peak in the prediction tield can be interpreted as the next beat-position possibility. The two agents interact with each other by inhibiting the prediction field in the other agent. The beat time of each hypothesis inhibits the temporaily corresponding neighborhood in the other s field (Figure 3). This enables one agent to track the correct beats even if the other agent tracks the middle of the two successive correct beats (which compensates for one of the typical tracking errors). Each agent is able to evaluate its own hypothesis, using musical knowledge, according to the input acoustic signals (the second requirement). We call the quantitative result this self-evaluation the reliability of the hypothesis. The final beat-tracking result is determined on the basis of the most reliable hypothesis that is selected from the hypotheses of all agents. Each agent aiso adapts to the current input by adjusting its own strategy parameter (the third requirement). If the reliability of a hypothesis becomes high enough, the agent tunes a parameter to narrow the range of possible inter-beat intervals so that it examines only a neighborhood of the current appropriate one. This enables the agent to maintain the hypothesis that has the inter-beat intervai appropriate to the current input. System Description The 3 system for musical audio signals without drum-sounds assumes that the time-signature of an input song is 4/4 and that its tempo is constrained to be between 61 M.M. 2Oilier systems (Desain 1992; Desain & Honing 1994: Vercoe 1994) have used a similar concept of expectancy curve for pz~dieting future events, but not as a means for managing interaction among agents. 3A detailed description of our beat-tracking system for audio signals that include drura-sounds is presented in tgoto & Muraoka 1995a: 1995b).... "~ ~ --I~re(l~n~-----~~-~ ~._b~eeat Inf0rmdon~ ~. Nu uolwerslon ~.~ ~" ~ r S ~B..:. 14ke IMlyJll ~lleall :k r S Preolcll0fl ~.r.~...z..:.. ~ r ~ el41 lalllwpj//i ~" i Figure 4: Processing model. ~.~k;slcai ~;;di~o~i~nals-"i Time-slgnaturs: 414... ~... Tempo: 61-120 M.M. I ~ CorlversJon! l l~.t Fourier Tr.,form I Frequency Analysis - [Extracting onset colnponentsj i onset-time vectore i I Hl~her.level 6heckers Manager Beat -! BITransmlssion!... J.... i Prediction~ most reliable hypothesis (. Beat,n,orma~on "~i Be~t,iRma. Best type, Current tempo Figure 5: Overview of o~ beat tracking system. 
(M~ilzcl s Metronome: the number of quarter notes per minute) and 120 M.M., and is roughly constant. The emphasis in our system is on 1inding the temporal positions of quarter notes in audio signals rather than on tracking tempo changes. The system maintains, as the real-time output, a description cailed beat information (BI) that consists of the beat time, its beat type, and the current tempo. Figure 4 is a sketch of the processing model or our beat tracking system, and Figure 5 shows an overview of the system. The two main stages of processing are Frequem yanalysis, in which several cues used by agents are detected, and Beat Prediction, in which multiple hypotheses of beat positions are examined by multiple agents. Since accurate onset Goto lo5
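To make the output format concrete, here is a minimal sketch (ours; only the three BI fields come from the text above) of a beat information record, together with the relation between a tempo in M.M. and the inter-beat interval implied by the 61-120 M.M. constraint.

```python
from dataclasses import dataclass

@dataclass
class BeatInformation:
    """Real-time output (BI): the three fields named in the text."""
    beat_time: float  # temporal position of the beat, in seconds
    beat_type: str    # "strong" or "weak"
    tempo: float      # current tempo in M.M. (quarter notes per minute)

def inter_beat_interval_seconds(tempo_mm: float) -> float:
    # M.M. counts quarter notes per minute, so successive beats are 60/tempo apart.
    return 60.0 / tempo_mm

# The 61-120 M.M. constraint bounds the inter-beat interval:
assert inter_beat_interval_seconds(120) == 0.5             # fastest tempo: 0.5 s
assert round(inter_beat_interval_seconds(61), 3) == 0.984  # slowest tempo: ~0.98 s
```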

Since accurate onset times are indispensable for tracking beats, in the Frequency Analysis stage the system uses multiple onset-time finders that detect onset times in several different frequency ranges. Those results are transformed into a vectorial representation (called onset-time vectors) by several onset-time vectorizers. In the Beat Prediction stage, the system manages multiple agents that, according to different strategies, make parallel hypotheses based on these onset-time vectors. Each agent first calculates the inter-beat interval and predicts the next beat time; it then infers the beat type by communicating with a higher-level checker (described later), and evaluates the reliability of its own hypothesis. The manager gathers all hypotheses and then determines the final output on the basis of the most reliable one. Finally, the system transmits BI to other application programs via a computer network.

The following describe the main stages of Frequency Analysis and Beat Prediction.

Frequency Analysis

In the Frequency Analysis stage, the frequency spectrum and several sequences of n-dimensional onset-time vectors are obtained for later processing (Figure 6). The full frequency band is split into several frequency ranges, and each dimension of the onset-time vectors corresponds to a different frequency range. This representation makes it possible to consider onset times of all the frequency ranges at the same time. Each sequence of onset-time vectors is obtained using a different set of weights for frequency ranges. One sequence, for example, focuses on middle frequency ranges, and another sequence focuses on low frequency ranges.

[Figure 6: An example of frequency spectrum and an onset-time vector sequence.]

Fast Fourier Transform (FFT). The frequency spectrum (the power spectrum) is calculated with the FFT using the Hanning window. Each time the FFT is applied to the digitized audio signal, the window is shifted to the next frame. In our current implementation, the input signal is digitized at 16 bit / 22.05 kHz, and two kinds of FFT are calculated. One FFT, for extracting onset components in the Frequency Analysis stage, is calculated with a window size of 1024 samples (46.44 ms), and the window is shifted by 256 samples (11.61 ms). The frequency resolution is consequently 21.53 Hz and the time resolution (1 frame-time) is 11.61 ms. The frame-time is the unit of time used in our system, and the term time in this paper is defined as the time measured in units of the frame-time. The other FFT, for examining chord changes in the Beat Prediction stage, is simultaneously calculated on audio down-sampled to 16 bit / 11.025 kHz with a window size of 1024 samples (92.88 ms), and the window is shifted by 128 samples (11.61 ms). The frequency and time resolutions are consequently 10.77 Hz and 1 frame-time.

Extracting Onset Components. Frequency components whose power has been rapidly increasing are extracted as onset components. The onset components and their degree of onset (rapidity of increase in power) are obtained from the frequency spectrum by a process that takes into account the power present in nearby time-frequency regions. More details on the method of extracting onset components can be found in (Goto & Muraoka 1995a).

Onset-time Finders. Multiple onset-time finders (seven in our current implementation) detect onset times in several different frequency ranges (0-125 Hz, 125-250 Hz, 250-500 Hz, 500 Hz-1 kHz, 1-2 kHz, 2-6 kHz, and 6-11 kHz). Each onset time is given by the peak time found by peak-picking in D(t) along the time axis, where

D(t) = \sum_f d(t, f),

and d(t, f) is the degree of onset of frequency f at time t. Limiting the range of frequency for the summation of D(t) makes it possible to find onset times in the different frequency ranges.
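The exact onset-component extraction is given in (Goto & Muraoka 1995a); the sketch below (ours) only illustrates the shape of the computation just described: a degree of onset d(t, f) measuring rapid power increase, summed over one frequency range to give D(t), which is then peak-picked along the time axis. The plain positive first difference used for d(t, f) is our stand-in for the paper's region-aware extraction process.

```python
import numpy as np

def degree_of_onset(power: np.ndarray) -> np.ndarray:
    """d(t, f): rapidity of power increase at each time-frequency bin.

    `power` is a (time, frequency) spectrogram.  The paper's process also
    considers power in nearby time-frequency regions; a simple positive
    first difference is used here as an assumption.
    """
    d = np.diff(power, axis=0, prepend=power[:1])
    return np.maximum(d, 0.0)

def onset_times(power: np.ndarray, f_lo: int, f_hi: int) -> list[int]:
    """Peak-pick D(t) = sum_f d(t, f) over one frequency range [f_lo, f_hi)."""
    D = degree_of_onset(power)[:, f_lo:f_hi].sum(axis=1)
    peaks = []
    for t in range(1, len(D) - 1):
        if D[t] > D[t - 1] and D[t] >= D[t + 1] and D[t] > 0.0:
            peaks.append(t)  # each local-maximum time is taken as an onset time
    return peaks
```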
Onset-time Vectorizers. Each onset-time vectorizer transforms the results of all onset-time finders into sequences of onset-time vectors: the same onset times in all the frequency ranges are put together into a vector. In the current system, three vectorizers transform onset times from the seven finders into three sequences of seven-dimensional onset-time vectors with different sets of frequency weights (focusing on all/low/middle frequency ranges). These results are sent to the agents in the Beat Prediction stage.

Beat Prediction

Multiple agents interpret the sequences of onset-time vectors according to different strategies and maintain their own hypotheses. Musical knowledge is necessary to determine the beat type (strong or weak) and to evaluate which hypothesis is best. For audio signals without drum-sounds, the system utilizes the following musical knowledge:

1. Sounds are likely to occur on beats. In other words, the correct beat times tend to coincide with onset times.
2. Chords are more likely to change at the beginning of measures than at other positions.
3. Chords are more likely to change on beats (quarter-notes) than at other positions between two successive correct beats.

To utilize the second and third kinds of knowledge, each agent communicates with a corresponding higher-level checker, which is a module that provides higher-level information, such as the results of examining the possibility of chord changes according to the current hypothesis (Figure 7). The agent utilizes this information to determine the beat type and to evaluate the reliability of the hypothesis.

[Figure 7: Relations between onset-time vectorizers, agents, and higher-level checkers.]

Each agent has four parameters that determine its strategy for making the hypothesis (Figure 7), and the settings of these parameters vary from agent to agent. The first parameter, frequency focus type, determines which vectorizer the agent receives onset-time vectors from. This value is chosen from among type-all, type-low, and type-middle, respectively corresponding to the vectorizers focusing on all frequency ranges, low frequency ranges, and middle frequency ranges. The second parameter, autocorrelation period, determines the window size for calculating the vectorial autocorrelation of the sequence of onset-time vectors, which is used to determine the inter-beat interval; the greater this value, the older the onset-time information considered. The third parameter, inter-beat interval range, controls the range of possible inter-beat intervals. As described later, this limits the range for selecting a peak in the result of the vectorial autocorrelation. The fourth parameter, initial peak selection, takes a value of either primary or secondary. When the value is primary, the largest peak in the prediction field is initially selected and considered as the next beat time; when the value is secondary, the second largest peak is selected. This helps to obtain a variety of hypotheses.

In our current implementation there are twelve agents grouped into six agent-pairs, and twelve higher-level checkers corresponding to these agents. Initial settings of the strategy parameters are listed in Table 1. As explained in Section Multiple-agent Architecture, the parameter inter-beat interval range is adjusted as the processing goes on.

Table 1: Initial settings of the strategy parameters. ("f.t." is the abbreviation of frame-time, 11.61 ms.)

pair-agent   frequency focus type   autocorrelation period   inter-beat interval range   initial peak selection
1-1          type-all               500 f.t.                 43-85 f.t.                  primary
1-2          type-all               500 f.t.                 43-85 f.t.                  secondary
2-1          type-all               1000 f.t.                43-85 f.t.                  primary
2-2          type-all               1000 f.t.                43-85 f.t.                  secondary
3-1          type-low               500 f.t.                 43-85 f.t.                  primary
3-2          type-low               500 f.t.                 43-85 f.t.                  secondary
4-1          type-low               1000 f.t.                43-85 f.t.                  primary
4-2          type-low               1000 f.t.                43-85 f.t.                  secondary
5-1          type-middle            500 f.t.                 43-85 f.t.                  primary
5-2          type-middle            500 f.t.                 43-85 f.t.                  secondary
6-1          type-middle            1000 f.t.                43-85 f.t.                  primary
6-2          type-middle            1000 f.t.                43-85 f.t.                  secondary

The following sections describe the formation and management of hypotheses.
First, each agent determines the inter-beat interval using autocorrelation; it then interacts with its paired agent through the prediction field, which is formed using cross-correlation, and predicts the next beat time. Second, the agent communicates with the higher-level checker to infer the beat type and evaluates its own reliability. The checker examines possibilities of chord changes by analyzing the frequency spectrum on the basis of the current hypothesis received from the agent. Finally, the manager gathers all the hypotheses, and the most reliable one is considered as the output.

Beat-predicting Agents

In our formulation, beats are characterized by two properties: period (inter-beat interval) and phase. The phase of a beat is the beat position relative to a reference point, usually the previous beat time. We measure phase in radians; for a quarter-note beat, for example, an eighth-note displacement corresponds to a phase-shift of \pi radians.

Each agent first determines the current inter-beat interval (period) (Figure 8). The agent receives the sequence of onset-time vectors and calculates their vectorial autocorrelation. (Footnote 4: The paper (Vercoe 1994) also proposed using a variant of autocorrelation for rhythmic analysis.) The windowed and normalized vectorial autocorrelation function Ac(\tau) is defined as

Ac(\tau) = \frac{\sum_{t=c-W}^{c} (\vec{o}(t) \cdot \vec{o}(t-\tau)) \, w(c-t)}{\sum_{t=c-W}^{c} (\vec{o}(t) \cdot \vec{o}(t)) \, w(c-t)},   (1)

where \vec{o}(t) is the n-dimensional onset-time vector at time t, c is the current time, and W is the strategy parameter autocorrelation period. The window function w(t) is given by

w(t) = 1.0 - 0.5 \, t / W.   (2)

The inter-beat interval is given by the \tau with the maximum height in Ac(\tau) within the range limited by the parameter inter-beat interval range.
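A direct transcription of Equations (1) and (2) might look like the following sketch (our code, not the authors'). The comments also note how the 43-85 frame-time search range of Table 1 follows from the 61-120 M.M. tempo constraint and the 11.61 ms frame-time.

```python
import numpy as np

def w(t: np.ndarray, W: int) -> np.ndarray:
    """Window function of Equation (2): w(t) = 1.0 - 0.5 * t / W."""
    return 1.0 - 0.5 * t / W

def Ac(o: np.ndarray, c: int, W: int, tau: int) -> float:
    """Windowed, normalized vectorial autocorrelation of Equation (1).

    `o[t]` is the n-dimensional onset-time vector at frame-time t; `c` is the
    current time and `W` the autocorrelation period.  Assumes c - W - tau >= 0.
    """
    ts = np.arange(c - W, c + 1)
    weights = w(c - ts, W)
    num = sum(wt * float(np.dot(o[t], o[t - tau])) for t, wt in zip(ts, weights))
    den = sum(wt * float(np.dot(o[t], o[t])) for t, wt in zip(ts, weights))
    return num / den if den > 0.0 else 0.0

def inter_beat_interval(o: np.ndarray, c: int, W: int,
                        lo: int = 43, hi: int = 85) -> int:
    # 43-85 frame-times is Table 1's range: with a frame-time of 11.61 ms,
    # 60/120 s (120 M.M.) is about 43 f.t. and 60/61 s (61 M.M.) about 85 f.t.
    return max(range(lo, hi + 1), key=lambda tau: Ac(o, c, W, tau))
```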

To determine the beat phase, the agent then forms the prediction field (Figure 8). The prediction field is the result of calculating the cross-correlation function between the sequence of onset-time vectors and a sequence of beat times whose interval is the inter-beat interval. As mentioned in Section Multiple-agent Architecture, the two agents in the same pair interact with each other by inhibiting the prediction field in the other agent. Each local peak in the prediction field is considered as a possible beat phase. When the reliability of a hypothesis is low, the agent initially selects the peak in the prediction field according to the parameter initial peak selection, and it then tries to pursue the peak equivalent to the previously selected one. This calculation corresponds to evaluating all possibilities of the beat phase under the current inter-beat interval. The next beat time is thus predicted on the basis of the inter-beat interval and the current beat phase.

[Figure 8: Predicting the next beat — the inter-beat interval (by autocorrelation), the prediction field (by cross-correlation), the quarter-note and eighth-note chord change possibilities, the alternation of strong and weak beats, and eighth-note displacement positions.]

The agent receives two kinds of possibilities of chord changes, at the quarter-note level and at the eighth-note level, by communicating with the higher-level checker. We call the former the quarter-note chord change possibility and the latter the eighth-note chord change possibility. The quarter-note (eighth-note) chord change possibility represents how likely a chord is to change at each quarter-note (eighth-note) position under the current hypothesis. To infer the beat type, we use the second kind of musical knowledge, which means that the quarter-note chord change possibility is higher on a strong beat than on a weak beat. If the quarter-note chord change possibility is high enough, its time is considered to indicate the position of a strong beat. The following beat types are then determined under the assumption that strong and weak beats alternate (Figure 8).

The agent finally evaluates the reliability of its own hypothesis by using the first and third kinds of musical knowledge. According to the first kind, the reliability is determined by how well the next beat time predicted on the basis of the onset times coincides with the time extrapolated from the past two beat times (Figure 8). If they coincide, the reliability is increased; otherwise, the reliability is decreased. According to the third kind of knowledge, if the eighth-note chord change possibility is higher on beats than at eighth-note displacement positions, the reliability is increased; otherwise, the reliability is decreased.
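The following sketch (ours, with simplifying assumptions) illustrates the pair interaction just described: a prediction field formed by correlating recent onsets against a comb of beat times one inter-beat interval apart, inhibition of the neighborhood of the paired agent's beat phase, and the primary/secondary initial peak selection. A scalar onset strength stands in for the vectorial cross-correlation, and the inhibition width is an assumed value.

```python
import numpy as np

def prediction_field(onset_strength: np.ndarray, interval: int) -> np.ndarray:
    """Score each candidate beat phase 0..interval-1 by summing onset strength
    at a comb of times spaced one inter-beat interval apart (a scalar
    simplification of the vectorial cross-correlation)."""
    field = np.zeros(interval)
    for phase in range(interval):
        field[phase] = onset_strength[phase::interval].sum()
    return field

def inhibit(field: np.ndarray, other_phase: int, width: int = 3) -> np.ndarray:
    """Suppress the neighborhood of the paired agent's beat phase, pushing the
    two agents in a pair toward different hypotheses (e.g., keeping one off
    the midpoint between the other's correct beats)."""
    out = field.copy()
    n = len(out)
    for k in range(-width, width + 1):
        out[(other_phase + k) % n] = 0.0
    return out

def select_phase(field: np.ndarray, selection: str) -> int:
    """Initial peak selection: "primary" takes the largest peak as the next
    beat phase, "secondary" the second largest."""
    order = np.argsort(field)[::-1]
    return int(order[0] if selection == "primary" else order[1])
```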
Higher-level Checkers

For audio signals without drum-sounds, each higher-level checker examines the two kinds of chord change possibilities according to the hypotheses received from the corresponding agent. The checker first slices the frequency spectrum into strips at the quarter-note times (beat times) for examining the quarter-note chord change possibility, and slices it at the eighth-note times interpolated from beat times for examining the eighth-note chord change possibility (Figure 9). The checker then finds peaks along the frequency axis in a histogram summed up along the time axis in each strip. These peaks can be considered the dominant tones' pitches in each strip; some peaks may be components of a chord, and others may be components of a melody. Our current implementation considers only peaks whose frequency is less than 1 kHz. The checker evaluates the chord change possibilities by comparing these peaks between adjacent strips: the more peaks occur, and the louder they are, compared with the previous strip, the higher the chord change possibilities. For the quarter-note (eighth-note) chord change possibility, the checker compares the strips whose period corresponds to the quarter-note (eighth-note) duration under the current hypothesis.

Figure 9 shows examples of the two kinds of chord change possibilities. The horizontal lines above represent peaks in each strip's histogram; the thick vertical lines below represent the chord change possibility. The beginning of a measure comes at every four quarter-notes from the extreme left in (a), and the beat comes at every two eighth-notes from the extreme left in (b).

[Figure 9: Examples of peaks in the sliced frequency spectrum and chord change possibility: (a) examining the quarter-note chord change possibility; (b) examining the eighth-note chord change possibility.]
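A rough sketch of the strip comparison, under our own assumptions (the peak-picking rule and the "new peak energy" measure are not the paper's; only the slicing, per-strip histogram, sub-1-kHz limit, and adjacent-strip comparison come from the text):

```python
import numpy as np

def chord_change_possibility(power: np.ndarray, slice_times: list[int],
                             max_bin: int) -> list[float]:
    """For each pair of adjacent strips (sliced at beat or interpolated
    eighth-note times), compare dominant spectral peaks; more and louder
    peaks that were absent from the previous strip mean a higher chord
    change possibility.  `power` is a (time, frequency) spectrogram and
    `max_bin` caps the frequency at roughly 1 kHz."""
    def peaks(hist: np.ndarray) -> dict[int, float]:
        return {f: hist[f] for f in range(1, len(hist) - 1)
                if hist[f] > hist[f - 1] and hist[f] >= hist[f + 1]}

    # Histogram of each strip: sum the spectrum along the time axis.
    hists = [power[a:b, :max_bin].sum(axis=0)
             for a, b in zip(slice_times[:-1], slice_times[1:])]
    poss = []
    for prev, cur in zip(hists[:-1], hists[1:]):
        p_prev, p_cur = peaks(prev), peaks(cur)
        new_energy = sum(v for f, v in p_cur.items() if f not in p_prev)
        poss.append(new_energy / (cur.sum() + 1e-9))
    return poss
```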

Hypotheses Manager

The manager classifies all agent-generated hypotheses into groups according to beat time and inter-beat interval. Each group has an overall reliability given by the sum of the reliabilities of the group's hypotheses. The manager then selects the dominant group that has the highest reliability. Since a wrong group could be selected if temporarily unstable beat times split the appropriate dominant group, the manager repeats grouping and selecting three times while narrowing the allowable margin of beat times for becoming the same group. The reliable hypothesis in the most dominant group is thus selected as the output and sent to the BI Transmission stage.

The manager updates the beat type in the output using only the beat type that was labeled when the quarter-note chord change possibility was high compared with the recent maximum possibility. When the possibility was not high enough, the updated beat type is determined from the previous reliable beat type based on the alternation of strong and weak beats. This enables the system to disregard an incorrect beat type that is caused by a local irregularity of chord changes.
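A minimal sketch of the grouping-and-selection loop (our reconstruction; the margin values and the bucketing used as the grouping key are assumptions, and each hypothesis is a dict for brevity):

```python
from collections import defaultdict

def select_output(hypotheses: list[dict], margins=(8, 4, 2)) -> dict:
    """Repeat grouping/selection three times while narrowing the allowable
    beat-time margin (margins in frame-times are illustrative values).

    Each hypothesis has 'next_beat_time', 'inter_beat_interval', and
    'reliability' keys.
    """
    best = None
    for margin in margins:
        groups = defaultdict(list)
        for h in hypotheses:
            # Hypotheses whose beat times fall in the same margin-sized bucket
            # (and share an interval bucket) join the same group.
            key = (h["next_beat_time"] // margin,
                   h["inter_beat_interval"] // margin)
            groups[key].append(h)
        # The dominant group maximizes the sum of its members' reliabilities.
        dominant = max(groups.values(),
                       key=lambda g: sum(h["reliability"] for h in g))
        best = max(dominant, key=lambda h: h["reliability"])
    return best  # most reliable hypothesis of the final dominant group
```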
Implementation on a Parallel Computer

Parallel processing provides a practical and feasible solution to the problem of performing a computationally intensive task, such as processing and understanding complex audio signals, in real time. Our system has been implemented on a distributed-memory parallel computer, the Fujitsu AP1000, which consists of 64 processing elements (Ishihata et al. 1991). A different element or group of elements is assigned to each module, such as the FFT, the onset-time finder, the onset-time vectorizer, the agent, the higher-level checker, and the manager. These modules run concurrently and communicate with others by passing messages between processing elements.

We use four kinds of parallelizing techniques in order to execute the heterogeneous processes simultaneously (Goto & Muraoka 1996). The processes are first pipelined, and then each stage of the pipeline is implemented with data/control parallel processing, pipeline processing, and distributed cooperative processing. This implementation makes it possible to analyze audio signals in various ways and to manage multiple agents in real time.

Experiments and Results

We tested the system for audio without drum-sounds on 40 songs performed by 28 artists. The initial one or two minutes of those songs were used as the inputs. The inputs were monaural audio signals sampled from commercial compact discs of the popular music genre. Their tempi ranged from 62 M.M. to 116 M.M. and were roughly constant. It is usually more difficult to track beats in songs without drum-sounds than in songs with drum-sounds, because they tend to have fewer sounds that fall on the beat, and musical knowledge is difficult to apply in general. In our experiment, the system correctly tracked beats (i.e., obtained the beat time and type) in 34 out of the 40 songs in real time. (Footnote 5: Our other beat-tracking system for audio signals that include drum-sounds, which is based on a similar multiple-agent architecture, correctly tracked beats in 42 out of the 44 songs that included drum-sounds (Goto & Muraoka 1995b).)

In each song where the beat was eventually determined correctly, the system initially had trouble determining the beat type, even though the beat time was correct. Within at most fifteen measures of the beginning of the song, however, both the beat time and type had been determined correctly. In most of the mistaken songs, beat times were not obtained correctly since onset times were very few or the tempo fluctuated temporarily. In the other songs, the beat type was not determined correctly because of irregularity of chord changes.

These results show that the system is robust enough to deal with real-world musical signals. We have also developed an application with the system that displays a computer graphics dancer whose motion changes with musical beats in real time (Goto & Muraoka 1994). This application has shown that our system is also useful in multimedia applications in which human-like hearing ability is desirable.

Discussion

Our goal is to build a system that can understand musical audio signals in a human-like fashion. We believe that an important initial step is to build a system which, even in its preliminary implementation, can deal with real-world acoustic signals like those sampled from compact discs. Most previous beat tracking systems had great difficulty working in real-world acoustic environments, however. Most of these systems (Dannenberg & Mont-Reynaud 1987; Desain & Honing 1989; Allen & Dannenberg 1990; Driesse 1991; Rosenthal 1992) have dealt with MIDI signals as their input. Since it is quite difficult to obtain complete MIDI representations from audio data, MIDI-based systems are limited in their application. Although some systems (Schloss 1985; Katayose et al. 1989) dealt with audio signals, they had difficulty processing music played on ensembles containing a variety of instruments and did not work in real time.

Our strategy of first building a real-time system that works in real-world complex environments and then upgrading the ability of the system is related to the scaling-up problem (Kitano 1993) in the domain of artificial intelligence (Figure 10).

[Figure 10: Scaling-up problem (Kitano 1993) — task complexity and scalability versus domain size (closeness to the real world); systems that pay off are useful systems close to the real world.]

As Hiroaki Kitano stated:

experiences in expert systems, machine translation systems, and other knowledge-based systems indicate that scaling up is extremely difficult for many of the prototypes. (Kitano 1993)

In other words, it is hard to scale up a system whose preliminary implementation works only in laboratory environments. We think that our strategy addresses this issue and that the application of multiple-agent architecture makes the system robust enough to work in real-world environments.

Some researchers might regard as agents several modules in our system, such as the onset-time finders, the onset-time vectorizers, the higher-level checkers, and the manager. In our terminology, however, we define the term agent as a software component of distributed artificial intelligence that satisfies the three requirements presented in Section Multiple-agent Architecture. We therefore do not consider those modules agents: they are simply concurrent objects.

Conclusion

We have presented a multiple-agent architecture for beat tracking and have described the configuration and implementation of our real-time beat tracking system. Our system tracks beats in audio signals containing sounds of various instruments and reports beat information in time to input music. The experimental results show that the system is robust enough to handle real-world audio signals sampled from compact discs of popular music.

The system manages multiple agents that track beats according to different strategies in order to examine multiple hypotheses in parallel. This enables the system to follow beats without losing track of them, even if some hypotheses become wrong. Each agent can interact with other agents to track beats cooperatively and can evaluate its own hypothesis according to musical knowledge. Each agent can also adapt to the current input by adjusting its own strategy. These abilities make it possible for the system to handle ambiguous situations by maintaining various hypotheses, and they make the system robust and stable. We plan to upgrade the system to make use of other higher-level musical structure, and to generalize it to other musical genres.

Future work will include application of the multiple-agent architecture to other perceptual problems, and will also include a study of more sophisticated interaction among agents and a more dynamic multiple-agent architecture in which the total number of agents is not fixed.

Acknowledgments

We thank David Rosenthal for his helpful comments on earlier drafts of this paper. We also thank Fujitsu Laboratories Ltd. for use of the AP1000.

References

Allen, P. E., and Dannenberg, R. B. 1990. Tracking musical beats in real time. In Proc. of the 1990 Intl. Computer Music Conf., 140-143.

CACM. 1994. Special issue on intelligent agents. Communications of the ACM 37(7):18-147.

Dannenberg, R. B., and Mont-Reynaud, B. 1987. Following an improvisation in real time. In Proc. of the 1987 Intl. Computer Music Conf., 241-248.

Desain, P., and Honing, H. 1989. The quantization of musical time: A connectionist approach. Computer Music Journal 13(3):56-66.

Desain, P. 1992. Can computer music benefit from cognitive models of rhythm perception? In Proc. of the 1992 Intl. Computer Music Conf., 42-45.

Desain, P., and Honing, H. 1994. Advanced issues in beat induction modeling: syncopation, tempo and timing. In Proc. of the 1994 Intl. Computer Music Conf., 92-94.

Driesse, A. 1991. Real-time tempo tracking using rules to analyze rhythmic qualities. In Proc. of the 1991 Intl. Computer Music Conf., 578-581.

Goto, M., and Muraoka, Y. 1994. A beat tracking system for acoustic signals of music. In Proc. of the Second ACM Intl. Conf. on Multimedia, 365-372.

Goto, M., and Muraoka, Y. 1995a. Music understanding at the beat level: real-time beat tracking for audio signals. In Working Notes of the IJCAI-95 Workshop on Computational Auditory Scene Analysis, 68-75.

Goto, M., and Muraoka, Y. 1995b. A real-time beat tracking system for audio signals. In Proc. of the 1995 Intl. Computer Music Conf., 171-174.

Goto, M., and Muraoka, Y. 1996. Parallel implementation of a beat tracking system: real-time musical information processing on the AP1000 (in Japanese). Transactions of the Information Processing Society of Japan 37(7):1460-1468.

ICMAS. 1995. Proc. of the First Intl. Conf. on Multi-Agent Systems. The AAAI Press / The MIT Press.

Ishihata, H.; Horie, T.; Inano, S.; Shimizu, T.; and Kato, S. 1991. An architecture of highly parallel computer AP1000. In IEEE Pacific Rim Conf. on Communications, Computers and Signal Processing, 13-16.

Katayose, H.; Kato, H.; Imai, M.; and Inokuchi, S. 1989. An approach to an artificial music expert. In Proc. of the 1989 Intl. Computer Music Conf., 139-146.

Kitano, H. 1993. Challenges of massive parallelism. In Proc. of IJCAI-93, 813-834.

Large, E. W. 1995. Beat tracking with a nonlinear oscillator. In Working Notes of the IJCAI-95 Workshop on Artificial Intelligence and Music, 24-31.

Maes, P., ed. 1990. Designing Autonomous Agents: Theory and Practice from Biology to Engineering and Back. The MIT Press.

Minsky, M. 1986. The Society of Mind. Simon & Schuster, Inc.

Nakatani, T.; Okuno, H. G.; and Kawabata, T. 1994. Auditory stream segregation in auditory scene analysis. In Proc. of AAAI-94, 100-107.

Rosenthal, D.; Goto, M.; and Muraoka, Y. 1994. Rhythm tracking using multiple hypotheses. In Proc. of the 1994 Intl. Computer Music Conf., 85-87.

Rosenthal, D. 1992. Machine Rhythm: Computer Emulation of Human Rhythm Perception. Ph.D. Dissertation, Massachusetts Institute of Technology.

Schloss, W. A. 1985. On the Automatic Transcription of Percussive Music: From Acoustic Signal to High-Level Analysis. Ph.D. Dissertation, CCRMA, Stanford University.

Shoham, Y. 1993. Agent-oriented programming. Artificial Intelligence 60(1):51-92.

Vercoe, B. 1994. Perceptually-based music pattern recognition and response. In Proc. of the Third Intl. Conf. for the Perception and Cognition of Music, 59-60.