ON THE USE OF PERCEPTUAL PROPERTIES FOR MELODY ESTIMATION

Proc. of the 14th Int. Conference on Digital Audio Effects (DAFx-11), Paris, France, September 19-23, 2011

Wei-Hsiang Liao and Alvin W. Y. Su
Dep. of Computer Science and Information Engineering
National Cheng-Kung University, Tainan, Taiwan
whsng.liao@gmail.com, alvinsu@mail.ncku.edu.tw

Chunghsin Yeh and Axel Roebel
Analysis/Synthesis team
IRCAM/CNRS-STMS, Paris, France
cyeh@ircam.fr, roebel@ircam.fr

ABSTRACT

This paper is about the use of perceptual principles for melody estimation. The melody stream is understood as being generated by the most dominant source. Since the source with the strongest energy may not be perceptually the most dominant one, it is proposed to study the following perceptual properties for melody estimation: loudness, masking effect and timbre similarity. The related criteria are integrated into a melody estimation system and their respective contributions are evaluated. The effectiveness of these perceptual criteria is confirmed by evaluation results on more than one hundred excerpts of music recordings.

1. INTRODUCTION

Auditory scene analysis of music signals has been an active research topic in recent years, as encouraging results continue to open up applications in the fields of digital audio effects (DAFx) and music information retrieval (MIR) [1]. Among the harmonic sources present in a music scene, the melody source usually forms the perceptually and musically most dominant stream [2] [3] [4]. The problem of melody estimation is difficult because it requires not only low-level information about the sound signal but also high-level information about the perception of music. In this article, we define the melody estimation problem as the estimation of the fundamental frequency (F0) of the most dominant source stream. Since the source with the strongest energy may not be perceptually the most dominant one, our study makes use of perceptual properties and evaluates their effectiveness.

In addition to the perceptual grouping cues of harmonic sounds in auditory scene analysis [5], many of the existing methods for melody estimation further make use of other perceptual properties such as loudness [6] [7], masking [8], timbre similarity [6] [9] [10] and auditory filters [3] [11] [12]. In the evaluation results of the MIREX (Music Information Retrieval Evaluation exchange) campaign for the Audio Melody Extraction task, the systems that make use of such perceptual properties show certain advantages in performance. In fact, the perceptually motivated system proposed by Dressler [13, 14, 9, 15] consistently ranks at the top [16]. Although important details of the perceptual criteria are missing from her descriptions, it is nevertheless reasonable to assume that the key problem of melody estimation is related to perceptual criteria.

In this study, we propose to evaluate the following perceptual criteria within the proposed melody estimation system: loudness, masking, and timbre similarity. Auditory filters and other multi-resolution analysis methods are not explored here, because we believe that the melody source stream is usually significantly present in the mid-frequency range, so a fixed-resolution STFT (short-time Fourier transform) analysis is sufficient. The proposed system consists mainly of two parts: candidate selection and tracking.
As the salience of an F0 candidate is derived from the dominant peaks that are harmonically matched, we propose to compare perceptually motivated criteria with low-level signal features for dominant peak selection. Similarly, candidate scoring based on perceptual criteria is evaluated to reveal how a correct candidate can be favored over the others. Based on the algorithm previously proposed in [17], a tracking algorithm dedicated to melody estimation is developed to determine the coherent source stream with an optimal trade-off among candidate score, smoothness of the frequency trajectory and spectral envelope similarity.

The paper is organized as follows. In Section 2, we present the methods for dominant peak selection and candidate scoring. In Section 3, the components of the tracking system are detailed. In Section 4, the effectiveness of the perceptual criteria is evaluated and the performance of the proposed system is compared to state-of-the-art systems. Finally, conclusions are drawn and future work is proposed.
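To make the two-stage structure concrete before the details, the following is a deliberately minimal Python sketch of such a pipeline under simplifying assumptions: the per-frame stage keeps only the strongest spectral peak as its single candidate, and the "tracking" stage simply reads that candidate out. The real selection, scoring and tracking rules are the subject of Sections 2 and 3; nothing here is the authors' implementation.

import numpy as np

def melody_pipeline_sketch(x, sr, frame_size=2048, hop=256):
    """Two-stage outline: per-frame candidate extraction, then tracking.
    Placeholder logic only: one candidate per frame (strongest STFT peak),
    trivial 'tracking' that keeps it. Sections 2 and 3 replace both stages."""
    window = np.hanning(frame_size)
    freqs = np.fft.rfftfreq(frame_size, 1.0 / sr)
    candidates = []                       # stage 1: frame-wise candidates
    for start in range(0, len(x) - frame_size, hop):
        spectrum = np.abs(np.fft.rfft(x[start:start + frame_size] * window))
        k = int(np.argmax(spectrum))      # placeholder for dominant peak selection
        candidates.append((freqs[k], spectrum[k]))
    # stage 2: placeholder tracking -- simply report the per-frame candidate
    return np.array([f for f, _ in candidates])

# toy usage: a 440 Hz tone should yield a trajectory near 440 Hz
sr = 16000
t = np.arange(sr) / sr
print(melody_pipeline_sketch(np.sin(2 * np.pi * 440 * t), sr)[:5])
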

2. CANDIDATE EXTRACTION

Extracting a compact set of F0 candidates from polyphonic signals is not an easy task, because concurrent sources interfere with each other and spectral components from different sources may form reasonable F0 hypotheses [18]. Although a proper multiple-F0 estimation would allow a proper treatment of overlapping partials, a simpler scheme meets our needs for melody estimation. Under the assumption that the melody stream is generated by the most dominant source, the interference from other sources has less impact on its spectral components. The remaining problem is then to avoid extracting subharmonic F0 candidates that are supported by combinations of spectral components from different sources; these are very competitive with the correct F0 and are likely to cause octave errors. Since the target source is assumed to be dominant, its harmonic components should be present as dominant spectral peaks. By selecting the dominant peaks, we can avoid excessive spurious candidates and efficiently establish a compact set of F0 hypotheses with reliable salience.

2.1. Peak Selection

We propose four peak selection methods. The first two are based on loudness weighting and on masking effects, respectively, and select perceptually dominant peaks; the other two are based on the cepstral envelope and on the noise envelope, respectively, and select energy-dominant peaks.

Select by Loudness

It is known that the relative energy of the spectral components one measures is very different from the relative loudness one perceives [19]. Since calculating the loudness of a complex sound is not straightforward, a common approach is to apply a spectral weighting by a selected equal-loudness contour to imitate the perceptual dominance of the spectral components. Accordingly, we weight the spectrum X with a frequency-dependent equal-loudness curve L to obtain the loudness spectrum X_L:

    X_L(k) = X(k) L(k),    (1)

where k is the frequency bin. We choose an equal-loudness curve following Fletcher and Munson [20], measured at a fixed sound pressure level (SPL), for L:

    20 \log_{10} L(k) = 3.64 f_k^{-0.8} - 6.5 e^{-0.6 (f_k - 3.3)^2} + 10^{-3} f_k^4,    (2)

where the frequency f_k in kHz is converted from the respective frequency bin k. We then select the peaks that lie within δ_L dB of the maximum of X_L (see Fig. 1(a)).

Select by Masking Curve

The masking effect describes how a tone masks its neighboring components across critical bands, which can be represented by the spreading function (on a dB scale) [21]

    S_f(i, j) = 15.81 + 7.5 ((i - j) + 0.474) - 17.5 (1 + ((i - j) + 0.474)^2)^{0.5},    (3)

where i is the Bark frequency of the masking signal and j is the Bark frequency of the masked signal. The formula for converting a frequency f_k in kHz to the Bark scale is [22]:

    B(f_k) = 13 \arctan(0.76 f_k) + 3.5 \arctan((f_k / 7.5)^2).    (4)

The strength of masking of a peak is not only determined by the magnitude of the peak, but is also related to whether the peak is tonal or noisy. We follow the MPEG standard to classify a peak [23]: if a peak is 7 dB higher than its neighboring components, it is considered tonal; otherwise, it is considered noisy. Accordingly, the mask contributed by a peak is (on a dB scale):

    M(i, j) = S_f(i, j) - (14.5 + i) α - 5.5 (1 - α)    (tonal: α = 1, noisy: α = 0).    (5)

By selecting the maximal mask overlaying each bin, the masking curve X_m is constructed:

    20 \log_{10} X_m(k) = \max_{i \in I} M(i, B(f_k)),    (6)

where I is the set of all peaks. The peaks that are larger than the masking curve are selected (see Fig. 1(b)).

Figure 1: Dominant peak selection by (a) loudness spectrum, (b) masking curve, (c) cepstral envelope, and (d) noise envelope. The original spectrum is plotted as a thin solid line and the selected peaks are marked by crosses. The y-axis is the log-amplitude in dB.
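As an illustration, the sketch below implements the two perceptual selection rules in Python under simplifying assumptions: peak picking is a plain local-maximum test, the tonality test only checks the immediate neighbours, the default δ_L is one of the threshold values tested later, and the masker level is added explicitly to the offset mask of Eq. (5). It is a reading aid, not the authors' implementation.

import numpy as np

def loudness_weight(freq_hz):
    """Equal-loudness-style weighting of Eq. (2), in dB (frequency in kHz)."""
    f = np.maximum(freq_hz, 20.0) / 1000.0           # avoid f = 0 at the DC bin
    return 3.64 * f**-0.8 - 6.5 * np.exp(-0.6 * (f - 3.3)**2) + 1e-3 * f**4

def hz_to_bark(freq_hz):
    """Bark conversion of Eq. (4) (frequency in kHz)."""
    f = freq_hz / 1000.0
    return 13.0 * np.arctan(0.76 * f) + 3.5 * np.arctan((f / 7.5)**2)

def spreading(di):
    """Spreading function of Eq. (3); di = i - j in Bark."""
    return 15.81 + 7.5 * (di + 0.474) - 17.5 * np.sqrt(1.0 + (di + 0.474)**2)

def find_peaks(mag_db):
    """Indices of simple local maxima of a dB magnitude spectrum."""
    return np.where((mag_db[1:-1] > mag_db[:-2]) & (mag_db[1:-1] > mag_db[2:]))[0] + 1

def select_by_loudness(mag, freqs, delta_l=24.0):
    """Keep peaks within delta_l dB of the loudness-spectrum maximum (Fig. 1a)."""
    xl_db = 20.0 * np.log10(mag + 1e-12) + loudness_weight(freqs)   # Eqs. (1)-(2) in dB
    peaks = find_peaks(xl_db)
    return peaks[xl_db[peaks] >= xl_db.max() - delta_l]

def select_by_masking(mag, freqs):
    """Keep peaks that rise above the global masking curve (Eqs. (5)-(6), Fig. 1b)."""
    mag_db = 20.0 * np.log10(mag + 1e-12)
    bark = hz_to_bark(freqs)
    peaks = find_peaks(mag_db)
    mask_curve = np.full_like(mag_db, -np.inf)
    for p in peaks:
        # crude tonality test: 7 dB above the immediate neighbours
        tonal = mag_db[p] > mag_db[p - 1] + 7 and mag_db[p] > mag_db[p + 1] + 7
        alpha = 1.0 if tonal else 0.0
        offset = (14.5 + bark[p]) * alpha + 5.5 * (1.0 - alpha)
        # masker level added here so the curve scales with the spectrum
        mask = mag_db[p] + spreading(bark[p] - bark) - offset
        mask_curve = np.maximum(mask_curve, mask)
    return peaks[mag_db[peaks] > mask_curve[peaks]]
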
Select by Cepstral Envelope

The cepstral envelope is an approximation of the expected log-amplitude of the spectrum [24]; that is, it is a frequency-dependent curve that passes through the mean log-amplitude at each frequency. Accordingly, it is reasonable to assume that the spectral peaks of the most dominant source lie above the cepstral envelope (see Fig. 1(c)). An optional raise of δ_C dB can be used to prevent the selection of noise peaks.

Select by Noise Envelope

For polyphonic signals, the cepstral envelope may not give a reasonable estimate because of the dense distribution of sinusoidal peaks. Moreover, it allows some noise peaks to be selected, because it passes through the mean of the noise peaks. A solution to these problems is the noise envelope, a raised version of the mean noise level [8]. The noise level estimation used here models the spectral magnitude distribution of the noise with a Rayleigh distribution and is adaptive in frequency [25]. We raise the mean noise level by δ_N dB to obtain the noise envelope and select the peaks above it as dominant peaks (see Fig. 1(d)).
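A plain way to realize the cepstral-envelope rule is low-order liftering of the log-magnitude spectrum, sketched below; the lifter order and the δ_C value are assumptions (the paper does not fix the order), and the Rayleigh-based noise envelope of [25] is not reproduced here.

import numpy as np

def cepstral_envelope(mag_db, n_coeffs=30):
    """Cepstral envelope of a half log-magnitude spectrum: low-pass liftering
    of the real cepstrum. n_coeffs is an assumed lifter order."""
    # mirror to a full, even-symmetric spectrum so the cepstrum is real-valued
    full = np.concatenate([mag_db, mag_db[-2:0:-1]])
    cep = np.real(np.fft.ifft(full))
    lifter = np.zeros_like(cep)
    lifter[:n_coeffs] = 1.0
    lifter[-(n_coeffs - 1):] = 1.0        # keep the symmetric counterpart
    env_full = np.real(np.fft.fft(cep * lifter))
    return env_full[:len(mag_db)]

def select_by_cepstral_envelope(mag, delta_c=6.0):
    """Keep spectral peaks at least delta_c dB above the cepstral envelope
    (Fig. 1c); delta_c is one of the threshold values tested in Section 4."""
    mag_db = 20.0 * np.log10(mag + 1e-12)
    env = cepstral_envelope(mag_db)
    peaks = np.where((mag_db[1:-1] > mag_db[:-2]) &
                     (mag_db[1:-1] > mag_db[2:]))[0] + 1
    return peaks[mag_db[peaks] > env[peaks] + delta_c]
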

2.2. Candidate Generation and Scoring

Harris suggested locating all groups of pitch harmonics by identifying equally spaced spectral peaks, on which the salience of a group is built [26]. This method belongs to the spectral-interval type of F0 estimators [27]. For polyphonic signals, however, partials belonging to different sources may form a group of harmonics, which results in subharmonic F0s. One way to avoid generating subharmonic F0 candidates is to cast further constraints on the spectral location of each partial. Similar to the inter-peak beating method proposed in [8], we present a method for generating F0 candidates from the selected dominant peaks. First, the F0 hypotheses are generated by collecting the spectral intervals between all pairs of dominant peaks in the spectrum. Then, the spectral location principle is applied: if a generated hypothesis is not harmonically related to the peaks that support its spectral interval, it is not considered a reasonable candidate. Because of overlapping partials, the frequencies of the peaks are not sufficiently precise, so a tolerance of one semitone is allowed for the harmonic matching.

In order to reflect the perceptual dominance of a candidate, we propose to score the F0 candidates based on the loudness spectrum X_L (Eq. (1)): the score of a candidate is the sum of the contributions of its first H partials in the loudness spectrum. The contribution of a partial is determined by the harmonically matched peak with the largest loudness nearby. Partials not selected as dominant peaks do not contribute to the score.
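The candidate generation and scoring rules can be sketched as follows; the F0 search range, the one-semitone matching neighbourhood and the number of partials used for scoring are illustrative assumptions, since the value of H is not reproduced here.

import numpy as np

def generate_f0_candidates(peak_freqs, fmin=60.0, fmax=1500.0, tol=2**(1 / 12.0)):
    """F0 hypotheses from spectral intervals between pairs of dominant peaks,
    kept only if both supporting peaks are (near-)harmonics of the hypothesis
    (spectral location principle, one-semitone tolerance)."""
    candidates = []
    for i in range(len(peak_freqs)):
        for j in range(i + 1, len(peak_freqs)):
            f0 = abs(peak_freqs[j] - peak_freqs[i])
            if not (fmin <= f0 <= fmax):
                continue
            supported = True
            for f in (peak_freqs[i], peak_freqs[j]):
                h = max(1, int(round(f / f0)))
                ratio = f / (h * f0)
                if not (1.0 / tol <= ratio <= tol):   # within one semitone
                    supported = False
            if supported:
                candidates.append(f0)
    return np.array(candidates)

def score_candidate(f0, peak_freqs, peak_loudness, n_partials=10, tol=2**(1 / 12.0)):
    """Candidate score: sum over the first n_partials of the loudness of the
    matched dominant peak with the largest loudness nearby (n_partials assumed)."""
    score = 0.0
    for h in range(1, n_partials + 1):
        lo, hi = h * f0 / tol, h * f0 * tol
        in_range = (peak_freqs >= lo) & (peak_freqs <= hi)
        if np.any(in_range):
            score += peak_loudness[in_range].max()
    return score

Because the pairs are formed only from the dominant peaks retained in Section 2.1, the quadratic pairwise search stays cheap in practice.
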
3. TRACKING BY DYNAMIC PROGRAMMING

Given the sequence of candidates extracted from the spectrogram, we adapt the tracking algorithm proposed in [17] to decode the melody stream. Since the melody stream may not always be the most dominant source at every short-time instant, decoding with the maximal score will not yield the optimal result. Therefore, we propose to integrate an additional criterion, spectral envelope similarity, into the dynamic programming scheme. Following [17], we describe the problem using a hidden Markov model (HMM):

  Hidden state: the true melody F0.
  Observation: the loudness spectrogram.
  Emission probability: the normalized candidate score.
  Transition probability:
    - trajectory smoothness: the frequency difference between two connected F0 candidates;
    - spectral envelope similarity: the spectral envelope difference between two connected candidates.

Compared with the previous method, two novelties are introduced in the transition probability. The first is the probability distribution of the melody F0 difference between frames, used to evaluate the trajectory smoothness. Learned from the ADC04 training database, this distribution is approximated by a Laplace distribution (see Fig. 2). The trajectory smoothness is then modeled by

    F(c_n, c_m) = \frac{1}{2b} \exp\left( -\frac{|f_{c_n} - f_{c_m}|}{b f_{c_m}} \right),    b = .77889,    (7)

where c_n and c_m are two candidates with frequencies f_{c_n} and f_{c_m}. Note that c_n and c_m may be located in different analysis frames; the distance allowed for a connection is three frames.

Figure 2: (a) The probability distribution of the frequency deviation in the ADC04 database. (b) The probability density function modeled by the Laplace distribution. The x-axis is the frequency deviation in percentage.

The second novelty is the integration of the spectral envelope similarity into the transition probability. It is intended to favor candidate connections with similar timbre, such that the decoded stream stays locked to the same source even when that source becomes less dominant (smaller score):

    A(c_n, c_m) = 1 - \frac{\sum_{h=1}^{H} \left| X_L(t_n, h f_{c_n}) - X_L(t_m, h f_{c_m}) \right|^2}{\sum_{h=1}^{H} \left| X_L(t_m, h f_{c_m}) \right|^2},    (8)

where t_n and t_m denote the frames from which c_n and c_m are extracted. The transition probability is thus given by

    T(c_n, c_m) = F(c_n, c_m) A(c_n, c_m)^{γ},    (9)

where γ is a compression parameter that reflects the importance of the envelope similarity measure. In order to obtain the optimal trade-off between the emission probability (score) and the transition probability, we further apply a compression factor β to the emission probability. The connection weight between two nodes is defined as the product of the emission probability and the transition probability, from which the forward-propagated weights are accumulated. The optimal path (the melody stream) is then decoded by backward tracking through the nodes of locally maximal weight.
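A minimal Python sketch of the decoding stage follows, combining the candidate scores with the smoothness and envelope terms of Eqs. (7)-(9) in a Viterbi-style forward pass. It simplifies the scheme described above: only adjacent-frame connections are used (rather than up to three frames), the candidate representation (a dict with 'f0', 'score', 'env'), the default β and the Laplace scale b are assumptions, and the envelope similarity is a placeholder.

import numpy as np

def track_melody(frames, gamma=2.4, beta=1.0, b=0.78):
    """Viterbi-style decoding of the melody stream.
    frames: list (one entry per analysis frame) of non-empty candidate lists;
    each candidate is a dict with 'f0' (Hz), 'score' (normalized emission)
    and 'env' (vector of harmonic loudness values)."""
    def smooth(f_prev, f_cur):                    # Eq. (7), Laplace kernel
        return np.exp(-abs(f_cur - f_prev) / (b * f_cur)) / (2.0 * b)

    def env_sim(e_prev, e_cur):                   # Eq. (8), envelope similarity
        denom = np.sum(e_cur ** 2) + 1e-12
        return max(1e-6, 1.0 - np.sum((e_prev - e_cur) ** 2) / denom)

    # forward pass: accumulate log connection weights
    logw = [np.log([c['score'] ** beta + 1e-12 for c in frames[0]])]
    back = []
    for t in range(1, len(frames)):
        prev, cur = frames[t - 1], frames[t]
        w = np.full(len(cur), -np.inf)
        ptr = np.zeros(len(cur), dtype=int)
        for j, cj in enumerate(cur):
            for i, ci in enumerate(prev):
                trans = smooth(ci['f0'], cj['f0']) * env_sim(ci['env'], cj['env']) ** gamma
                cand = logw[-1][i] + np.log(trans + 1e-12) + beta * np.log(cj['score'] + 1e-12)
                if cand > w[j]:
                    w[j], ptr[j] = cand, i
        logw.append(w)
        back.append(ptr)

    # backward pass: follow the best end node back through the pointers
    path = [int(np.argmax(logw[-1]))]
    for ptr in reversed(back):
        path.append(int(ptr[path[-1]]))
    path.reverse()
    return [frames[t][k]['f0'] for t, k in enumerate(path)]

In the full system the per-frame candidate lists would come from the selection and scoring stages of Section 2, and connections over up to three frames would be allowed as described above.
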

4. EVALUATION

In this section, we present the evaluation of the effectiveness of the perceptual criteria. First, the different peak selection methods are evaluated. Then, the system with and without perceptual criteria is evaluated. Finally, the performance is compared with that of the MIREX participants. The databases used are listed below:

- ADC04: 20 excerpts of about 20 s including MIDI, Jazz, Pop and Opera music as well as audio pieces with a synthesized voice. It is used as our training database [28].
- MIREX05: 25 excerpts of 10-40 s from the following genres: Rock, R&B, Pop, Jazz, Solo classical piano [29]. Only 13 excerpts are made publicly available.
- RWC: 100 excerpts, 80 from Japanese hit charts of the 1990s and 20 from American hit charts of the 1980s [30]. This large database is rarely used in existing publications on melody estimation.

Peak selection

To evaluate the performance of the different peak selection methods, we use two metrics: recall rate and mean rank. Recall rate is the percentage of frames in which the correct melody F0 is extracted into the candidate set; a good peak selection method shall not exclude too many peaks that support the correct F0. Mean rank is the average score ranking of the correct melody F0 within the candidate set; as long as the dominant partials of the correct F0 are selected, the resulting score shall be high and the correct F0 shall be ranked near the top. For the methods involving thresholds, several values are tested in search of the best configuration. The results are shown in Fig. 3. A good configuration results in a point located closer to the top-right corner of the figure. The reasonable configurations lie in the region where the recall rate varies from 0.85 to 0.9 and the mean rank from 2 to 1. In general, the perceptual criteria appear to be more effective than the spectral envelopes in favoring the correct F0s.

Figure 3: Evaluation results of the different peak selection methods (mean rank versus recall rate). The parameters tested are δ_L: (48, 36, 24, 2), δ_C: (8, 2, 6) and δ_N: (2, 9, 6, 3). The masking curve method does not involve any parameter and is shown as a single point.

System configurations

To understand the contribution of each component of the system, we evaluate the system under different configurations. Since the current system does not detect whether the melody is present (voiced) or not (unvoiced), we choose the following evaluation metric [4]:

    Raw Pitch Accuracy = (number of correct estimates) / (number of ground-truth voiced frames),    (10)

defined as the proportion of the voiced frames in which the estimated F0 is within one semitone of the ground truth. The baseline configuration does not take any perceptual property into account: the peak selection simply picks the 20 largest peaks and the tracking does not use the envelope similarity measure (γ = 0). The perceptual configuration uses the loudness spectrum for peak selection, the envelope similarity compression factor γ = 2.4 and an emission probability compression factor β; these parameters are trained on the ADC04 dataset. For each configuration, we further evaluate how much the tracking mechanism improves the average raw pitch accuracy; the results without tracking simply report the best candidate at each frame. The comparison is shown in Table 1. The perceptual configuration performs better than the baseline configuration by about 3 to 4%, and the tracking mechanism brings a further slight improvement of about 1 to 2%. Further investigation is ongoing to improve the tracking algorithm.
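For reference, the raw pitch accuracy of Eq. (10) can be computed along the following lines (a small sketch; the frame alignment and the convention of marking unvoiced reference frames with 0 Hz are assumptions):

import numpy as np

def raw_pitch_accuracy(est_f0, ref_f0):
    """Proportion of ground-truth voiced frames whose estimate lies within
    one semitone of the reference (Eq. (10)). Frames are assumed aligned;
    unvoiced reference frames are marked with 0 Hz (an assumption)."""
    est_f0, ref_f0 = np.asarray(est_f0, float), np.asarray(ref_f0, float)
    voiced = ref_f0 > 0
    if not np.any(voiced):
        return 0.0
    # distance in semitones between estimate and reference on voiced frames
    semitones = np.abs(12.0 * np.log2(np.maximum(est_f0[voiced], 1e-6) / ref_f0[voiced]))
    return float(np.mean(semitones <= 1.0))

# example: 3 of 4 voiced frames are within one semitone -> 0.75
print(raw_pitch_accuracy([220.0, 225.0, 300.0, 0.0, 440.0],
                         [220.0, 220.0, 220.0, 0.0, 450.0]))
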
                          best candidate    candidates + tracking
  Baseline config.             73.6%                74.3%
  Perceptual config.           76.92%               78.%

Table 1: Average raw pitch accuracy for the baseline configuration (without perceptual properties) and the perceptual configuration. For each configuration, the frame-based estimation (reporting the best candidate) is compared against the tracking system.

Comparison with the state-of-the-art systems

Thanks to the MIREX campaign, the performance of the state-of-the-art systems is publicly evaluated (see Fig. 4). Although the MIREX database is only partially available for our evaluation, the results (see Table 2) still demonstrate the competitive performance of the proposed system among the top-ranked systems.

Figure 4: Raw pitch accuracy comparison with MIREX participants: (a) results on the ADC04 database; (b) results on the MIREX05 database. The indices correspond to MIREX participant IDs: the first five are from MIREX 2010 (HJ, TOOS, JJY2, JJY1, SG) and the remaining twelve from MIREX 2009 (CL1, CL2, DR1, DR2, HJC1, HJC2, JJY, KD, MW, PC, RR, TOOS). Please refer to the MIREX website for the respective systems [16]. The horizontal line shows the result of the proposed system.

   ADC04     MIREX05     RWC
   8.53%     79.%        74.49%

Table 2: Average raw pitch accuracy of the proposed system evaluated on the three databases.

5. CONCLUSION

The effectiveness of perceptual properties in the context of melody estimation has been studied. For the proposed melody estimation system, the accuracy is improved by more than 3% when the perceptual properties are taken into account. The use of either the loudness spectrum or the masking curve shows advantages over the proposed spectral envelope features, and the envelope similarity is found to slightly improve the accuracy as well. The proposed system has been evaluated on more than one hundred excerpts of music recordings and demonstrates competitive performance with respect to the state-of-the-art systems. Future work will address the improvement of the tracking algorithm and the development of a voicing detection algorithm.

6. REFERENCES

[1] A. Klapuri and M. Davy, Eds., Signal Processing Methods for Music Transcription, Springer, New York, 2006.

[2] M. Goto, "A real-time music-scene-description system: predominant-F0 estimation for detecting melody and bass lines in real-world audio signals," Speech Communication (ISCA Journal), vol. 43, no. 4, 2004.

[3] R. P. Paiva, T. Mendes, and A. Cardoso, "Melody detection in polyphonic musical signals: exploiting perceptual rules, note salience, and melodic smoothness," Computer Music Journal, vol. 30, no. 4, pp. 80-98, 2006.

[4] G. E. Poliner, D. P. W. Ellis, A. F. Ehmann, E. Gómez, S. Streich, and B. Ong, "Melody transcription from music audio: approaches and evaluation," IEEE Trans. on Audio, Speech, and Language Processing, vol. 15, no. 4, pp. 1247-1256, 2007.

[5] A. S. Bregman, Auditory Scene Analysis, The MIT Press, Cambridge, Massachusetts, 1990.

[6] M. Marolt, "Audio melody extraction based on timbral similarity of melodic fragments," in Proc. of EUROCON 2005, 2005.

[7] J. Salamon and E. Gómez, "Melody extraction from polyphonic music audio," Music Information Retrieval Evaluation exchange (MIREX), 2010.

[8] M. Marolt, "On finding melodic lines in audio recordings," in Proc. of the Intl. Conf. on Digital Audio Effects (DAFx-04), 2004.

[9] K. Dressler, "Audio melody extraction for MIREX 2009," in 5th Music Information Retrieval Evaluation exchange (MIREX'09), 2009.

[10] J.-L. Durrieu, G. Richard, B. David, and C. Févotte, "Source/filter model for unsupervised main melody extraction from polyphonic audio signals," IEEE Trans. on Audio, Speech, and Language Processing, vol. 18, no. 3, pp. 564-575, 2010.

[11] M. Ryynänen and A. Klapuri, "Transcription of the singing melody in polyphonic music," in Proc. of the 7th Intl. Conf. on Music Information Retrieval (ISMIR'06), 2006.

[12] Y. Li and D. L. Wang, "Separation of singing voice from music accompaniment for monaural recordings," IEEE Trans. on Audio, Speech, and Language Processing, vol. 15, no. 4, pp. 1475-1487, 2007.

[13] K. Dressler, "Extraction of the melody pitch contour from polyphonic audio," in 1st Music Information Retrieval Evaluation exchange (MIREX'05), 2005.

[14] K. Dressler, "An auditory streaming approach on melody extraction," in 2nd Music Information Retrieval Evaluation exchange (MIREX'06), 2006.

[15] K. Dressler, "Audio melody extraction - late breaking at ISMIR 2010," in 11th Intl. Conf. on Music Information Retrieval (ISMIR'10), 2010.

[16] Music Information Retrieval Evaluation exchange (MIREX) homepage, http://www.music-ir.org/mirex/wiki/.

[17] W.-C. Chang, W.-Y. Su, C. Yeh, A. Roebel, and X. Rodet, "Multiple-F0 tracking based on a high-order HMM model," in Proc. of the 11th Intl. Conf. on Digital Audio Effects (DAFx-08), Espoo, Finland, 2008.
[18] C. Yeh, Multiple Fundamental Frequency Estimation of Polyphonic Recordings, Ph.D. thesis, Université Paris 6, 2008.

[19] B. Bauer and E. Torick, "Researches in loudness measurement," IEEE Trans. on Audio and Electroacoustics, vol. 14, no. 3, pp. 141-151, 1966.

[20] H. Fletcher and W. A. Munson, "Loudness, its definition, measurement and calculation," Journal of the Acoustical Society of America, vol. 5, pp. 82-108, 1933.

[21] J. D. Johnston, "Transform coding of audio signals using perceptual noise criteria," IEEE Journal on Selected Areas in Communications, vol. 6, pp. 314-323, 1988.

[22] E. Zwicker, "Subdivision of the audible frequency range into critical bands," Journal of the Acoustical Society of America, vol. 33, no. 2, p. 248, 1961.

[23] ISO/IEC 13818-3, "Information technology - generic coding of moving pictures and associated audio information - part 3: Audio," Tech. Rep., ISO/IEC JTC1/SC29/WG11, 1998.

[24] D. Schwarz and X. Rodet, "Spectral envelopes and additive + residual analysis/synthesis," in Analysis, Synthesis, and Perception of Musical Sounds, pp. 175-227, Springer Science+Business Media, LLC, NY, USA, 2007.

[25] C. Yeh and A. Roebel, "Multiple-F0 estimation for MIREX 2010," Music Information Retrieval Evaluation exchange (MIREX), 2010.

[26] C. M. Harris, "Pitch extraction by computer processing of high-resolution Fourier analysis data," Journal of the Acoustical Society of America, vol. 35, pp. 339-343, March 1963.

[27] A. Klapuri, Signal Processing Methods for the Automatic Transcription of Music, Ph.D. thesis, Tampere University of Technology, 2004.

[28] P. Cano, E. Gómez, F. Gouyon, P. Herrera, M. Koppenberger, B. Ong, X. Serra, S. Streich, and N. Wack, "ISMIR 2004 audio description contest," Tech. Rep., UPF MTG, 2004.

[29] G. Poliner and D. Ellis, "A classification approach to melody transcription," in Proc. of the 6th Intl. Conf. on Music Information Retrieval (ISMIR'05), 2005.

[30] M. Goto, "AIST annotation for the RWC Music Database," in Proc. of the 7th Intl. Conf. on Music Information Retrieval (ISMIR'06), 2006, pp. 359-360.