Expanded Repeating Pattern Extraction Technique (REPET) With LPC Method for Music/Voice Separation

Raju Aengala, M.Tech Scholar, Department of ECE, Vardhaman College of Engineering, India.
Nagajyothi D, Associate Professor, Department of ECE, Vardhaman College of Engineering, India.
Dr. Siddaiah P, Professor and Dean, ANU College of Engineering and Technology, India.

Abstract

Repetition is a core principle in music. Many musical pieces are characterized by an underlying repeating structure over which varying elements are superimposed. This is especially true for pop songs, where a singer often overlays varying vocals on a repeating accompaniment. On this basis, we present the Expanded REpeating Pattern Extraction Technique (REPET) with LPC method, a novel and simple approach for separating the repeating "background" from the non-repeating "foreground" in a mixture. The basic idea is to identify the periodically repeating segments in the audio, compare them with a repeating segment model derived from them, and extract the repeating patterns via time-frequency masking. Experiments on data sets of 1,000 song clips and 14 full-track real-world songs showed that this method can be successfully applied to music/voice separation, competing with two recent state-of-the-art approaches. Further experiments showed that REPET can also be used as a preprocessor to pitch detection algorithms to improve melody extraction. Some residual noise remains in the background separated by the basic REPET method; with the proposed method (Expanded REPET with LPC coding), this remaining noise/distortion in the background separation is removed.

Keywords: Music structure analysis, music/voice separation, repeating patterns, melody extraction.

I. Introduction

In Music Information Retrieval (MIR), repetition/similarity has mainly been used for audio segmentation and summarization, and sometimes for rhythm estimation (see Section 1.1). In this work, we show that the analysis of the repeating structure in music can also be used for source separation. The ability to efficiently separate a song into its music and voice components would be of great interest for a wide range of applications, among them instrument/vocalist identification, pitch/melody extraction, audio post-processing, and karaoke gaming. Existing methods for music/voice separation do not explicitly use the analysis of the repeating structure as a basis for separation (see Section 1.2). We adopt a fundamentally different strategy for separating the lead melody from the background accompaniment: find the repeating patterns in the audio and extract them from the non-repeating elements.

1.1 Music Structure Analysis

In music theory, Schenker asserted that repetition is what gives rise to the concept of the motive, which is defined as the smallest structural element within a musical piece [1]. Ruwet used repetition as a criterion for dividing music into small parts, revealing the syntax of the musical piece [2]. Ockelford argued that repetition/imitation is what brings order to music, and order is what makes music aesthetically pleasing [3]. Bartsch detected choruses in popular music by analyzing the structural redundancy in a similarity matrix built from the chromagram [10].

Other audio thumbnailing methods include that of Cooper and Foote, who built a similarity matrix using MFCCs [5]. Foote and Uchihashi developed the beat spectrum, a measure of acoustic self-similarity as a function of time lag, using a similarity matrix built from the spectrogram [9]. Other rhythm estimation methods include that of Pikrakis et al., who built a similarity matrix using MFCCs [6]. For comprehensive reviews of music structure analysis, see [7], [11], and [12].

1.2 Music/Voice Separation

Music/voice separation methods typically first identify the vocal/non-vocal segments, and then use a variety of techniques to separate the lead vocals from the background accompaniment, including spectrogram factorization, accompaniment model learning, and pitch-based inference methods. Vembu and Baumann first identified the vocal and non-vocal regions by computing features such as MFCCs, Perceptual Linear Predictive coefficients (PLP), and Log Frequency Power Coefficients (LFPC), and using classifiers such as Neural Networks (NN) and Support Vector Machines (SVM). They then used Non-negative Matrix Factorization (NMF) to separate the spectrogram into vocal and non-vocal basic components [13]. However, for an effective separation, NMF requires a proper initialization and the right number of components. Raj et al. used the known non-vocal segments to train an accompaniment model based on Probabilistic Latent Component Analysis (PLCA). They then fixed the accompaniment model to learn the vocal parts [14]. Ozerov et al. first performed a vocal/non-vocal segmentation using MFCCs and Gaussian Mixture Models (GMM). They then trained Bayesian models to adapt an accompaniment model learned from the non-vocal segments [15]. However, for an effective separation, such accompaniment model learning methods require a sufficient amount of non-vocal material and an accurate prior vocal/non-vocal segmentation. Hsu et al. first used a Hidden Markov Model (HMM) to identify accompaniment, voiced, and unvoiced segments. They then used the pitch-based inference method of Li and Wang to separate the voiced vocals [16], where the pitch contour was derived from the predominant pitch estimation algorithm of Dressler [17]. In addition, they proposed a method for separating the unvoiced vocals based on GMMs, and a method for enhancing the voiced vocals based on spectral subtraction [18]. This is a state-of-the-art system that we compare against in our evaluation.

II. Proposed Method

2.1 Expanded REPET with LPC Method

2.1.1 REPET

We present the REpeating Pattern Extraction Technique (REPET), a simple and novel approach for separating a repeating background from a non-repeating foreground. The basic idea is to identify the periodically repeating segments, compare them with a repeating segment model, and extract the repeating patterns via time-frequency masking. The rationale for this approach is that many musical pieces can be understood as a repeating background over which a lead is superimposed that does not exhibit any immediate repeating structure. For excerpts with a relatively stable repeating background, we show that REPET can be effectively applied to music/voice separation.
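As a concrete illustration, here is a minimal Python sketch of these three steps for a mono signal, using SciPy's STFT. It is not the authors' implementation: the period-search range, the peak picking, and the soft mask are simplifying assumptions.

import numpy as np
from scipy.signal import stft, istft

def beat_spectrum(V):
    """Average, over frequency, of the autocorrelation of each row of the
    power spectrogram over time: peaks indicate candidate repeating periods."""
    P = V ** 2
    n = P.shape[1]
    F = np.fft.rfft(P, n=2 * n, axis=1)          # zero-pad to get linear autocorrelation
    acf = np.fft.irfft(F * np.conj(F), axis=1)[:, :n]
    acf /= np.arange(n, 0, -1)                   # unbiased normalization over lags
    b = acf.mean(axis=0)
    return b / (b[0] + 1e-12)

def repet(x, fs, nperseg=2048):
    f, t, X = stft(x, fs=fs, nperseg=nperseg)
    V = np.abs(X)
    # Step 1: identify the repeating period from the beat spectrum
    # (the lag search range here is an arbitrary choice).
    b = beat_spectrum(V)
    lo, hi = 8, max(V.shape[1] // 3, 9)
    p = lo + int(np.argmax(b[lo:hi]))
    # Step 2: repeating segment model = element-wise median of the
    # periodically repeating segments of the spectrogram.
    r = V.shape[1] // p
    S = np.median(V[:, :r * p].reshape(V.shape[0], r, p), axis=1)
    # Step 3: time-frequency masking; the repeating spectrogram is the
    # model tiled over time, capped by the mixture spectrogram.
    tiles = int(np.ceil(V.shape[1] / p))
    W = np.minimum(np.tile(S, (1, tiles))[:, :V.shape[1]], V)
    M = W / (V + 1e-12)                          # soft mask for the repeating background
    _, background = istft(M * X, fs=fs, nperseg=nperseg)
    _, foreground = istft((1 - M) * X, fs=fs, nperseg=nperseg)
    return background[:len(x)], foreground[:len(x)]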
For full-track songs, the repeating background is likely to vary over time (e.g., a verse followed by a chorus). We therefore propose a simple procedure for extending the method to longer musical pieces, by applying REPET over local windows of the signal in time, as sketched below.
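A minimal sketch of this windowed extension, reusing the repet function above; the 20 s window and 10 s hop are illustrative choices, not the paper's settings.

def windowed_repet(x, fs, win_s=20.0, hop_s=10.0):
    """Apply REPET on overlapping local windows and average the overlapping
    background estimates, so that a slowly varying repeating background
    (verse, then chorus, ...) is modeled locally rather than globally."""
    win, hop = int(win_s * fs), int(hop_s * fs)
    background = np.zeros(len(x))
    counts = np.zeros(len(x))
    for start in range(0, len(x), hop):
        stop = min(start + win, len(x))
        if stop - start < int(2 * fs):           # skip sub-2-second remainders
            break
        b, _ = repet(x[start:stop], fs)
        background[start:start + len(b)] += b
        counts[start:start + len(b)] += 1
    background /= np.maximum(counts, 1)
    return background, x - background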

Unlike other separation approaches, REPET does not depend on special features (e.g., MFCC or chroma features), does not rely on complex frameworks (e.g., pitch-based inference techniques or source/filter modeling), and does not require preprocessing (e.g., vocal/non-vocal segmentation or prior training). Since it is based solely on self-similarity, REPET has the advantage of being simple, fast, and blind; it is therefore completely and easily automatable.

A parallel can be drawn between REPET and background subtraction. Background subtraction is the process of separating a background scene from foreground objects in a sequence of video frames. The basic idea is the same, but the methods are different. In background subtraction, neither period estimation nor temporal segmentation is needed, since the video frames already form a periodic pattern. Furthermore, variations of the background must be handled differently, since they involve characteristics typical of images.

REPET also bears some resemblance to the drum sound recognizer of Yoshii et al. Their method iteratively updates time-frequency templates, corresponding to drum patterns in the spectrogram, by taking the element-wise median of the patterns that resemble a template, until convergence. By comparison, REPET directly derives a whole repeating segment model by taking the element-wise median of all the periodically repeating segments in the spectrogram.

Although REPET is described here as a method for separating the repeating background from the non-repeating foreground in a musical mixture, it could be generalized to any kind of repeating pattern. In particular, it could be used in Active Noise Control (ANC) for removing periodic interference. Applications include removing periodic interference in electrocardiography (e.g., power-line interference) or in speech signals (e.g., a pilot communicating by radio from a plane). While REPET can be applied to periodic interference removal, ANC algorithms cannot be applied to music/voice separation, owing to the simplicity of the models they use.

The idea behind REPET, that redundancy can be exploited for source separation, has also been supported by recent findings in psychoacoustics. McDermott et al. established that the human auditory system can segregate individual sources by identifying them as repeating patterns embedded in the acoustic input, without requiring prior knowledge of the source properties. Through a series of hearing experiments, they showed that listeners can identify a never-before-heard target sound if it repeats within different mixtures.

2.1.2 LPC Method

Linear Predictive Coding (LPC) is a digital method for encoding an analog signal in which a particular value is predicted by a linear function of the past values of the signal. It was first proposed as a method for encoding human speech by the United States Department of Defense in Federal Standard 1015, published in 1984. Human speech is produced in the vocal tract, which can be approximated as a tube of varying diameter; the LPC model is based on a mathematical approximation of the vocal tract represented by such a tube. At a particular time t, the speech sample s(t) is represented as a linear sum of the p previous samples: s(t) ≈ a1·s(t−1) + a2·s(t−2) + ... + ap·s(t−p). The most important component of LPC is the linear predictive filter, which determines the value of the next sample from a linear combination of the previous samples. Under normal conditions, speech is sampled at 8,000 samples/second with 8 bits per sample, giving a rate of 64,000 bits/second; linear predictive coding reduces this to 2,400 bits/second. At this reduced rate the speech has a distinctly synthetic sound and there is a noticeable loss of quality. Since information is lost, linear predictive coding is a lossy form of compression.
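The following sketch illustrates the autocorrelation formulation of this predictor; the order p = 10 and the test frame are illustrative assumptions, and a real 2,400 bit/s codec additionally quantizes the coefficients and models the excitation.

import numpy as np
from scipy.linalg import solve_toeplitz

def lpc_coefficients(frame, p=10):
    """Solve the LPC normal (Yule-Walker) equations R a = r for the p
    predictor coefficients, using the autocorrelation method."""
    r = np.correlate(frame, frame, mode='full')[len(frame) - 1:len(frame) + p]
    return solve_toeplitz((r[:p], r[:p]), r[1:p + 1])

def lpc_predict(s, a):
    """Predict s(t) as a linear combination of the p previous samples:
    s_hat(t) = a1*s(t-1) + ... + ap*s(t-p)."""
    p = len(a)
    s_hat = np.zeros_like(s, dtype=float)
    for t in range(p, len(s)):
        s_hat[t] = np.dot(a, s[t - p:t][::-1])
    return s_hat

# Usage sketch: fit on a 20 ms frame of 8 kHz audio and compute the
# prediction residual that a coder would actually transmit.
fs = 8000
t = np.arange(int(0.02 * fs)) / fs
frame = np.sin(2 * np.pi * 200 * t) + 0.001 * np.random.randn(len(t))  # stand-in for speech
a = lpc_coefficients(frame, p=10)
residual = frame - lpc_predict(frame, a)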
2.2 Melody Extraction

In this section, we evaluate REPET as a preprocessor for two pitch detection algorithms to improve melody extraction. We first introduce the two pitch detection algorithms (Section 2.2.1), then present the performance measures (Section 2.2.2), and finally report the extraction results (Section 2.2.3).

2.2.1 Pitch Detection Algorithms

We have shown that REPET can be effectively applied to music/voice separation. We now show that REPET can also improve melody extraction, by using it first to separate the repeating background, and then applying a pitch detection algorithm to the voice estimate to extract the pitch contour. We use two different pitch detection algorithms: the well-known single fundamental frequency (F0) estimator YIN proposed by de Cheveigné and Kawahara [19], and the more recent multiple-F0 estimator proposed by Klapuri [20].
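As a sketch of this pipeline, reusing the repet function above: the paper uses the original YIN and Klapuri implementations, whereas librosa's YIN is substituted here for illustration, and the frequency range is an assumption.

import librosa  # substituted pitch detector, not the implementation used in the paper

def melody_from_mixture(x, fs, fmin=80.0, fmax=1000.0):
    """Separate the repeating background with repet(), then run a pitch
    detector on the voice estimate to obtain the pitch contour."""
    _, voice = repet(x, fs)
    f0 = librosa.yin(voice, fmin=fmin, fmax=fmax, sr=fs)
    return f0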

2.2.2 Performance Measures

To measure performance in pitch estimation, we used precision, recall, and F-measure. We define a true positive (tp) as a correctly estimated pitch value, compared with the ground-truth pitch contour; a false positive (fp) as an incorrectly estimated pitch value; and a false negative (fn) as an incorrectly estimated non-pitch value. A pitch estimate was treated as correct if its absolute deviation from the ground truth was less than 1 semitone.
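One plausible reading of these definitions in code (frames with value 0 are treated as unvoiced; the 1-semitone tolerance matches the text):

import numpy as np

def pitch_scores(est_hz, ref_hz, tol_semitones=1.0):
    """Frame-level precision, recall, and F-measure for a pitch contour,
    counting an estimate as correct when it lies within tol_semitones of
    the ground truth."""
    est = np.asarray(est_hz, dtype=float)
    ref = np.asarray(ref_hz, dtype=float)
    est_on, ref_on = est > 0, ref > 0
    both = est_on & ref_on
    dev = np.full(est.shape, np.inf)
    dev[both] = 12.0 * np.abs(np.log2(est[both] / ref[both]))  # semitone distance
    tp = int(np.sum(dev <= tol_semitones))
    precision = tp / max(int(np.sum(est_on)), 1)   # tp / (tp + fp)
    recall = tp / max(int(np.sum(ref_on)), 1)      # tp / (tp + fn)
    f_measure = 2 * precision * recall / max(precision + recall, 1e-12)
    return precision, recall, f_measure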

Fig. 2: Melody extraction performance (F-measure) at voice-to-music ratios of −5 dB (left column), 0 dB (center column), and 5 dB (right column), using YIN (top plots) and Klapuri's system (bottom plots), on the mixtures, on the voice estimates of REPET plus high-pass filtering (R+H), on the estimates further enhanced with the best repeating period and the indices of the vocal frames (R+H+P+V), and on the voice sources.

2.2.3 Extraction Results

We extracted the pitch contours from the voice estimates obtained from REPET, including the potential enhancements, using YIN and Klapuri's system. We also extracted the pitch contours from the mixtures and from the voice sources to serve, respectively, as a lower bound and an upper bound on pitch estimation performance. Performance was measured using precision, recall, and F-measure, in comparison with the ground-truth pitch contours.

III. Figures

Fig. 3: Graphical outputs (GUI window).

IV. Conclusion

In this work, we have presented the Expanded Repeating Pattern Extraction Technique (REPET) with LPC coding, a novel and simple approach for separating the repeating background from the non-repeating foreground in a mixture. The basic idea is to identify the periodically repeating segments in the audio, compare them with a repeating segment model derived from them, and extract the repeating patterns via time-frequency masking. Experiments on a data set of 1,000 song clips showed that REPET can be efficiently applied to music/voice separation, while still showing room for improvement. Further experiments on a data set of 14 full-track real-world songs showed that REPET is robust to real recordings and can be effectively extended to full-track songs. Additional experiments showed that REPET can also be used as a preprocessor to pitch detection algorithms to improve melody extraction. Some noise remains in the background separated by the basic REPET method; with the proposed method (Expanded REPET with LPC coding), this remaining noise/distortion in the background separation is removed. LPC is used as a form of voice compression by telephone companies, for example in the GSM standard. It is also used for secure wireless communication, where voice must be digitized, encrypted, and sent over a narrow voice channel; an early example of this is the US government's Navajo I. LPC synthesis can be used to build vocoders in which musical instruments are used as the excitation signal for the time-varying filter estimated from a singer's speech, a technique somewhat well known in electronic music.

Acknowledgements

I wish to express my deep sense of gratitude to Mrs. D. Nagajyothi, Associate Professor and project supervisor, Vardhaman College of Engineering, for her able guidance and useful suggestions, which helped me complete the project work on time. I am particularly thankful to Prof. Y. Pandurangaiah, Head, Department of Electronics and Communication Engineering, for his guidance, support, and encouragement, which helped mould this project into a successful one. I also thank all the staff members of the Electronics and Communication Engineering department for their valuable support and generous advice. Thanks also to my friends and family members for their continuous support and enthusiastic help. Finally, I would like to thank the reviewers for their helpful comments.

References

[1] H. Schenker, Harmony. Chicago, IL: Univ. of Chicago Press, 1954.
[2] N. Ruwet and M. Everist, "Methods of analysis in musicology," Music Anal., vol. 6, no. 1/2, pp. 3–9, 11–36, Mar.–Jul. 1987.
[3] A. Ockelford, Repetition in Music: Theoretical and Metatheoretical Perspectives, Royal Musical Association Monographs, vol. 13. Farnham, U.K.: Ashgate, 2005.
[4] J. Foote, "Visualizing music and audio using self-similarity," in Proc. 7th ACM Int. Conf. Multimedia (Part 1), Orlando, FL, Oct. 30–Nov. 5, 1999, pp. 77–80.
[5] M. Cooper and J. Foote, "Automatic music summarization via similarity analysis," in Proc. 3rd Int. Conf. Music Inf. Retrieval, Paris, France, Oct. 13–17, 2002, pp. 81–85.
[6] A. Pikrakis, I. Antonopoulos, and S. Theodoridis, "Music meter and tempo tracking from raw polyphonic audio," in Proc. 9th Int. Conf. Music Inf. Retrieval, Barcelona, Spain, Oct. 10–14, 2008.
[7] G. Peeters, "Deriving musical structures from signal analysis of music, audio summary generation: Sequence and state approach," in Computer Music Modeling and Retrieval, U. Wiil, Ed., Lecture Notes in Computer Science, vol. 2771. Berlin/Heidelberg, Germany: Springer, 2004, pp. 169–185.
[8] J. Foote, "Automatic audio segmentation using a measure of audio novelty," in Proc. IEEE Int. Conf. Multimedia and Expo, New York, Jul. 30–Aug. 2, 2000, vol. 1, pp. 452–455.
[9] J. Foote and S. Uchihashi, "The beat spectrum: A new approach to rhythm analysis," in Proc. IEEE Int. Conf. Multimedia and Expo, Tokyo, Japan, Aug. 22–25, 2001, pp. 881–884.
[10] M. A. Bartsch, "To catch a chorus: Using chroma-based representations for audio thumbnailing," in Proc. IEEE Workshop Applicat. Signal Process. Audio Acoust., New Paltz, NY, Oct. 21–24, 2001, pp. 15–18.

[11] R. B. Dannenberg and M. Goto, "Music structure analysis from acoustic signals," in Handbook of Signal Processing in Acoustics, D. Havelock, S. Kuwano, and M. Vorländer, Eds. New York: Springer, 2009, pp. 305–331.
[12] J. Paulus, M. Müller, and A. Klapuri, "Audio-based music structure analysis," in Proc. 11th Int. Soc. Music Inf. Retrieval Conf., Utrecht, The Netherlands, Aug. 9–13, 2010, pp. 625–636.
[13] S. Vembu and S. Baumann, "Separation of vocals from polyphonic audio recordings," in Proc. 6th Int. Conf. Music Inf. Retrieval, London, U.K., Sep. 11–15, 2005, pp. 337–344.
[14] B. Raj, P. Smaragdis, M. Shashanka, and R. Singh, "Separating a foreground singer from background music," in Proc. Int. Symp. Frontiers of Res. Speech and Music, Mysore, India, May 8–9, 2007.
[15] A. Ozerov, P. Philippe, F. Bimbot, and R. Gribonval, "Adaptation of Bayesian models for single-channel source separation and its application to voice/music separation in popular songs," IEEE Trans. Audio, Speech, Lang. Process., vol. 15, no. 5, pp. 1564–1578, Jul. 2007.
[16] Y. Li and D. Wang, "Separation of singing voice from music accompaniment for monaural recordings," IEEE Trans. Audio, Speech, Lang. Process., vol. 15, no. 4, pp. 1475–1487, May 2007.
[17] K. Dressler, "An auditory streaming approach on melody extraction," in Proc. 7th Int. Conf. Music Inf. Retrieval (MIREX Eval.), Victoria, BC, Canada, Oct. 8–12, 2006.
[18] C.-L. Hsu and J.-S. R. Jang, "On the improvement of singing voice separation for monaural recordings using the MIR-1K dataset," IEEE Trans. Audio, Speech, Lang. Process., vol. 18, no. 2, pp. 310–319, Feb. 2010.
[19] A. de Cheveigné and H. Kawahara, "YIN, a fundamental frequency estimator for speech and music," J. Acoust. Soc. Amer., vol. 111, no. 4, pp. 1917–1930, Apr. 2002.
[20] A. Klapuri, "Multiple fundamental frequency estimation by summing harmonic amplitudes," in Proc. 7th Int. Conf. Music Inf. Retrieval, Victoria, BC, Canada, Oct. 8–12, 2006, pp. 216–221.

Author Profiles

Mr. Raju Aengala is an M.Tech candidate in Electronics and Communication Engineering at Vardhaman College of Engineering, affiliated to JNTUH. He received his B.Tech degree from Swami Vivekananda Institute of Technology, Secunderabad. His current research interests include audio analysis, digital signal processing, and speech synthesis.

Mrs. D. Nagajyothi obtained her B.Tech degree in Electronics and Communication Engineering from Nagarjuna University, Guntur, in 1999, and her M.Tech degree from Osmania University, Hyderabad, in 2006. She is pursuing a PhD at ANU College of Engineering and Technology, Guntur. At present she is working as an Associate Professor in the Department of Electronics and Communication Engineering at Vardhaman College of Engineering, Shamshabad, Telangana, India. She is actively involved in research and guides projects in the area of speech and signal processing. She has published several papers in international conferences and journals. She is a member of IACSIT, SAISE, ISTE, UACEE, IAENG, and IETE.

Dr. P. Siddaiah is Dean and Head of the Department of Electronics and Communication Engineering at ANU College of Engineering and Technology, and is actively involved in research and in guiding students in the areas of antennas and speech and signal processing.