Music Radar: A Web-based Query by Humming System

Lianjie Cao, Peng Hao, Chunmeng Zhou
Computer Science Department, Purdue University, 305 N. University Street, West Lafayette, IN
{cao62, pengh, zhoucm}@purdue.edu

Abstract. Query by humming (QBH) means searching for a piece of music by singing or humming it. Given a melody hummed by the user, a query-by-humming system returns a list of songs ordered by their similarity to the hummed query. Although many retrieval techniques for query by humming already exist, our project builds a web-based query-by-humming system that can find a piece of music in a digital music repository from a few hummed notes, using a melody representation obtained through pitch tracking. We evaluate the performance of our system on a public corpus provided by MIREX. The exact-match accuracy is 43.44%; if the criterion is relaxed to a top-10 ranking, the accuracy increases to 75.63%.

1 Introduction

Our project is to build a query-by-humming (QBH) system that can find a piece of music in a digital music repository from a few hummed notes. Even when the user does not know the title or any other textual information about the music, he or she can still search for it by humming the melody. Query by humming is therefore a much friendlier interface for music search on the Internet than existing text-based systems. The system we built is web-based.

1.1 QBH System

Besides this practical value, the query-by-humming system is also an interesting topic from a scientific point of view. Identifying a musical work from a melodic fragment is a task that most people can accomplish with relative ease. However, how people achieve this is still unclear: how do people extract a melody from a complex piece of music and convert it into a representation that can be memorized and retrieved easily and accurately, with tolerance for some transposition? Although this question as a whole is beyond the scope of our project, we build a system that performs like a human: it can extract melodies from music; it can convert the melodies into an efficient representation and store them in its memory; and when a user asks for a piece of music by humming its melody, it first hears the query and then searches its memory for the piece it considers most similar to the query.

The main features of this system are:

- A melody representation that combines both pitch and rhythmic information.
- New approximate melody matching algorithms based on this representation.
- A set of automatic transcription techniques customized for the query-by-humming system to obtain both pitch and rhythmic information.
- A handy tool to build a melody database from MIDI files.
- A deliverable query-by-humming system including both the server application and the browser application.

1.2 MIDI File Format

The MIDI file is a format used to store MIDI data (plus some other kinds of data typically needed by a sequencer). It stores the standard MIDI messages (i.e., status bytes with their data bytes) plus a time stamp for each message (i.e., a series of bytes that represent how many clock pulses to wait before "playing" the event). The format also allows saving information about tempo, time and key signatures, the names of tracks and patterns, and other information typically needed by a sequencer. One MIDI file can store information for numerous patterns and tracks, so any sequencer can preserve these structures when loading the file.

1.3 Related Work

Other researchers are also investigating the use of pitch tracking and dynamic programming matching [5] for music retrieval. Brown [7] presented a way of using autocorrelation to determine the meter of musical scores. Chai [8] attempted to extract several perceptual features from MIDI files. Most research on existing query-by-humming systems uses pitch contours to match similar melodies [9, 10, 11, 12].

2 System Architecture

Figure 1 gives an overview of the system architecture.

Figure 1: System architecture. The browser side (JavaScript) performs note segmentation, pitch tracking, query construction, and report generation; the server side (PHP) extracts melodies from MIDI files, builds the melody representation, runs the DP match against the query request, and returns a JSON object.

2.1 Browser Side Architecture

Note Segmentation: The purpose of note segmentation is to identify each note's onset and offset boundaries within the acoustic signal. To allow segmentation based on the signal's amplitude, we ask the user to sing with syllables such as "da", so that notes are separated by the short drop in amplitude caused by the stop consonant.

Pitch Tracking: This component estimates the pitch, or fundamental frequency, of a periodic or nearly periodic signal, usually a digital recording of speech or of a musical note or tone. This can be done in the time domain, in the frequency domain, or in both.

Query Construction: The note information is converted to a string representation and sent to the server as an HTTP GET request (a minimal sketch follows at the end of this subsection).

Report Generator: Parses the JSON object returned by the server and generates a ranked-list report for the user.
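As an illustration of the query-construction step, here is a minimal sketch written in Python rather than the browser's JavaScript; the endpoint path and parameter name are hypothetical placeholders, since the exact request format is not part of this description.

```python
# Minimal sketch of query construction: encode the contour as a string and
# issue an HTTP GET request. The endpoint (/qbh/match) and the parameter
# name (contour) are hypothetical placeholders.
import requests

def send_query(contour, server="http://localhost/qbh/match"):
    """Send the 3-level pitch contour and return the server's JSON rank list."""
    query = ",".join(str(c) for c in contour)            # e.g. "-2,-2,2,2"
    response = requests.get(server, params={"contour": query})
    return response.json()                               # assumed: ranked list of songs
```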

Figure 2: Web-based QBH user interface.

2.2 Server Side Architecture

Music Database: This holds the source data. The source data are the original music corpora from which we extract melodies and generate the data representation. The source data are currently MIDI files (but the system can be extended to handle other music formats).

Melody Extractor: This extracts the melody information from a MIDI file and converts it to a string sequence representing the note information (a minimal sketch follows at the end of this subsection).

DB Matcher: This receives the query from the browser side, uses dynamic programming to match it against the melodies in the melody description objects, and returns a ranked list of matching songs to the user as a JSON object.
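A minimal sketch of the melody extractor, reading note-on events with the mido library (our choice for this sketch; the paper does not name its MIDI parser). A full extractor would additionally select the melody track of a multi-track file before collecting notes.

```python
import mido  # third-party MIDI parser, used only for this sketch

def extract_note_numbers(path):
    """Return the MIDI note numbers of all note-on events, in playback order."""
    notes = []
    for msg in mido.MidiFile(path):          # messages merged across tracks
        if msg.type == "note_on" and msg.velocity > 0:
            notes.append(msg.note)
    return notes
```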

3 Implementation

3.1 Note Segmentation

The purpose of note segmentation is to identify each note in the acoustic signal in order to help pitch tracking. A note is a pitched sound, an atomic element of most Western music. After identifying a note, we can easily compute its pitch information, because each note should have a relatively constant frequency. Note segmentation also filters out most of the unvoiced periods.

Figure 3: Waveform.

We can identify the onset and offset boundaries of a note from the amplitude of the sound. Assuming people hum the song with syllables such as "da", or sing the song with words, there is usually a short drop in amplitude between every two notes. First, we convert the waveform into an amplitude sequence by computing its spectrogram, using a window length of 512 samples with an overlap of 256 samples at a 44100 Hz sample rate. By summing the absolute values of the spectrum in each window, we obtain a sequence of amplitudes A.
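A minimal sketch of this amplitude computation, assuming the recording is a NumPy array of samples; the Hann window is our assumption, since the text does not name a window function.

```python
import numpy as np

def amplitude_envelope(x, win=512, hop=256):
    """Amplitude sequence A: the summed magnitude spectrum of each window."""
    window = np.hanning(win)                 # analysis window (assumed)
    return np.array([np.abs(np.fft.rfft(x[s:s + win] * window)).sum()
                     for s in range(0, len(x) - win + 1, hop)])
```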

Figure 4: Spectrogram.

Second, we identify the onset and offset boundaries of each note based on the amplitude. The basic idea is to set a threshold; the intersections of the amplitude curve with the threshold are the boundaries. Using a fixed threshold for the whole query leads to poor segmentation, so we use dynamic thresholds. We first define a global threshold a_global = 0.3 * mean(A). Second, we divide the amplitude sequence into frames of length 80 ms and define the local threshold of the i-th frame F_i as a_i = max(a_global, 0.7 * mean(F_i)). Then we scan the amplitude sequence A. If no note is currently open and the current amplitude exceeds the local threshold of the current frame, we set the current position as the onset boundary of a new note. If the onset boundary of the current note is at least 100 ms away from the current position and the current amplitude drops below the local threshold of the current frame, we set the current position as the offset boundary of the current note.
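A minimal sketch of this dynamic-threshold segmentation, interpreting the global and local thresholds as fractions of the mean amplitude (0.3 of the overall mean, 0.7 of the 80 ms frame mean) as described above; indices refer to positions in the amplitude sequence A.

```python
import numpy as np

def segment_notes(A, hop_sec=256 / 44100, frame_sec=0.080, min_note_sec=0.100):
    """Return (onset, offset) index pairs into the amplitude sequence A."""
    per_frame = max(1, int(round(frame_sec / hop_sec)))   # amplitude values per 80 ms frame
    min_len = int(round(min_note_sec / hop_sec))          # minimum note length (100 ms)
    a_global = 0.3 * A.mean()                             # global threshold

    notes, onset = [], None
    for i, a in enumerate(A):
        f = i // per_frame
        frame = A[f * per_frame:(f + 1) * per_frame]
        a_local = max(a_global, 0.7 * frame.mean())       # local threshold of current frame
        if onset is None and a > a_local:
            onset = i                                     # onset boundary of a new note
        elif onset is not None and i - onset >= min_len and a < a_local:
            notes.append((onset, i))                      # offset boundary of the note
            onset = None
    return notes
```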

Figure 5: Amplitude and notes.

The global and local thresholds and the minimum duration of a note are chosen heuristically. Occasionally a note may be added or dropped by our algorithm; we discuss these effects in the evaluation section.

3.2 Pitch Tracking

The primary goal of pitch tracking is to find the fundamental frequency of each note. Pitch is a subjective perception based on the frequency of the acoustic signal. The fundamental frequency, f0, is the lowest frequency of a periodic waveform x, and we use it as a proxy for pitch. The algorithm we use for pitch tracking is autocorrelation, the most popular time-domain method, which correlates the signal with a delayed copy of itself. First, we divide the waveform into windows of length 30 ms with an overlap of 20 ms. For each window, we compute the non-normalized autocorrelation up to the lag corresponding to 50 Hz (882 samples at a 44100 Hz sample rate):

r_N(d) = Σ_{n=1}^{N−d} x(n) x(n+d),

where N is the window size, d is the positive lag, and x(n) is the value of the n-th sample in the window. The fundamental frequency is selected from the lag at which the autocorrelation is maximal within 50 Hz to 1000 Hz, the frequency range of the human voice.
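A minimal sketch of the autocorrelation pitch estimator for a single analysis window; in the full system it would be applied to successive 30 ms windows with a 10 ms hop (20 ms overlap).

```python
import numpy as np

def estimate_f0(window, fs=44100, fmin=50.0, fmax=1000.0):
    """Estimate the fundamental frequency (Hz) of one analysis window."""
    n = len(window)
    r = np.correlate(window, window, mode="full")[n - 1:]   # r[d] for lags d >= 0
    d_min = int(fs / fmax)                 # smallest lag (1000 Hz)
    d_max = min(int(fs / fmin), n - 1)     # largest lag (50 Hz, 882 samples at 44100 Hz)
    d_best = d_min + int(np.argmax(r[d_min:d_max + 1]))
    return fs / d_best
```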

In the second step, we convert the fundamental frequency into a pitch, combined with the information from note segmentation. We first round the fundamental frequency to the note number used in MIDI files according to the equation

m = round(69 + 12 · log2(f0 / 440)),

where each number represents a semitone. We then take the mode of the note numbers within a note period as the pitch of that note.

Figure 6: Note number and pitch.

3.3 Melody Matching

We compute the minimum edit distance of pitch contours for melody matching. The underlying principle is that most people are good at capturing the relative changes in a tune rather than its exact pitches. We first use a pitch contour to represent the pitch information obtained either from the pitch tracking algorithm or from MIDI files. Second, we use dynamic programming to compute the minimum edit distance between two pitch contours: the smaller the distance, the more similar the two pieces of music.

The pitch contour is a sequence of relative changes in pitch. In our method only 3-level contour information is used, i.e., we use 0, ±1, and ±2 to represent the changes. The pitch contour is computed for every note except the first, as follows. If a note has the same note number as the previous note, we represent it by 0. If a note is exactly one semitone higher (lower) than the previous note, we represent it by +1 (−1). If a note is more than one semitone higher (lower) than the previous note, we represent it by +2 (−2). For example, the note-number sequence 53, 51, 45, 47, 50 is represented as −2, −2, +2, +2.
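A minimal sketch of the frequency-to-note-number conversion and the 3-level pitch contour described above:

```python
import math

def freq_to_note_number(f0):
    """MIDI note number: m = round(69 + 12 * log2(f0 / 440))."""
    return int(round(69 + 12 * math.log2(f0 / 440.0)))

def pitch_contour(notes):
    """3-level contour: 0 for a repeated note, +/-1 for a one-semitone step, +/-2 otherwise."""
    contour = []
    for prev, cur in zip(notes, notes[1:]):
        diff = cur - prev
        if diff == 0:
            contour.append(0)
        elif abs(diff) == 1:
            contour.append(1 if diff > 0 else -1)
        else:
            contour.append(2 if diff > 0 else -2)
    return contour

# Example from the text: 53, 51, 45, 47, 50  ->  -2, -2, +2, +2
assert pitch_contour([53, 51, 45, 47, 50]) == [-2, -2, 2, 2]
```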

The second step is to compute the edit distance between two pitch contours. We define the pitch contour from the query to be the pattern P, and the pitch contour from one MIDI file to be the target T. Let D_{i,j} be the minimum edit distance for matching the first i numbers of P against the first j numbers of T. It can be computed recursively as

D_{i,j} = min( D_{i−1,j} + cost_insert(P_i), D_{i,j−1} + cost_delete(T_j), D_{i−1,j−1} + cost_replace(T_j, P_i) ).

By heuristic, we define the costs to be

cost_insert(P_i) = |P_i| + 1,
cost_delete(T_j) = |T_j| + 1,
cost_replace(T_j, P_i) = |P_i − T_j|.

Because the pattern usually matches only a part of the target, we allow inserting numbers at the beginning and deleting numbers at the end at no cost. Therefore we define the initial condition as

D_{i,j} = 0 if i = 0, and D_{i,j} = i if j = 0,

and we modify the update rule for i = |P| to

D_{i,j} = min( D_{i−1,j} + cost_insert(P_i), D_{i,j−1}, D_{i−1,j−1} + cost_replace(T_j, P_i) ).
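A minimal sketch of this matcher; each song in the database would be scored with contour_distance(query_contour, target_contour) and the songs ranked by increasing distance.

```python
def contour_distance(P, T):
    """Minimum edit distance between the query contour P and a target contour T."""
    n, m = len(P), len(T)
    # D[i][j]: cost of matching the first i numbers of P against the first j of T
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = i                       # initial condition D(i,0) = i
    # D[0][j] stays 0: skipping a prefix of the target is free
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost_insert = abs(P[i - 1]) + 1
            cost_delete = 0 if i == n else abs(T[j - 1]) + 1   # free deletions after the pattern ends
            cost_replace = abs(P[i - 1] - T[j - 1])
            D[i][j] = min(D[i - 1][j] + cost_insert,
                          D[i][j - 1] + cost_delete,
                          D[i - 1][j - 1] + cost_replace)
    return D[n][m]
```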

4 Evaluation

Our system is evaluated on a public corpus, MIR-QBSH, from the Music Information Retrieval Evaluation eXchange (MIREX) campaign. The corpus contains 48 MIDI files as ground truth and 4431 queries recorded from 2003 to 2009 by 195 different people. Unlike general information retrieval systems, the number of relevant retrieved items for a QBH query is either 0 or 1, and our system returns a ranking of the MIDI files in the database by their edit distance to the query, so common measures such as precision and recall are not appropriate for evaluating a QBH system. We therefore choose a rank-based measure, Top-k-Accuracy, to present our experimental results. We define a query to be successful if the relevant MIDI file is within the top k items of the returned rank list:

Top-k-Accuracy = n_k / N_q,

where n_k is the number of successful queries for the given k and N_q is the total number of queries. Top-1-Accuracy is thus the proportion of exact matches, and a Top-k-Accuracy closer to 1 indicates a better result (a small computational sketch is given at the end of this section). We also define the hardness of songs and the proficiency of singers:

Song hardness_j = (1 / N_j) Σ_i 1 / r_{ij},
Singer proficiency_i = (1 / M_i) Σ_j 1 / r_{ij},

where r_{ij} is the rank returned for the i-th singer's query on the j-th song, N_j is the number of singers that have a query on the j-th song, and M_i is the number of songs queried by the i-th singer. A smaller hardness means the song is more difficult to perform, while a larger proficiency suggests the singer is more proficient.

Table 1 reports the Top-k-Accuracy for different values of k.

Table 1: Top-k-Accuracy for different values of k (Top-1 = 43.44%, Top-10 = 75.63%).

Figures 7 and 8 show the song hardness and singer proficiency over the corpus we used. From our observation of the results, we found several factors that affect the matching results:

- Out-of-tune singing. If the singer cannot follow the changes of the tune, our system can hardly produce a good match.
- Low voice quality, especially when the singer's voice is overwhelmed by background noise.
- Short queries. The system needs a sufficiently long query to match the unique note sequence of a song, but the exact minimum duration depends largely on the song and the quality of the query; the average query duration is around 6~8 seconds.
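A minimal sketch of the Top-k-Accuracy measure defined above, assuming that for every query we already know the rank of its ground-truth MIDI file in the returned list (rank 1 is an exact match).

```python
def top_k_accuracy(ranks, k):
    """Fraction of queries whose ground-truth song appears within the top k."""
    return sum(1 for r in ranks if r <= k) / len(ranks)

# top_k_accuracy(ranks, 1) gives the exact-match proportion,
# top_k_accuracy(ranks, 10) the relaxed top-10 accuracy.
```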

Figure 7: Hardness of all songs in the corpus.

Figure 8: Proficiency of all singers.

5 Conclusion

In this project, we built a web-based query-by-humming system using pitch tracking and dynamic-programming melody matching. For future work, we need to test our system on different classes of music (e.g., pop music, country music). We can also try other matching methods, such as DTW (Dynamic Time Warping) and HMM (Hidden Markov Model), and compare their performance.

References

1. A. Ghias, J. Logan, D. Chamberlin, and B. C. Smith. Query by humming: musical information retrieval in an audio database. In ACM Multimedia, 1995.
2. Jyh-Shing Roger Jang. MIR corpora.
3. MATLAB and MIDI.
4. Pitch detection.
5. Wei Chai. Melody Retrieval on the Web. Master's thesis, M.I.T. Media Laboratory, Massachusetts Institute of Technology.
6. MIDI.
7. Judith C. Brown. Determination of the meter of musical scores by autocorrelation. J. Acoust. Soc. Am., 94(4), Oct.
8. Wei Chai and Barry Vercoe. Using user models in music information retrieval systems. In Proc. International Symposium on Music Information Retrieval, Oct.
9. A. Ghias, J. Logan, D. Chamberlin, and B. C. Smith. Query by humming: musical information retrieval in an audio database. In ACM Multimedia 1995.
10. D. Q. Goldin and P. C. Kanellakis. On similarity queries for time-series data: constraint specification and implementation. In Proceedings of the 1st International Conference on Principles and Practice of Constraint Programming (CP'95).
11. J. M. Hellerstein, J. F. Naughton, and A. Pfeffer. Generalized search trees for database systems. In U. Dayal, P. M. D. Gray, and S. Nishio, editors, Proc. 21st Int. Conf. Very Large Data Bases (VLDB). Morgan Kaufmann.
12. J.-S. R. Jang and H.-R. Lee. Hierarchical filtering method for content-based music retrieval via acoustic input. In Proceedings of the Ninth ACM International Conference on Multimedia. ACM Press, 2001.
