Music genre classification using a hierarchical long short term memory (LSTM) model


Chun Pui Tang, Ka Long Chui, Ying Kin Yu, Zhiliang Zeng, Kin Hong Wong, "Music Genre Classification using a Hierarchical Long Short Term Memory (LSTM) Model", International Workshop on Pattern Recognition (IWPR 2018), University of Jinan, Jinan, China, May 26-28, 2018.

Chun Pui Tang, Ka Long Chui, Ying Kin Yu, Zhiliang Zeng, and Kin Hong Wong
Department of Computer Science and Engineering, The Chinese University of Hong Kong

ABSTRACT

This paper examines the application of the Long Short Term Memory (LSTM) model to music genre classification. We explore two different approaches. (1) In the first method, we use a single LSTM to directly classify six different genres of music. The method is implemented, and the results are shown and discussed. (2) The first approach is only effective for six or fewer genres, so in the second approach we adopt a hierarchical divide-and-conquer strategy to achieve 10-genre classification. In this approach, music is first classified into strong and mild classes. The strong class includes hip-hop, metal, pop, rock and reggae, because these genres usually have heavier, stronger beats. The mild class includes jazz, disco, country, classic and blues, because they tend to be musically softer. We further divide these classes into sub-classes to help with the classification. Firstly, we classify an input piece as strong or mild. Then, within each class, we classify further until one of the ten final genres is identified. In the implementation, each classification module is a LSTM. Our hierarchical divide-and-conquer scheme is built and tested. Its average accuracy for 10-genre classification is 50.00%, which is higher than a state-of-the-art approach that uses a single convolutional neural network. Our experimental results show that this hierarchical scheme improves the classification accuracy significantly.

Keywords: Computer Music, LSTM, Music Genre Classification

1. INTRODUCTION

Nowadays, machine learning is widely applied to many different fields, for example healthcare, marketing, security and information retrieval. The artificial neural network is one of the most effective techniques for solving classification and prediction problems. In this project, we apply an artificial neural network to music genre classification. Our target is to classify music into different genres, for example 6 to 10 of them. Such an algorithm is very useful to users searching for their favorite music pieces. Applications of machine learning to music classification are less common than to image classification. Feng et al. 1 created a deep learning model that can identify music from at most 4 different genres in a dataset. The method is also described in a journal paper. 2 In this project, we make use of the Long Short-Term Memory (LSTM) model instead of a CNN for the music genre classification problem. We are able to train a model that can classify music from 6 to 8 different genres. Furthermore, we adopt a divide-and-conquer scheme to further improve the accuracy. Firstly, we divide the music into two classes, namely the strong and the mild class. A LSTM classifier is trained to categorize music into these two classes. Then the music is further classified into a number of subclasses. Our experimental results show that this hierarchical scheme improves the classification accuracy.
Our paper is organized as follows. In Section 2, we introduce the background of our work. In Section 3, we discuss the theory used. In Section 4, we describe the implementation details and the classification results of our LSTM approaches. The discussion and conclusion are found in Sections 5 and 6, respectively.

2. BACKGROUND

Research related to music is interesting and has many commercial applications. Machine learning can provide elegant solutions to some problems in music signal processing, such as beat detection, music emotion recognition and chord recognition. In 2013, Van den Oord et al. 3 published a survey on deep learning in music.

* khwong@cse.cuhk.edu.hk. This work is supported by a direct grant (Project Code: ) from the Faculty of Engineering of the Chinese University of Hong Kong.

Table 1. Results of the system described in 1.
  Number of genres                          Testing set accuracy
  2 genres (Classic, Metal)                 98.15%
  3 genres (Classic, Metal, Blues)          69.16%
  4 genres (Classic, Metal, Blues, Disco)   51.88%

Figure 1. An overview of the music genre classification process.

Moreover, Arora et al. 4 developed a project on environmental sound classification. Also, a music composition robot called BachBot was created by Feynman Liang. 5 It uses a LSTM to create J.S. Bach style music pieces. In this paper, we are interested in applying machine learning to music genre classification. Feng et al. 1 devised an algorithm to classify music into 2 to 4 genres. Their results are summarized in Table 1. From the table, we can see that the accuracy of classifying 2 genres is 98.15%. However, when the number of genres is increased from 3 to 4, the accuracy drops by about 17 percentage points. Another work that tackles the same problem was proposed by Matan Lachmish. 6 That approach uses a convolutional neural network model and achieved an accuracy of 46.87% in classifying music into 10 different genres. In this paper, we use the LSTM model to solve the music classification problem. To the best of our knowledge, we are among the first groups to solve this problem using a LSTM.

3. THEORY

Figure 1 illustrates the overall structure of our project. We use the GTZAN 7 music dataset to train our system. We apply the Librosa library 8 to extract audio features, i.e. the Mel-frequency cepstral coefficients (MFCC), from the raw data. The extracted features are input to the Long Short-Term Memory (LSTM) neural network model for training. Our LSTMs are built with Keras 9 and Tensorflow. 10

3.1 Mel frequency cepstral coefficients (MFCC)

MFCC features are commonly used for speech recognition, music genre classification and audio signal similarity measurement. The computation of MFCCs has already been discussed in various papers. 11 We focus on how to apply the MFCC data in our application. In practice, we use the Librosa library to extract the MFCCs from the audio tracks.
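As an illustration (this sketch is ours, not from the paper; the file path, sample rate and window length are assumptions), 13 MFCCs per frame can be extracted from one GTZAN track with Librosa as follows:

```python
import librosa

# Load one 30-second GTZAN track (path is hypothetical).
y, sr = librosa.load("gtzan/jazz/jazz.00001.au", sr=22050)

# 13 MFCCs per frame; a 25 ms window is roughly 551 samples at 22050 Hz.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, n_fft=551, hop_length=512)

# Librosa returns (n_mfcc, n_frames); an LSTM expects (time, features).
features = mfcc.T
print(features.shape)  # roughly (1293, 13) for a 30 s clip, as in Section 4.2
```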

3.2 The Long Short Term Memory Network (LSTM)

Approaches such as the Convolutional Neural Network (CNN) and the Recurrent Neural Network (RNN) are popular machine learning frameworks. The LSTM network used in this project is a subclass of RNN. A RNN differs from traditional neural networks in that it can memorize past data and make predictions with the help of the information stored in its memory. Moreover, the LSTM solves the long-term dependency problem of RNNs: although a RNN can make use of past information to predict the current state, it may fail to link up the information when the gap between the past information and the current state is too large. The details of long-term dependencies have been discussed in a tutorial. 12

Figure 2. A typical LSTM model contains four interacting layers. 12
Figure 3. The LSTM network used in our music genre classification problem.

Figure 2 reveals the structure of a typical LSTM model. Figure 3 shows the configuration of our LSTM network. The network has 4 layers. A LSTM can be formulated mathematically as follows:

$$
\begin{aligned}
u_t &= \tanh(W_{xu} x_t + W_{hu} h_{t-1} + b_u) && \text{(update equation)}\\
i_t &= \sigma(W_{xi} x_t + W_{hi} h_{t-1} + b_i) && \text{(input gate equation)}\\
f_t &= \sigma(W_{xf} x_t + W_{hf} h_{t-1} + b_f) && \text{(forget gate equation)}\\
o_t &= \sigma(W_{xo} x_t + W_{ho} h_{t-1} + b_o) && \text{(output gate equation)}\\
c_t &= i_t \odot u_t + f_t \odot c_{t-1} && \text{(cell state)}\\
h_t &= \tanh(c_t) \odot o_t && \text{(cell output)}\\
\mathrm{output\_class} &= \sigma(h_t W_{outpara}) && (1)
\end{aligned}
$$

where $W_{xu}, W_{xi}, W_{xf}, W_{xo}$, $W_{hu}, W_{hi}, W_{hf}, W_{ho}$ and $W_{outpara}$ are weight matrices, and $b_u, b_i, b_f, b_o$ are biases learned during training. $h_t$ is the output of a neuron at time $t$, and $\odot$ denotes pointwise multiplication. $\sigma(\cdot)$ denotes the sigmoid function and $\tanh(\cdot)$ the hyperbolic tangent. The input $x_t$ is the vector of MFCC parameters at time $t$, and output_class is the classification output.

4. IMPLEMENTATION AND EXPERIMENTAL RESULTS

4.1 Our dataset

We used the GTZAN dataset 13 that contains samples of ten music genres in our experiments. The genres are blues, classic, country, disco, hip-hop, jazz, metal, pop, reggae and rock. Each genre includes 100 soundtracks, each 30 seconds long, in .au format.

Table 2. The design of our LSTM network in experiment 1.
  Input Layer (I)     13 MFCC features as input
  Hidden Layer (II)   128 neurons
  Hidden Layer (III)  32 neurons
  Output Layer (IV)   6 outputs corresponding to 6 different genres of music
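The paper does not list its Keras code; a minimal sketch of the Table 2 architecture, assuming stacked LSTM layers, a softmax output layer and categorical cross-entropy loss (our choices, not stated in the paper), could look like this:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Input: 1293 time steps of 13 MFCC features (Table 2, layers I-IV).
model = Sequential([
    LSTM(128, return_sequences=True, input_shape=(1293, 13)),  # hidden layer II
    LSTM(32),                                                  # hidden layer III
    Dense(6, activation="softmax"),                            # one output per genre
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```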

Figure 4. Sample waveforms of different music genres.
Figure 5. Visualization of the Mel frequency cepstrum.

We randomly chose samples from the dataset for training and testing. Using a script written by Kamil Wojcicki, 14 we created the waveforms of the soundtracks and compared their similarity. Sample waveforms are shown in Figure 4. 30% of the data are used for testing and 70% for training; the testing and training sets do not overlap. Comparing the waveforms of the 10 genres, we found that blues is similar to jazz and country, and rock is similar to pop and reggae. So we decided to use music from classic, hip-hop, jazz, metal, pop and reggae as the six genres for training in our first experiment.

4.2 Preprocessing

Before we can use the data in the GTZAN dataset, we need to preprocess the signals so that they can be input to the Long Short Term Memory (LSTM) model. The MFCC is a good representation of music signals. It is one of the best indicators of the brightness of a sound and, in practice, is able to measure the timbre of music by the method discussed in the paper by Emery Schubert et al. 15 We used the Librosa library 8 to transform the raw data from GTZAN into MFCC features. In particular, we chose a frame size of 25 ms, so each 30-second soundtrack has 1293 frames. We use 13 MFCC features (C1 to C13) in experiment 1 and 14 MFCC features (C0 to C13) in experiment 2. Figure 5 shows some examples of Mel frequency cepstrum plots of the music signals in the database.

4.3 Experiment 1: LSTM for 6-genre classification

In this experiment, there are 420 audio tracks in the dataset for training, 120 for validation and 60 for testing. Each audio track lasts 30 seconds. We set the batch size, which defines the number of samples propagated through the network per training step, to 35 (see the training sketch at the end of this subsection). The accuracy and loss improve within the first 20 epochs; at epoch 20, the test accuracy reaches its maximum and the loss is minimized. We achieved a classification accuracy of around 0.5 to 0.6, so there is still room for improvement. With more training samples, we might be able to achieve an accuracy of 0.6 to 0.7. The major limitation is the small training set, which leads to low accuracy and overfitting. Although some genres, such as metal, are distinctive and easy to recognize, it is hard to classify other genres that are quite similar. As Figure 6 shows, some features overlap among different genres; for instance, we can observe that the data points of pop music overlap with those of other genres. This is reasonable because pop songs include features of other genres.
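Continuing the model sketch after Table 2, a hedged sketch of the training call with the settings above (batch size 35, 20 epochs); the arrays X_train, y_train, X_val, y_val, X_test and y_test are hypothetical names for the 420/120/60 track splits with one-hot genre labels:

```python
# Shapes assumed: X_train (420, 1293, 13), y_train (420, 6) one-hot,
# X_val (120, 1293, 13), y_val (120, 6); `model` is the sketch above.
history = model.fit(
    X_train, y_train,
    validation_data=(X_val, y_val),
    batch_size=35,   # samples propagated per update, as in Section 4.3
    epochs=20,       # accuracy peaks around epoch 20 in the paper's runs
)
test_loss, test_acc = model.evaluate(X_test, y_test)  # 60 held-out tracks
```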

Figure 6. Classification of music genres in the GTZAN dataset. 16

4.4 Experiment 2: The hierarchical approach for 10-genre classification

A divide-and-conquer scheme is employed. In our scheme, we trained 7 LSTMs and built a multi-step classifier from these 7 LSTM classifiers to achieve 10-genre classification. The division of samples into training and testing sets is the same as in experiment 1. The LSTM classifiers involved are listed below.

- LSTM1: classifies music into the strong (hip-hop, metal, pop, rock and reggae) and mild (jazz, disco, country, classic and blues) groups.
- LSTM2a: divides music into the Sub-strong1 (hip-hop, metal and rock) and Sub-strong2 (pop and reggae) classes. During training, only music samples of hip-hop, metal, rock, pop and reggae are involved.
- LSTM2b: categorizes music into the Sub-mild1 (disco and country) and Sub-mild2 (jazz, classic and blues) groups. We used samples only from disco, country, jazz, classic and blues for training.
- LSTM3a: classifies music into hip-hop, metal and rock. Only music from the hip-hop, metal and rock classes is involved.
- LSTM3b: differentiates pop music from reggae.
- LSTM3c: differentiates disco music from country.
- LSTM3d: recognizes jazz, classic and blues.

The proposed multi-step classifier combines the 7 LSTMs above. In the testing stage, the input music is first classified by LSTM1 as strong or mild. Then, according to the result, either LSTM2a or LSTM2b is applied. Finally, LSTM3a, 3b, 3c or 3d is used to classify the music into the target category according to the result obtained at the previous level; a code sketch of this routing is given below. Results of this experiment are shown in Table 3. Our approach achieved an accuracy of 50.00%, better than the state-of-the-art approach based on a convolutional neural network, which had an accuracy of 46.87%. 6 A diagram of the hierarchy of the LSTMs in our multi-step classifier is shown in Figure 7.
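The paper gives this routing only as a diagram (Figure 7); the sketch below is our reconstruction of that decision tree, assuming seven trained Keras models stored in a dict and a single preprocessed input x of shape (1, 1293, 13). The model names and class-index orderings are assumptions.

```python
import numpy as np

def classify_hierarchical(x, models):
    """Route one sample through the three-level LSTM tree of Figure 7.
    `models` maps 'lstm1', 'lstm2a', 'lstm2b', 'lstm3a'..'lstm3d' to
    trained classifiers; the 0/1 index orderings below are assumed."""
    def top(name):
        # Index of the highest-probability class predicted by one LSTM.
        return int(np.argmax(models[name].predict(x)))

    if top("lstm1") == 0:                        # strong branch
        if top("lstm2a") == 0:                   # sub-strong1
            return ["hiphop", "metal", "rock"][top("lstm3a")]
        return ["pop", "reggae"][top("lstm3b")]  # sub-strong2
    if top("lstm2b") == 0:                       # sub-mild1
        return ["disco", "country"][top("lstm3c")]
    return ["jazz", "classic", "blues"][top("lstm3d")]
```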

Figure 7. The hierarchy of the LSTMs in our multi-step classifier.

Table 3. Results of experiment 2: the accuracy of each LSTM component in the proposed multi-step classifier.
  LSTM classifier                              Accuracy   Epochs
  LSTM1 (strong, mild)                         80.0%      35
  LSTM2a (sub-strong1, sub-strong2)            81.6%      20
  LSTM2b (sub-mild1, sub-mild2)                81.6%      35
  LSTM3a (hiphop, metal, rock)                 74.6%      40
  LSTM3b (pop, reggae)                         88.0%      20
  LSTM3c (disco, country)                      78.0%      20
  LSTM3d (jazz, classic, blues)                84.0%      40
  Our multi-step classifier for all 10 genres  50.0%      N/A

5. CONCLUSION

In conclusion, the experimental results show that our multi-step classifier based on the Long Short-Term Memory (LSTM) model is effective in recognizing music genres. For 6-genre classification, the accuracy was 50-60% using a single LSTM. We also used a divide-and-conquer approach to classify 10 genres of music and achieved an accuracy of 50.00%, which was better than one of the state-of-the-art approaches, whose accuracy was 46.87%. 6

REFERENCES

[1] Tao Feng. Deep learning for music genre classification. University of Illinois. [Online]. engr.illinois.edu/ece544na/fa2014/tao_feng.pdf. Accessed: 16 April.
[2] Lin Feng, Sheng-lan Liu, and Jianing Yao. Music genre classification with paralleling recurrent convolutional neural network. CoRR, abs/ .
[3] Aaron van den Oord, Sander Dieleman, and Benjamin Schrauwen. Deep content-based music recommendation. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26. Curran Associates, Inc., 2013.
[4] Raman Arora and Robert A. Lutfi. An efficient code for environmental sound classification. The Journal of the Acoustical Society of America, 126(1):7-10.
[5] Feynman Liang. BachBot: Automatic composition in the style of Bach chorales. Master's thesis, University of Cambridge.
[6] Mlachmish. Music genre classification with CNN. MusicGenreClassification/blob/master/README.md. Accessed: 16 April.
[7] Bob L. Sturm. An analysis of the GTZAN music genre dataset. In Proceedings of the second international ACM workshop on Music information retrieval with user-centered and multimodal strategies. ACM, 2012.

[8] Brian McFee, Colin Raffel, Dawen Liang, Daniel P. W. Ellis, Matt McVicar, Eric Battenberg, and Oriol Nieto. librosa: Audio and music signal analysis in Python. In Proceedings of the 14th Python in Science Conference, pages 18-25, 2015.
[9] Francois Chollet. Deep Learning with Python. Manning Publications Co.
[10] Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. TensorFlow: A system for large-scale machine learning. In OSDI, volume 16, 2016.
[11] Md Sahidullah, Sandipan Chakroborty, and Goutam Saha. Improving performance of speaker identification systems using complementary information fusion. arXiv preprint arXiv: .
[12] Christopher Olah. Understanding LSTM networks. GitHub blog, posted August 27, 2015.
[13] GTZAN genre data set. Accessed: 16 April.
[14] Kamil Wojcicki. HTK MFCC MATLAB. MATLAB Central File Exchange.
[15] Emery Schubert, Joe Wolfe, and Alex Tarnopolsky. Spectral centroid and timbre in complex, multiple instrumental textures. In Proceedings of the International Conference on Music Perception and Cognition, Northwestern University, Illinois.
[16] Arthur Flexer. Improving visualization of high-dimensional music similarity spaces. In ISMIR, 2015.
