Multimodal Music Mood Classification Framework for Christian Kokborok Music
Journal of Engineering Technology (ISSN ), Volume 8, Issue 1, Jan. 2019

Multimodal Music Mood Classification Framework for Christian Kokborok Music

Sanchali Das 1*, Sambit Satpathy 2, Swapan Debbarma
Department of Computer Science and Engineering, National Institute of Technology, Agartala, India.
*Corresponding Author

Abstract: This article describes an application of music information retrieval (MIR) integrated with natural language processing for a poorly resourced language: music mood classification of Kokborok music, a North-eastern regional language. Kokborok is widely spoken in the North East (NE) states of India and in other countries such as Nepal, Bhutan, Myanmar, and Bangladesh. The songs selected are specifically Christian Kokborok songs; Christianity is deeply tied to the Bible, which is written in the recognised Romanized script accepted worldwide. We develop a multimodal corpus of audio and lyrics for Kokborok songs, perform coarse-grained annotation to create a mood-annotated dataset, and then carry out the classification task on audio and lyrics separately. We propose a mood taxonomy for Christian Kokborok songs and build a mood-annotated corpus with that taxonomy. Initially, we used 48 parameters for audio classification and six text stylistic features for lyrics-based classification. An SVM classifier with a linear kernel is used for classification. Finally, a mood classification system for Kokborok songs was developed, consisting of three systems based on audio, lyrics, and multimodal (audio and lyrics together) data. We also compared different classifiers on the three systems. We achieved 95% accuracy for the audio system, 97% for the lyrics system, and about 96% for the multimodal system.
Keywords: Kokborok Christian Song, Multimodal Mood Classification, Music Information Retrieval, Natural Language Processing, Weka.

1. Introduction

The present work concerns an MIR research application combined with natural language processing techniques [1, 7-11, 14, 15, 17]. We chose music mood classification as the MIR application. We created a dataset of 300 Christian Kokborok songs along with their corresponding lyrics, and then designed a suitable mood taxonomy for the database. To the best of our knowledge, no mood-annotated dataset for Kokborok is available, so annotation was done manually to create a mood-annotated dataset, which is used as the ground-truth set for the classification task. We then performed mood classification on the audio files and the lyrics databases separately, and also together (multimodal classification). Most researchers have worked on audio and lyrics classification for Western music and have explored the differences between Hindi and English songs [10, 16, 17], and some researchers have used Indian languages such as Hindi for the mood classification task [7-11]. Decidedly less work has been done on regional languages, for example classical music mood classification [14, 15, 22]. It has been seen that Western languages and a few specific languages dominate the MIR field,
whereas poorly resourced languages and dialects are deprived, so we have tried to do foundational work that can be extended and that helps researchers of the Kokborok community. We chose Kokborok, a regional language widely spoken in the north-eastern states of India and also in Bangladesh, Myanmar, Nepal, and Bhutan. The New Zealand Baptist community carried out intensive missionary work on Christianity among the Kokborok people of Tripura in the era of 1932 to 1988, and over about 50 years the Christian community spread among the Kokborok people of Tripura. As of 2015, there are 840, and the total number of Kokborok Christian members in Tripura is more than 98,000 [26, 27]. As researchers of Tripura, we have initiated research towards a less-resourced language like Kokborok, integrating natural language processing techniques and music information retrieval for this kind of under-resourced language. The next section describes related work in the MIR field; the third section presents the proposed work and mood taxonomy; section four covers feature selection for audio- and lyrics-based classification; section five describes classification results, evaluation, and comparison; and the conclusion and future work are described in the final sections.

2. Related Works

2.1 Data Set and Taxonomy

A mood taxonomy is the set of adjectives by which a dataset can best be represented. Several taxonomies are available, e.g. Russell's taxonomy (Figure 1), MIREX, and Hevner's taxonomy (Figure 2) [4, 18]. For Indian songs, Hevner's and Russell's taxonomies are found to fit better. Preparing a large dataset of songs with matching lyrics and audio files is essential for the mood classification task. A mood-annotated dataset is required to find the mood attached to every song considering both lyrics and audio. For Indian music information retrieval, very little work has been done, e.g. on Hindi [2, 3, 7-13].
The authors of [20] present an electronic user interface where music mood tagging is done automatically based on lyrics only.

Figure 1. Russell's taxonomy
2.2 Mood Classification Using Audio Features

MIREX [4] is a mood taxonomy and a yearly evaluation campaign for music information retrieval systems and algorithms, in which valence and arousal scores are calculated for music using several regression models [5, 13, 16]. The papers [7-11] used Russell's mood taxonomy in an audio-based classification framework for Hindi music and showed that spectral and timbre features are promising audio features; other significant features are rhythm, pitch, and intensity.

2.3 Mood Classification from Lyric Features

Several classification studies on Western music mood have been based on bag-of-words, sentiment lexicons (SentiWordNet), and text stylistic features [5]. For Hindi music, [3] combined three types of sentiment lexicons, stylistic features, and n-gram features for lyrics-based classification. In our work, only text stylistic features are used as the feature set, because no SentiWordNet is yet available for Kokborok; in future we will have to build one manually and use it as a feature set for the classification task.

Figure 2. Hevner's taxonomy

2.4 Multimodal Music Mood Classification

Some researchers have combined audio and lyrics features to obtain automatic multimodal mood classification models for Western music; as far as Indian music is concerned, multimodal classification has been done for only a few languages [7-11].

3. Proposed Work

3.1 Database Creation for Christian Kokborok Music

Our mood classification task targets the regional language Kokborok, and our dataset is confined to Christian Kokborok music. We gathered 300 audio songs with their corresponding lyrics, which
are from the Kokborok Christian community and related to the Holy Bible. The songs used in this experiment are 30-second clips, because surveys observe that the first 30 seconds of a song carry the most useful information. For computational purposes we removed all noise from the audio files.

Mood Annotated Dataset

For the mood classification task it is necessary to create a ground-truth set of data for audio as well as lyrics. To the best of our knowledge there is no mood-annotated dataset for Kokborok, so we had to annotate the files manually. The annotation was done by two annotators who know Kokborok. For the lyrics data, annotation was done in a coarse-grained manner by reading the lyrics only. For the audio data, annotation was based solely on the music, without considering the lyrics.

3.2 Taxonomy Generation

A mood taxonomy is used to express the feelings and emotions firmly attached to a song. To the best of our knowledge, no experiment has proposed a taxonomy for Kokborok Christian songs, so we adopted a subset of Hevner's adjective list. We observed initially that the adjectives in Hevner's list fell into categories that fit the database best, because songs of the same class have to lie close to each other, and songs of different clusters have to be distinct from each other, in the valence-arousal plane.

Table 1. Proposed mood taxonomy

Class    | Sub-classes
---------|----------------------------
Happy    | Cheerful, Merry, Joyous
Sad      | Mournful, Tragic, Pathetic
Calm     | Sacred, Solemn, Inspiring
Excited  | Excited, Dramatic, Aroused

4. Feature Selection

4.1 Feature Selection for Audio Classification

Feature extraction and selection is an essential task in building a mood classification system, as shown by the literature survey [1, 10, 11]. All features were extracted with the jAudio toolkit [19], which is publicly available for research purposes and has been used by many researchers [2, 5, 6, 7, 13].
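The 30-second clipping described above amounts to a simple slice of the sample array. The sketch below (the function name and the NumPy representation are our own assumptions, not the paper's pipeline) keeps only the first 30 seconds of a mono signal:

```python
import numpy as np

def first_30_seconds(samples: np.ndarray, sample_rate: int) -> np.ndarray:
    """Keep only the first 30 seconds of a mono audio signal.

    `samples` is a 1-D array of PCM samples; songs shorter than
    30 seconds are returned unchanged.
    """
    return samples[: 30 * sample_rate]

# Example with a synthetic 60-second silent signal at 22050 Hz.
sr = 22050
song = np.zeros(60 * sr)
clip = first_30_seconds(song, sr)
print(len(clip) / sr)  # 30.0
```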
Timbre: Distinctive timbre features have been employed by several researchers for music analysis. It is observed that the MFCC features of timbre are effective for music mood as well as genre classification. Spectral flux, spectral centroid, spectral shape, and spectral variability are essential characteristics for differentiating moods [2, 7, 11].

Intensity: Intensity is an essential feature in mood detection. We consider the overall average root mean square and the fraction of low energy, also used by [2, 3], when calculating the values of each feature.
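The paper extracted these descriptors with jAudio; as a rough NumPy illustration (not the jAudio implementation) of what two of them measure, root mean square captures intensity and the spectral centroid summarises timbre brightness:

```python
import numpy as np

def audio_features(x: np.ndarray, sr: int) -> dict:
    """Compute two of the descriptors discussed above with plain NumPy:
    RMS energy (intensity) and the spectral centroid (timbre)."""
    rms = np.sqrt(np.mean(x ** 2))                 # intensity: root mean square
    mag = np.abs(np.fft.rfft(x))                   # magnitude spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
    centroid = np.sum(freqs * mag) / np.sum(mag)   # magnitude-weighted mean frequency (Hz)
    return {"rms": rms, "centroid_hz": centroid}

# A pure 440 Hz tone of one second: RMS is about 0.707 and the
# centroid sits at the tone's frequency.
sr = 22050
t = np.arange(sr) / sr
feats = audio_features(np.sin(2 * np.pi * 440 * t), sr)
```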
Rhythm: Rhythm strength, rhythm regularity, and tempo are related to people's mood response. From the literature review it has been seen that rhythm is steady and balanced for happy music, whereas sad music is usually slow and does not have a distinctive rhythm pattern [7, 11].

4.2 Feature Selection for Lyrics Classification

Text stylistic (TS) features have been used effectively for mood classification from the lyrics of Western music [6]. Some TS features, i.e. the total number of unique words, repeated words, etc., were used by [2, 8, 11] for Hindi music. The TS features we considered in our experiments are shown in Table 3.

5. Classification Results and Evaluation

For classification, the support vector machine (SVM) classifier is used. Support vector machines and decision tree classifiers are widely used for music classification; many researchers use these classifiers, irrespective of language, with high accuracy rates [2, 8-11] for audio-based mood classification.

Table 2. Features used for audio classification

Feature class | Features used
--------------|--------------------------------------------------------------
Timbre        | Spectral rolloff, spectral variability, MFCCs, LPCs, peak-based spectral smoothness, spectral centroid
Intensity     | Root mean square, fraction of low energy
Rhythm        | Beat histogram, strongest beat, beat sum, strength of strongest beat, zero crossing

Table 3. Features used for lyrics classification

Feature name          | Feature description
----------------------|---------------------------------------------------
No. of words          | Total no. of words in a lyric
No. of unique words   | Total no. of unique words in a lyric
No. of repeated words | Total no. of words in a lyric whose frequency is greater than 1
No. of lines          | Total no. of lines in a lyric
No. of repeated lines | Total no. of repeated lines in a lyric
No. of unique lines   | Total no. of unique lines in a lyric

Weka is an open-source machine learning tool that can be used for classification tasks [1, 5, 6]. We incorporated SVM for mood classification. We also tried various other algorithms, but they did not give adequate results, so we chose LibSVM.
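The six TS features of Table 3 can be computed directly from a lyric string. The sketch below is our own minimal reading of the table ("repeated" is taken to mean items occurring more than once, counted as distinct items), and the lyric fragment is invented for illustration:

```python
from collections import Counter

def text_stylistic_features(lyric: str) -> dict:
    """Compute the six text stylistic (TS) features of Table 3.
    Counts of repeated words/lines are of distinct repeated items,
    one plausible reading of the table's descriptions."""
    words = lyric.split()
    lines = [ln.strip() for ln in lyric.splitlines() if ln.strip()]
    word_freq = Counter(words)
    line_freq = Counter(lines)
    return {
        "n_words": len(words),
        "n_unique_words": len(word_freq),
        "n_repeated_words": sum(1 for c in word_freq.values() if c > 1),
        "n_lines": len(lines),
        "n_repeated_lines": sum(1 for c in line_freq.values() if c > 1),
        "n_unique_lines": len(line_freq),
    }

# A toy (invented) lyric fragment: 10 words, 4 unique words, 3 lines.
feats = text_stylistic_features(
    "hallelujah praise the lord\npraise the lord\npraise the lord"
)
```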
In SVM, the polynomial and radial basis function kernels did not perform well, so we worked with linear SVM and developed three separate systems. We faced many difficulties while annotating the songs, because no previous surveys or other resources are available for this language, and moods change between annotating the audio and reading the lyrics. As mentioned, the songs belong to the Holy Bible, so many songs carry similar emotions, and confusion arises between classes such as calm and happy, and between subclasses such as sacred and sad.
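The linear-kernel SVM setup can be sketched with scikit-learn's `SVC` (the paper used LibSVM through Weka; scikit-learn also wraps libsvm, but the data below is a synthetic two-class stand-in, not the Kokborok feature vectors):

```python
import numpy as np
from sklearn.svm import SVC

# Toy stand-in for the real feature matrix: two moods (0 and 1)
# that are linearly separable along both synthetic features.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 0.5, (40, 2)),
               rng.normal(2.0, 0.5, (40, 2))])
y = np.array([0] * 40 + [1] * 40)

clf = SVC(kernel="linear")   # linear kernel, as chosen in the paper
clf.fit(X, y)
print(clf.score(X, y))       # near-perfect on this separable toy data
```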
That is why we limited our dataset to 300 selectively chosen songs. For Western and Hindi music, lyrics-based classification has achieved at most 80-90% and 50-75% respectively [2, 7-11]. To the best of our knowledge, no work has yet been carried out on Kokborok song classification, so we present a baseline mood classification system for audio, lyrics, and multimodal data. For lyrics classification, the lack of sentiment lexicons leads to a comparatively lower accuracy rate. One reason may be that the Bible songs show decidedly little variation (from the perspective of instruments and singers), and the majority are devotional songs dedicated to Lord Jesus Christ. It is also observed that the mood of a whole song may differ between annotators' perspectives. We initially classified audio with 48 parameters, but this did not work well for Kokborok music: some parameters had no impact on the classification result, and we obtained only a 49% accuracy rate with LibSVM. We therefore selected only the parameters that change significantly across classes. We observed that only MFCCs, spectral centroid, strongest beat, beat sum, peak-based spectral smoothness, and zero crossing change significantly, as the classification result is affected by those parameters only.

5.1 Classification System Evaluation in Weka 3.5

An enormous amount of mood-annotated data is necessary for a statistical model to give good results. Since this work is only beginning, the number of songs is small compared to Western and other Indian languages. The mood classification was performed using the LibSVM classifier with the feature set described above. We used the WEKA API to build our classification model. In Table 4(b) we can see the actual and predicted values for the audio classification system; the bold diagonal elements in each column represent the correctly predicted values.
So the accuracy of the system is calculated as the sum of the diagonal (286) divided by the total number of songs (300), times 100, i.e. (286/300) × 100 ≈ 95%. Similarly, Tables 5(b) and 6(b) show the confusion matrices of the lyrics-based and multimodal classification respectively, and Tables 4(a), 5(a), and 6(a) show the precision, recall, and F-measure of the audio, lyrics, and multimodal classification systems.

Table 4(a). Classification performance in Weka: precision, recall, and F-measure for the classes Calm, Excited, Happy, and Sad (values not preserved in this copy)

Table 4(b). Confusion matrix for the audio-based system, predicted vs. actual values over Calm, Excited, Happy, and Sad; average accuracy rate 95% (cell values not preserved in this copy)

5.2 Classification Based on Lyrics
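The accuracy computation above is the trace of the confusion matrix over its total. The matrix below is illustrative, not the paper's: its cell values are invented so that the diagonal sums to 286 out of 300, matching the reported audio-system accuracy:

```python
import numpy as np

# Hypothetical 4x4 confusion matrix (classes: calm, excited, happy, sad)
# whose diagonal sums to 286 of 300 songs; individual cells are invented.
cm = np.array([
    [72,  1,  1,  1],
    [ 2, 70,  2,  1],
    [ 1,  2, 71,  1],
    [ 1,  0,  1, 73],
])
accuracy = np.trace(cm) / cm.sum() * 100  # correct predictions / total songs
print(round(accuracy))  # 95
```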
Table 5(a). Classification system performance for lyrics: precision, recall, and F-measure for the classes Calm, Excited, Happy, and Sad (values not preserved in this copy)

Table 5(b). Confusion matrix for the lyrics-based system, predicted vs. actual values over Calm, Excited, Happy, and Sad; average accuracy rate 97% (cell values not preserved in this copy)

Table 6(a). Multimodal system performance: precision, recall, and F-measure for the classes Calm, Excited, Happy, and Sad (values not preserved in this copy)

Table 6(b). Confusion matrix for the multimodal system, predicted vs. actual values over Calm, Excited, Happy, and Sad; average accuracy rate 96% (cell values not preserved in this copy)

5.3 Comparison of Different Algorithms and System Performance

We used different classifiers to perform classification on the dataset for each of the three systems. From Table 3(b) and Figure 3(a) we can say that the support vector machine with a linear kernel and the decision tree classifier (the J48 algorithm) give, on average, similar and better results than the other algorithms.

Figure 3(a). Graphical representation of system performance with different algorithms
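The comparison in Section 5.3 can be mimicked with scikit-learn analogues of the Weka classifiers named above (LibSVM → `SVC`, J48 → `DecisionTreeClassifier`, Naïve Bayes → `GaussianNB`); the four-class data here is synthetic, standing in for the mood feature vectors:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic, well-separated 4-class data (stand-in for calm/excited/
# happy/sad feature vectors).
rng = np.random.default_rng(42)
centers = np.array([[0, 0], [4, 0], [0, 4], [4, 4]])
X = np.vstack([rng.normal(c, 0.6, (30, 2)) for c in centers])
y = np.repeat(np.arange(4), 30)

results = {}
for name, clf in [("SVM (linear)", SVC(kernel="linear")),
                  ("Decision tree", DecisionTreeClassifier(random_state=0)),
                  ("Naive Bayes", GaussianNB())]:
    results[name] = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean accuracy {results[name]:.2f}")
```

On real song features the gaps between classifiers would of course be far larger than on this easy synthetic data.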
Table 3(b). System performance with different algorithms: rows Audio, Lyrics, Multimodal; columns LibSVM, J48, Naïve Bayes, LibSVM polynomial kernel, SMO (values not preserved in this copy)

6. Conclusions

In this work, a multimodal mood-annotated database was developed for research into music mood classification of Kokborok, and three classification systems were designed from the multimodal dataset. The audio-based system gives an accuracy rate of 95%, the lyrics-based system gives 97%, and for the multimodal system we achieved a maximum F-measure of 0.97 with LibSVM (linear kernel). We observed mood variation while annotating the songs separately for the audio and lyrics datasets. Decidedly little variation is found among Kokborok Christian songs; reasons may include the use of the same instruments and the unavailability of many Kokborok singers. Since the audio features of a song also depend on instrumental variation, the classification accuracy rate is affected as well. We compared the performance of the three systems, and, even using different classifiers for each system, we can say that both the LibSVM and J48 classifiers performed better on this Christian Kokborok dataset.

7. Future Work

We will primarily pursue music mood classification applications for Kokborok Christian music. We will extend lyrics classification with various lyrical features, i.e. n-grams, bag-of-words, and sentiment lexicons. To the best of our knowledge there is no sentiment lexicon available for Kokborok, so we will develop a sentiment word dictionary for Kokborok and explore all other possible features for lyrics-based classification. For the multimodal system, we will carry out a more in-depth analysis from the readers' and listeners' points of view.

References

[1]. Tian, Y., Wu, Q., & Yue, P. (2018). A comparison study of classification algorithms on the dataset using WEKA tool. Journal of Engineering Technology, 6(2).

[2]. Patra, B.
G., Das, D., & Bandyopadhyay, S. Mood classification of Hindi songs based on lyrics. In Proceedings of the 12th International Conference on Natural Language Processing.
[3]. Patra, B. G., Das, D., & Bandyopadhyay, S. Retrieving similar lyrics for music recommendation system. In 14th International Conference on Natural Language Processing (ICON), pp. 48-52, December.

[4]. Downie, X. H. J. S., Cyril Laurier, and M. B. A. F. Ehmann. The 2007 MIREX audio mood classification task: Lessons learned. In Proc. 9th Int. Conf. Music Inf. Retrieval.

[5]. Joshi, A., Balamurali, R., and Bhattacharyya, P. A fall-back strategy for sentiment analysis in Hindi: a case study. In Proc. of the 8th International Conference on Natural Language Processing (ICON-2010).

[6]. Aniruddha M. Ujlambkar and Vahida Z. Attar. Mood classification of Indian popular music. In Proc. of the CUBE International Information Technology Conference, ACM.

[7]. Patra, B. G., Das, D., & Bandyopadhyay, S. Automatic music mood classification of Hindi songs. In Proc. of 3rd Workshop on Sentiment Analysis where AI meets Psychology, pp. 24-28, IJCNLP 2013a.

[8]. Patra, B. G., Das, D., & Bandyopadhyay, S. Multimodal mood classification framework for Hindi songs. Computación y Sistemas, vol. 20(3).

[9]. Patra, B. G., Das, D., & Bandyopadhyay, S. Unsupervised approach to Hindi music mood classification. In Mining Intelligence and Knowledge Exploration, pp. 62-69, Springer International Publishing, 2013b.

[10]. Patra, B. G., Das, D., & Bandyopadhyay, S. Multimodal mood classification - a case study of differences in Hindi and Western songs. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers.

[11]. Patra, B. G., Das, D., & Bandyopadhyay, S. Labeling data and developing a supervised framework for Hindi music mood analysis. Journal of Intelligent Information Systems, vol. 48(3).

[12]. Laurier, Cyril, Mohamed Sordo, Joan Serra, and Perfecto Herrera. Music mood representations from social tags. In Proc. of ISMIR.

[13]. Patra, B. G., Das, D., Maitra, P., & Bandyopadhyay, S.
Feed-forward neural network based music emotion recognition. In MediaEval Workshop, September 14-15.

[14]. Banerjee, S. A survey of prospects and problems in Hindustani classical raga identification using machine learning techniques. In Proceedings of the First International Conference on Intelligent Computing and Communication, Springer, Singapore.

[15]. Makarand R. Velankar and Hari V. Sahasrabuddhe. A pilot study of Hindustani music sentiments. In Proc. of 2nd Workshop on Sentiment Analysis where AI meets Psychology, pp. 91-98, IIT Bombay, Mumbai, COLING.

[16]. Malheiro, R., Panda, R., Gomes, P., & Paiva, R. P. Emotionally-relevant features for classification and regression of music lyrics. IEEE Transactions on Affective Computing.

[17]. Yang, D., & Lee, W. S. Music emotion identification from lyrics. In 11th IEEE International Symposium on Multimedia (ISM), IEEE, December 2009.

[18]. James A. Russell. A circumplex model of affect. Journal of Personality and Social Psychology, vol. 39(6).
[19]. McKay, C., Fujinaga, I., & Depalle, P. (2005). jAudio: A feature extraction library. In Proceedings of the International Society for Music Information Retrieval (ISMIR) conference.

[20]. Çano, E. and Morisio, M. MoodyLyrics: A sentiment annotated lyrics dataset. In Proceedings of the 2017 International Conference on Intelligent Systems, Metaheuristics & Swarm Intelligence (ISMSI), pp. 118-124, ACM, Hong Kong, March 2017.

[21]. Wasim, M., Chaudary, M. H., & Iqbal, M. (2018). Towards an Internet of Things (IoT) based big data analytics. Journal of Engineering Technology, 6(2).

[22]. Degaonkar, V. N., & Kulkarni, A. V. (2018). Automatic raga identification in Indian classical music using the convolution neural network. Journal of Engineering Technology, 6(2).

[23]. Kumar, K. R., Santosh, D. T., Vardhan, B. V., & Chiranjeevi, P. Machine learning in the computational treatment of opinions towards better product recommendations - an ontology mining way: a survey. Journal of Engineering Technology, 6(2).

[24]. Al-Barhamtoshy, H. M., & Abdou, S. (2018). Arabic OCR metrics-based evaluation model. Journal of Engineering Technology, 6(1).

[25]. Collection of some Kokborok songs. Available:

[26]. Detail about the Kokborok language. Available:

[27]. Detail about Christianity religion in Tripura state. Available:
More informationImproving Frame Based Automatic Laughter Detection
Improving Frame Based Automatic Laughter Detection Mary Knox EE225D Class Project knoxm@eecs.berkeley.edu December 13, 2007 Abstract Laughter recognition is an underexplored area of research. My goal for
More informationGENDER IDENTIFICATION AND AGE ESTIMATION OF USERS BASED ON MUSIC METADATA
GENDER IDENTIFICATION AND AGE ESTIMATION OF USERS BASED ON MUSIC METADATA Ming-Ju Wu Computer Science Department National Tsing Hua University Hsinchu, Taiwan brian.wu@mirlab.org Jyh-Shing Roger Jang Computer
More informationAutomatic Laughter Detection
Automatic Laughter Detection Mary Knox Final Project (EECS 94) knoxm@eecs.berkeley.edu December 1, 006 1 Introduction Laughter is a powerful cue in communication. It communicates to listeners the emotional
More informationPredicting Time-Varying Musical Emotion Distributions from Multi-Track Audio
Predicting Time-Varying Musical Emotion Distributions from Multi-Track Audio Jeffrey Scott, Erik M. Schmidt, Matthew Prockup, Brandon Morton, and Youngmoo E. Kim Music and Entertainment Technology Laboratory
More informationA Large Scale Experiment for Mood-Based Classification of TV Programmes
2012 IEEE International Conference on Multimedia and Expo A Large Scale Experiment for Mood-Based Classification of TV Programmes Jana Eggink BBC R&D 56 Wood Lane London, W12 7SB, UK jana.eggink@bbc.co.uk
More informationTOWARD AN INTELLIGENT EDITOR FOR JAZZ MUSIC
TOWARD AN INTELLIGENT EDITOR FOR JAZZ MUSIC G.TZANETAKIS, N.HU, AND R.B. DANNENBERG Computer Science Department, Carnegie Mellon University 5000 Forbes Avenue, Pittsburgh, PA 15213, USA E-mail: gtzan@cs.cmu.edu
More informationComposer Identification of Digital Audio Modeling Content Specific Features Through Markov Models
Composer Identification of Digital Audio Modeling Content Specific Features Through Markov Models Aric Bartle (abartle@stanford.edu) December 14, 2012 1 Background The field of composer recognition has
More informationPOLITECNICO DI TORINO Repository ISTITUZIONALE
POLITECNICO DI TORINO Repository ISTITUZIONALE MoodyLyrics: A Sentiment Annotated Lyrics Dataset Original MoodyLyrics: A Sentiment Annotated Lyrics Dataset / Çano, Erion; Morisio, Maurizio. - ELETTRONICO.
More informationA Survey on: Sound Source Separation Methods
Volume 3, Issue 11, November-2016, pp. 580-584 ISSN (O): 2349-7084 International Journal of Computer Engineering In Research Trends Available online at: www.ijcert.org A Survey on: Sound Source Separation
More informationCan Song Lyrics Predict Genre? Danny Diekroeger Stanford University
Can Song Lyrics Predict Genre? Danny Diekroeger Stanford University danny1@stanford.edu 1. Motivation and Goal Music has long been a way for people to express their emotions. And because we all have a
More informationCombination of Audio & Lyrics Features for Genre Classication in Digital Audio Collections
1/23 Combination of Audio & Lyrics Features for Genre Classication in Digital Audio Collections Rudolf Mayer, Andreas Rauber Vienna University of Technology {mayer,rauber}@ifs.tuwien.ac.at Robert Neumayer
More informationAutomatic Extraction of Popular Music Ringtones Based on Music Structure Analysis
Automatic Extraction of Popular Music Ringtones Based on Music Structure Analysis Fengyan Wu fengyanyy@163.com Shutao Sun stsun@cuc.edu.cn Weiyao Xue Wyxue_std@163.com Abstract Automatic extraction of
More informationMusic Genre Classification
Music Genre Classification chunya25 Fall 2017 1 Introduction A genre is defined as a category of artistic composition, characterized by similarities in form, style, or subject matter. [1] Some researchers
More informationCoimbra, Coimbra, Portugal Published online: 18 Apr To link to this article:
This article was downloaded by: [Professor Rui Pedro Paiva] On: 14 May 2015, At: 03:23 Publisher: Taylor & Francis Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered office:
More informationVECTOR REPRESENTATION OF EMOTION FLOW FOR POPULAR MUSIC. Chia-Hao Chung and Homer Chen
VECTOR REPRESENTATION OF EMOTION FLOW FOR POPULAR MUSIC Chia-Hao Chung and Homer Chen National Taiwan University Emails: {b99505003, homer}@ntu.edu.tw ABSTRACT The flow of emotion expressed by music through
More informationChord Classification of an Audio Signal using Artificial Neural Network
Chord Classification of an Audio Signal using Artificial Neural Network Ronesh Shrestha Student, Department of Electrical and Electronic Engineering, Kathmandu University, Dhulikhel, Nepal ---------------------------------------------------------------------***---------------------------------------------------------------------
More informationEVALUATING THE GENRE CLASSIFICATION PERFORMANCE OF LYRICAL FEATURES RELATIVE TO AUDIO, SYMBOLIC AND CULTURAL FEATURES
EVALUATING THE GENRE CLASSIFICATION PERFORMANCE OF LYRICAL FEATURES RELATIVE TO AUDIO, SYMBOLIC AND CULTURAL FEATURES Cory McKay, John Ashley Burgoyne, Jason Hockman, Jordan B. L. Smith, Gabriel Vigliensoni
More informationClassification of Musical Instruments sounds by Using MFCC and Timbral Audio Descriptors
Classification of Musical Instruments sounds by Using MFCC and Timbral Audio Descriptors Priyanka S. Jadhav M.E. (Computer Engineering) G. H. Raisoni College of Engg. & Mgmt. Wagholi, Pune, India E-mail:
More informationAnalytic Comparison of Audio Feature Sets using Self-Organising Maps
Analytic Comparison of Audio Feature Sets using Self-Organising Maps Rudolf Mayer, Jakob Frank, Andreas Rauber Institute of Software Technology and Interactive Systems Vienna University of Technology,
More informationCategorization of ICMR Using Feature Extraction Strategy And MIR With Ensemble Learning
Available online at www.sciencedirect.com ScienceDirect Procedia Computer Science 57 (2015 ) 686 694 3rd International Conference on Recent Trends in Computing 2015 (ICRTC-2015) Categorization of ICMR
More informationThe Role of Time in Music Emotion Recognition
The Role of Time in Music Emotion Recognition Marcelo Caetano 1 and Frans Wiering 2 1 Institute of Computer Science, Foundation for Research and Technology - Hellas FORTH-ICS, Heraklion, Crete, Greece
More informationCS229 Project Report Polyphonic Piano Transcription
CS229 Project Report Polyphonic Piano Transcription Mohammad Sadegh Ebrahimi Stanford University Jean-Baptiste Boin Stanford University sadegh@stanford.edu jbboin@stanford.edu 1. Introduction In this project
More informationToward Multi-Modal Music Emotion Classification
Toward Multi-Modal Music Emotion Classification Yi-Hsuan Yang 1, Yu-Ching Lin 1, Heng-Tze Cheng 1, I-Bin Liao 2, Yeh-Chin Ho 2, and Homer H. Chen 1 1 National Taiwan University 2 Telecommunication Laboratories,
More informationA Survey Of Mood-Based Music Classification
A Survey Of Mood-Based Music Classification Sachin Dhande 1, Bhavana Tiple 2 1 Department of Computer Engineering, MIT PUNE, Pune, India, 2 Department of Computer Engineering, MIT PUNE, Pune, India, Abstract
More informationDetecting Musical Key with Supervised Learning
Detecting Musical Key with Supervised Learning Robert Mahieu Department of Electrical Engineering Stanford University rmahieu@stanford.edu Abstract This paper proposes and tests performance of two different
More informationHeadings: Machine Learning. Text Mining. Music Emotion Recognition
Yunhui Fan. Music Mood Classification Based on Lyrics and Audio Tracks. A Master s Paper for the M.S. in I.S degree. April, 2017. 36 pages. Advisor: Jaime Arguello Music mood classification has always
More informationAutomatic Laughter Detection
Automatic Laughter Detection Mary Knox 1803707 knoxm@eecs.berkeley.edu December 1, 006 Abstract We built a system to automatically detect laughter from acoustic features of audio. To implement the system,
More informationIndexing Music by Mood: Design and Integration of an Automatic Content-based Annotator
Indexing Music by Mood: Design and Integration of an Automatic Content-based Annotator Cyril Laurier, Owen Meyers, Joan Serrà, Martin Blech, Perfecto Herrera and Xavier Serra Music Technology Group, Universitat
More informationSpeech Recognition Combining MFCCs and Image Features
Speech Recognition Combining MFCCs and Image Featres S. Karlos from Department of Mathematics N. Fazakis from Department of Electrical and Compter Engineering K. Karanikola from Department of Mathematics
More informationAutomatic Detection of Emotion in Music: Interaction with Emotionally Sensitive Machines
Automatic Detection of Emotion in Music: Interaction with Emotionally Sensitive Machines Cyril Laurier, Perfecto Herrera Music Technology Group Universitat Pompeu Fabra Barcelona, Spain {cyril.laurier,perfecto.herrera}@upf.edu
More informationPerceptual dimensions of short audio clips and corresponding timbre features
Perceptual dimensions of short audio clips and corresponding timbre features Jason Musil, Budr El-Nusairi, Daniel Müllensiefen Department of Psychology, Goldsmiths, University of London Question How do
More informationRelease Year Prediction for Songs
Release Year Prediction for Songs [CSE 258 Assignment 2] Ruyu Tan University of California San Diego PID: A53099216 rut003@ucsd.edu Jiaying Liu University of California San Diego PID: A53107720 jil672@ucsd.edu
More informationThe Million Song Dataset
The Million Song Dataset AUDIO FEATURES The Million Song Dataset There is no data like more data Bob Mercer of IBM (1985). T. Bertin-Mahieux, D.P.W. Ellis, B. Whitman, P. Lamere, The Million Song Dataset,
More informationMusic Emotion Classification based on Lyrics-Audio using Corpus based Emotion
International Journal of Electrical and Computer Engineering (IJECE) Vol. 8, No. 3, June 2018, pp. 1720~1730 ISSN: 2088-8708, DOI: 10.11591/ijece.v8i3.pp1720-1730 1720 Music Emotion Classification based
More informationAutomatic Commercial Monitoring for TV Broadcasting Using Audio Fingerprinting
Automatic Commercial Monitoring for TV Broadcasting Using Audio Fingerprinting Dalwon Jang 1, Seungjae Lee 2, Jun Seok Lee 2, Minho Jin 1, Jin S. Seo 2, Sunil Lee 1 and Chang D. Yoo 1 1 Korea Advanced
More informationMelody classification using patterns
Melody classification using patterns Darrell Conklin Department of Computing City University London United Kingdom conklin@city.ac.uk Abstract. A new method for symbolic music classification is proposed,
More informationA Study on Cross-cultural and Cross-dataset Generalizability of Music Mood Regression Models
A Study on Cross-cultural and Cross-dataset Generalizability of Music Mood Regression Models Xiao Hu University of Hong Kong xiaoxhu@hku.hk Yi-Hsuan Yang Academia Sinica yang@citi.sinica.edu.tw ABSTRACT
More informationA QUERY BY EXAMPLE MUSIC RETRIEVAL ALGORITHM
A QUER B EAMPLE MUSIC RETRIEVAL ALGORITHM H. HARB AND L. CHEN Maths-Info department, Ecole Centrale de Lyon. 36, av. Guy de Collongue, 69134, Ecully, France, EUROPE E-mail: {hadi.harb, liming.chen}@ec-lyon.fr
More informationMODELING MUSICAL MOOD FROM AUDIO FEATURES AND LISTENING CONTEXT ON AN IN-SITU DATA SET
MODELING MUSICAL MOOD FROM AUDIO FEATURES AND LISTENING CONTEXT ON AN IN-SITU DATA SET Diane Watson University of Saskatchewan diane.watson@usask.ca Regan L. Mandryk University of Saskatchewan regan.mandryk@usask.ca
More informationMusic Information Retrieval with Temporal Features and Timbre
Music Information Retrieval with Temporal Features and Timbre Angelina A. Tzacheva and Keith J. Bell University of South Carolina Upstate, Department of Informatics 800 University Way, Spartanburg, SC
More informationIMPROVED MELODIC SEQUENCE MATCHING FOR QUERY BASED SEARCHING IN INDIAN CLASSICAL MUSIC
IMPROVED MELODIC SEQUENCE MATCHING FOR QUERY BASED SEARCHING IN INDIAN CLASSICAL MUSIC Ashwin Lele #, Saurabh Pinjani #, Kaustuv Kanti Ganguli, and Preeti Rao Department of Electrical Engineering, Indian
More informationPattern Based Melody Matching Approach to Music Information Retrieval
Pattern Based Melody Matching Approach to Music Information Retrieval 1 D.Vikram and 2 M.Shashi 1,2 Department of CSSE, College of Engineering, Andhra University, India 1 daravikram@yahoo.co.in, 2 smogalla2000@yahoo.com
More informationA PERPLEXITY BASED COVER SONG MATCHING SYSTEM FOR SHORT LENGTH QUERIES
12th International Society for Music Information Retrieval Conference (ISMIR 2011) A PERPLEXITY BASED COVER SONG MATCHING SYSTEM FOR SHORT LENGTH QUERIES Erdem Unal 1 Elaine Chew 2 Panayiotis Georgiou
More informationNeural Network for Music Instrument Identi cation
Neural Network for Music Instrument Identi cation Zhiwen Zhang(MSE), Hanze Tu(CCRMA), Yuan Li(CCRMA) SUN ID: zhiwen, hanze, yuanli92 Abstract - In the context of music, instrument identi cation would contribute
More informationMulti-modal Analysis of Music: A large-scale Evaluation
Multi-modal Analysis of Music: A large-scale Evaluation Rudolf Mayer Institute of Software Technology and Interactive Systems Vienna University of Technology Vienna, Austria mayer@ifs.tuwien.ac.at Robert
More informationOutline. Why do we classify? Audio Classification
Outline Introduction Music Information Retrieval Classification Process Steps Pitch Histograms Multiple Pitch Detection Algorithm Musical Genre Classification Implementation Future Work Why do we classify
More informationA Music Retrieval System Using Melody and Lyric
202 IEEE International Conference on Multimedia and Expo Workshops A Music Retrieval System Using Melody and Lyric Zhiyuan Guo, Qiang Wang, Gang Liu, Jun Guo, Yueming Lu 2 Pattern Recognition and Intelligent
More informationReducing False Positives in Video Shot Detection
Reducing False Positives in Video Shot Detection Nithya Manickam Computer Science & Engineering Department Indian Institute of Technology, Bombay Powai, India - 400076 mnitya@cse.iitb.ac.in Sharat Chandran
More informationMusical Hit Detection
Musical Hit Detection CS 229 Project Milestone Report Eleanor Crane Sarah Houts Kiran Murthy December 12, 2008 1 Problem Statement Musical visualizers are programs that process audio input in order to
More informationSTRING QUARTET CLASSIFICATION WITH MONOPHONIC MODELS
STRING QUARTET CLASSIFICATION WITH MONOPHONIC Ruben Hillewaere and Bernard Manderick Computational Modeling Lab Department of Computing Vrije Universiteit Brussel Brussels, Belgium {rhillewa,bmanderi}@vub.ac.be
More informationBilbo-Val: Automatic Identification of Bibliographical Zone in Papers
Bilbo-Val: Automatic Identification of Bibliographical Zone in Papers Amal Htait, Sebastien Fournier and Patrice Bellot Aix Marseille University, CNRS, ENSAM, University of Toulon, LSIS UMR 7296,13397,
More informationDrum Stroke Computing: Multimodal Signal Processing for Drum Stroke Identification and Performance Metrics
Drum Stroke Computing: Multimodal Signal Processing for Drum Stroke Identification and Performance Metrics Jordan Hochenbaum 1, 2 New Zealand School of Music 1 PO Box 2332 Wellington 6140, New Zealand
More informationMUSICAL MOODS: A MASS PARTICIPATION EXPERIMENT FOR AFFECTIVE CLASSIFICATION OF MUSIC
12th International Society for Music Information Retrieval Conference (ISMIR 2011) MUSICAL MOODS: A MASS PARTICIPATION EXPERIMENT FOR AFFECTIVE CLASSIFICATION OF MUSIC Sam Davies, Penelope Allen, Mark
More informationWorld Journal of Engineering Research and Technology WJERT
wjert, 2018, Vol. 4, Issue 4, 218-224. Review Article ISSN 2454-695X Maheswari et al. WJERT www.wjert.org SJIF Impact Factor: 5.218 SARCASM DETECTION AND SURVEYING USER AFFECTATION S. Maheswari* 1 and
More informationAPPLICATIONS OF A SEMI-AUTOMATIC MELODY EXTRACTION INTERFACE FOR INDIAN MUSIC
APPLICATIONS OF A SEMI-AUTOMATIC MELODY EXTRACTION INTERFACE FOR INDIAN MUSIC Vishweshwara Rao, Sachin Pant, Madhumita Bhaskar and Preeti Rao Department of Electrical Engineering, IIT Bombay {vishu, sachinp,
More informationComputational Modelling of Harmony
Computational Modelling of Harmony Simon Dixon Centre for Digital Music, Queen Mary University of London, Mile End Rd, London E1 4NS, UK simon.dixon@elec.qmul.ac.uk http://www.elec.qmul.ac.uk/people/simond
More informationSinger Recognition and Modeling Singer Error
Singer Recognition and Modeling Singer Error Johan Ismael Stanford University jismael@stanford.edu Nicholas McGee Stanford University ndmcgee@stanford.edu 1. Abstract We propose a system for recognizing
More informationA Survey of Audio-Based Music Classification and Annotation
A Survey of Audio-Based Music Classification and Annotation Zhouyu Fu, Guojun Lu, Kai Ming Ting, and Dengsheng Zhang IEEE Trans. on Multimedia, vol. 13, no. 2, April 2011 presenter: Yin-Tzu Lin ( 阿孜孜 ^.^)
More informationMUSICAL INSTRUMENTCLASSIFICATION USING MIRTOOLBOX
MUSICAL INSTRUMENTCLASSIFICATION USING MIRTOOLBOX MS. ASHWINI. R. PATIL M.E. (Digital System),JSPM s JSCOE Pune, India, ashu.rpatil3690@gmail.com PROF.V.M. SARDAR Assistant professor, JSPM s, JSCOE, Pune,
More informationMODELS of music begin with a representation of the
602 IEEE TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, VOL. 18, NO. 3, MARCH 2010 Modeling Music as a Dynamic Texture Luke Barrington, Student Member, IEEE, Antoni B. Chan, Member, IEEE, and
More information