Lyrical Features of Popular Music of the 20th and 21st Centuries: Distinguishing by Decade


Cody Stocker, Charlotte Munger, Ben Hannel
December 16

1 Introduction

Music has been called the voice of a generation. Here, we try to use that voice to predict which generation, or decade, a song came from. It seems intuitive that a song containing the word "fuckin'" is recent, say from the 2000s or 2010s, while a song mentioning the name Ethel is likely much older (1940s). The impact of other features is less clear: is rhyming more common in recent songs or older ones, and do rhyme schemes vary by decade? Relying on lyric-based features, we attempt to classify songs according to the era they come from.

2 Literature Review

Various experiments have examined humans' ability to recognize the release decade of music and the ability of lyrical features to predict song traits. Carol Krumhansl conducted an experiment in which she played short clips of music to test subjects, who could recall the decade and other information fairly readily after only a few seconds. Participants in that study identified the decade of popular songs about 80% of the time, a highly significant result [Kum10]. Xiao Hu conducted a study comparing the performance of lyric-based mood classification to audio-based classification. Of the 18 mood categories in the experiment, lyric-based classification significantly outperformed audio-based classification on 7, while audio features significantly outperformed lyric features on only one. While this could suggest that lyric-based classification is superior, lyric features significantly underperformed audio features on every negative-valence, negative-arousal state (such as calmness) [DH10]. Mayer, Neumayer, and Rauber used rhyme and style features to classify song genres.
They achieved 28.55% accuracy with K-nearest neighbors on a test set of 397 songs spread across the 10 genres they were classifying, and 27.58% with Naive Bayes [MNR08].

3 Task Definition

3.1 Input

The input to our classifier is the full text lyrics of a song. The artist and title are not provided. Example inputs:

  "I'm dreaming of a white Christmas / Just like the ones I used to know..." <full song lyrics>
  "Tried to keep you close to me, / But life got in between..." <full song lyrics>

3.2 Output

The general goal of our algorithm is to predict when the lyrics of a song were written. However, we did not implement a continuous classifier, because the features of songs clearly have not followed linear trends between 1940 and 2010, and a discrete multiclass approach was more tractable.

Originally, we wanted to predict the exact decade in which a song was written, but it proved difficult to train an eight-class classifier to high accuracy. We used three label functions to run different tests on our classifiers:

1. 50-50 Labeler: anything before 1980 is category 0; everything after is category 1.
2. Bi-Decade Labeler: groups decades in pairs, i.e. 1940s/1950s, 1960s/1970s, and so on.
3. Decade Labeler: groups songs by decade (1940s, 1950s, 1960s, 1970s, 1980s, 1990s, 2000s, 2010s).

Naturally, the smaller the number of output labels, the higher our precision, accuracy, and F1 scores. In all cases our classifier performed significantly better than random chance, and, interestingly, it improved relative to random chance as more classes were added.

4 Infrastructure

Our data collection process began with a group of songs that we thought sampled genre and time period fairly evenly. We used two song lists. One is the "Alltime Pop Classics" top-charts-by-year lists from 1940 onward; this list is comparatively small but provides a very even sampling over time. The other was the Million Song Dataset, restricted to songs for which lyrics were available. The Million Song Dataset is larger, but songs from its earlier years are sparse. To partially compensate, the number of songs per year was capped at 1000 so that later years did not dominate. Once we had our list of song titles and artists, we scraped the website Lyrically, which provides song lyrics for free. About 30% of songs were not found, especially among older years, exacerbating the skew in the data set. In the end, the smaller data set contained 5006 song/year pairs, and the larger data set was substantially bigger. Each data set was split into train (90%) and test (10%) subsets.

5 Approach

5.1 Feature Functions

5.1.1 Unigrams, Bigrams, and Trigrams

These are relatively simple linguistic features composed of single words, pairs of words, and triples of words.
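A minimal sketch of this kind of n-gram counting (function and argument names are illustrative, not from our code; `<S>`/`</S>` are the start/end tokens used in our variants):

```python
from collections import Counter

def ngram_features(lyrics, n, boundary_tokens=False):
    """Count n-grams over whitespace tokens, line by line; optionally pad
    each line with <S>/</S> boundary tokens as in the token variants."""
    counts = Counter()
    for line in lyrics.splitlines():
        tokens = line.split()
        if not tokens:
            continue
        if boundary_tokens:
            tokens = ["<S>"] + tokens + ["</S>"]
        counts.update(tuple(tokens[i:i + n])
                      for i in range(len(tokens) - n + 1))
    return counts
```

For example, `ngram_features(lyrics, 2, boundary_tokens=True)` produces the "bigrams with tokens" variant.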
We also tried variants of these features: bigrams and trigrams both with and without start and end tokens. In general, we did not expand beyond these three sizes because of data sparsity. Quad-grams would have been possible, but they were unlikely to be good features: they would be very sparse and would tend to let the model overfit.

5.1.2 Stemming Functions

We also used a Porter Stemmer to stem all words in our dataset, and then ran unigrams, bigrams, trigrams, and our start- and end-token functions on the stemmed text.

5.1.3 Stop Word Removal

To speed up the classifiers, we also tried stop word removal. We took a list of the most common English stop words from ranks.nl and used two versions: the complete list, and a partial list with pronouns removed.

5.1.4 Length

Multiple features involved song length: number of words per song, average number of words per line, average number of syllables per line*, and average number of sounds per line*.

*As determined by the Carnegie Mellon University (CMU) pronouncing dictionary, which provides word pronunciations based on sound and syllable stress.
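The word-count length features can be sketched as follows (a standard-library illustration with names of our choosing; the syllable- and sound-per-line averages would additionally need the CMU dictionary, so they are omitted here):

```python
def length_features(lyrics):
    """Word-count length features: total words in the song and
    average words per line (blank lines are ignored)."""
    lines = [ln.split() for ln in lyrics.splitlines() if ln.strip()]
    total_words = sum(len(tokens) for tokens in lines)
    avg_words_per_line = total_words / len(lines) if lines else 0.0
    return {"total_words": total_words,
            "avg_words_per_line": avg_words_per_line}
```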

5.1.5 Rhyme Schemes

Rhyme scheme is a high-level musical feature that specifies the pattern of rhyming lines in a song. A song can be constructed so that adjacent lines rhyme, alternating lines rhyme, or some other pattern holds. To capture this as a feature, we used the CMU dictionary to determine whether the final words of any two lines rhymed, based on the syllables in each word and their stresses. We then counted the number of rhyming pairs where two consecutive lines rhymed (AA), where two lines separated by one line rhymed (ABA), and where any of the next three lines rhymed (e.g. ABCBA contains two such rhyming pairs). All of these schemes relied on the CMU dictionary, which is not entirely complete, and counted two words as a rhyme if any recognized pronunciation rhymed (we have no guarantee that this was the intended pronunciation in the song, but it seems likely).

5.2 Classifiers

We tried multiple types of linear classifiers provided by SKLearn:

1. Multinomial Naive Bayes
2. Logistic Regression
3. Logistic Regression with L1 Regularization
4. Logistic Regression with L2 Regularization
5. Support Vector Machine (SVM)

5.3 Oracle

Our Oracle was one of our group members, who hand-classified fifteen songs per decade based solely on their lyrics. [Table of the Oracle's per-label precision, recall, and F1 scores lost in extraction.] Our Oracle did better than random chance on every task, and best on the decades closest to the present. The Oracle did not have access to the titles and artists of the songs (neither does the classifier), and it is likely that both would perform better given this data.

5.4 Baseline

Our baseline was a Multinomial Naive Bayes classifier using a unigram feature function. [Table of baseline per-label precision, recall, and F1 scores lost in extraction.]

6 Results and Analysis

6.1 Classifiers

Overall, we used five classifiers, four of which we were able to run on our subset of the Million Song Dataset.
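The rhyme test behind the counts in Section 5.1.5 can be sketched as follows. The tiny pronunciation table here is a stand-in for the full CMU dictionary (an assumption for illustration), and two words count as rhyming when their ARPABET phones match from the last stressed vowel onward:

```python
# Toy stand-in for the CMU pronouncing dictionary (ARPABET phones,
# stress digits on vowels); the real feature used the full dictionary.
PHONES = {
    "know":  ["N", "OW1"],
    "snow":  ["S", "N", "OW1"],
    "night": ["N", "AY1", "T"],
    "light": ["L", "AY1", "T"],
    "jazz":  ["JH", "AE1", "Z"],
}

def rhyme_part(phones):
    """Phones from the last stressed vowel onward."""
    for i in range(len(phones) - 1, -1, -1):
        if phones[i][-1] in "12":  # primary/secondary stress marker
            return tuple(phones[i:])
    return tuple(phones)

def rhymes(w1, w2):
    p1, p2 = PHONES.get(w1.lower()), PHONES.get(w2.lower())
    return bool(p1 and p2 and rhyme_part(p1) == rhyme_part(p2))

def count_adjacent_rhymes(lyrics):
    """Count AA pairs: consecutive lines whose final words rhyme."""
    finals = [ln.split()[-1] for ln in lyrics.splitlines() if ln.split()]
    return sum(rhymes(a, b) for a, b in zip(finals, finals[1:]))
```

The ABA and three-line-window counts follow the same pattern with a wider `zip` offset.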
We were not able to run the SVM on the MSD sample because it took over a week to run our feature functions even with a two-class classifier. [Tables of per-classifier scores for the 50-50 and Bi-Decade Labelers (Multinomial Naive Bayes, Logistic, Logistic L1, Logistic L2; average and maximum) lost in extraction.]

[Table of per-classifier scores for the Decade Labeler lost in extraction.] The SVM marginally outperformed the other classifiers on the small dataset with the two-class labeler but did substantially worse on the multiclass classifications. Combined with the increased runtimes, it was not worth running the SVM on the larger dataset. Overall, the Multinomial Naive Bayes classifier and the Logistic Regression with L2 regularization did the best, achieving exactly equal maximum scores over all three labeler functions, which is fairly impressive considering the basic probabilistic approach of Multinomial Naive Bayes compared to the logistic regressions. Additionally, the Multinomial Naive Bayes ran far faster than the logistic regressions, so overall it was the best classifier.

6.2 Feature Functions

We had several types of feature functions, each with different focuses, benefits, and drawbacks.

6.2.1 Unigrams, Bigrams, and Trigrams

Unigrams, bigrams, and trigrams were the first features we tried and ultimately the most successful: over all classifiers, they achieved the best results. For each classifier, the best feature functions and F1 scores were:

1. Multinomial Naive Bayes
   (a) 50-50: Unigrams, 0.73
   (b) Bi-decade: Unigrams, 0.58
   (c) Decade: Unigrams
2. Logistic Regression
   (a) 50-50: Combined Function, 0.73
   (b) Bi-decade: Combined Function, 0.58
   (c) Decade: Combined Function
3. Logistic L1
   (a) 50-50: Combined Function, 0.71
   (b) Bi-decade: Unigrams and Combined Function, 0.55
   (c) Decade: Unigrams
4. Logistic L2
   (a) 50-50: Combined Function, 0.73
   (b) Bi-decade: Combined Function, 0.58
   (c) Decade: Combined Function
5. SVM (small dataset only)
   (a) 50-50: Combined Function, 0.80
   (b) Bi-decade: Combined Function, 0.53
   (c) Decade: Unigrams and Combined Function, 0.39

These functions did at least as well as, and often better than, our Oracle, and unigrams provided a solid baseline that was seldom outperformed with the data we had.
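The unigram Naive Bayes setup discussed above can be sketched minimally as follows (a standard-library illustration with Laplace smoothing; in the project itself we used SKLearn's MultinomialNB):

```python
import math
from collections import Counter

def train_mnb(docs, labels, alpha=1.0):
    """Multinomial Naive Bayes over unigram counts with add-alpha smoothing."""
    classes = sorted(set(labels))
    log_prior = {c: math.log(labels.count(c) / len(labels)) for c in classes}
    counts = {c: Counter() for c in classes}
    vocab = set()
    for doc, y in zip(docs, labels):
        tokens = doc.split()
        counts[y].update(tokens)
        vocab.update(tokens)
    return log_prior, counts, vocab, alpha

def predict_mnb(model, doc):
    """Return the class with the highest posterior log-probability."""
    log_prior, counts, vocab, alpha = model
    v = len(vocab)
    best, best_score = None, float("-inf")
    for c in log_prior:
        total = sum(counts[c].values())
        score = log_prior[c]
        for w in doc.split():
            if w in vocab:  # words never seen in training are skipped
                score += math.log((counts[c][w] + alpha) / (total + alpha * v))
        if score > best_score:
            best, best_score = c, score
    return best
```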

6.2.2 Stemming Functions

Stemming did not significantly alter performance, and when it did, it usually made classifiers perform slightly worse. For example, on the SVM it never yielded more than a 2-percentage-point difference in precision, recall, or F1 score on any feature for any labeler. [Table of stemmed vs. unstemmed scores for the Multinomial Naive Bayes classifier on the 50-50 Labeler (Unigrams, Bigrams, Trigrams, Bigrams with Tokens, Trigrams with Tokens, Combined) lost in extraction.] In the Multinomial Naive Bayes case, the classifier did about the same on average with stemmed and unstemmed functions. Averaged over all classifiers and all labelers, the stemmed feature functions received slightly lower F1 scores than the unstemmed ones. This result may stem from the way the Porter Stemmer works: it is not designed to handle slang like "workin'" or "drivin'", which could explain the decrease in performance. A stemmer that could handle the variety of slang music throws at it would be helpful.

6.2.3 Stop Word Removal

Our two stop lists did not increase performance, despite our hopes, nor did they improve run time significantly, likely because we did not pre-process the files. Perhaps preprocessing the lyrics would improve these feature functions. Both stop lists were derived from the same source; the partial list removed all pronouns from the stop list. [Table of full-text vs. partial-stop vs. full-stop scores for the Multinomial Naive Bayes classifier on the 50-50 Labeler lost in extraction.] As seen in the table, removing more stop words actually decreased performance, even though word counts showed that the removed words were incredibly common. The effect was fairly marginal, so it is unclear whether it was due to our dataset or our choice of stop words.
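The two stop-list variants can be sketched as follows (the tiny stop list here is an assumption for illustration; the project used the full ranks.nl list):

```python
# Assumed small stop list for illustration; the project used the
# ranks.nl list, in full and with pronouns removed.
FULL_STOP = {"the", "a", "and", "to", "of", "i", "you", "me", "my"}
PRONOUNS = {"i", "you", "me", "my"}
PARTIAL_STOP = FULL_STOP - PRONOUNS  # keeps pronouns in the lyrics

def remove_stopwords(lyrics, stoplist):
    """Drop stop-listed words from each line, preserving line structure."""
    return "\n".join(
        " ".join(w for w in line.split() if w.lower() not in stoplist)
        for line in lyrics.splitlines()
    )
```

Keeping pronouns in the partial list matters for lyrics, where words like "I" and "you" carry a lot of the signal.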
Future work could focus on finding a stop word list more representative of music, or on combining stop words with a music-specific stemmer.

6.2.4 Length Features

Length features underperform traditional NLP features, but usually outperform random chance. They performed best on the linear-kernel SVM for all labelers. On 50/50 and bi-decade, overall song length (total word count) significantly outperformed average number of words per line. Average syllables per line and average sounds per line were decidedly bad features on their own, able to beat random chance only on the individual-decade labeler and not by any significant margin. The best performance (best on all labelers but bi-decade, where it was outdone by pure word count) came from a combination of average line length, total lines / total words, and total number of words, presumably because this captured the dependencies between the features (more lines tends to mean lower average line length). With that said, word count and the combination performed quite similarly, indicating that word count was probably a heavily weighted feature within the combination. The table below details results from the small dataset, because running the SVM on the larger one was not feasible. Red indicates that the feature performed worse than random chance.

[Table of length-feature F1 scores (Logistic, Logistic L1, SVM) for Wordcount, Avg Wordcount per line, Avg Sounds per line, Avg Syllables per line, and Combined length across the 50/50, bi-decade, and individual-decade labelers, lost in extraction.]

6.2.5 Rhyme Schemes

Unsurprisingly, combined rhyme, which included rhymes between adjacent and alternating lines, rhymes within the next three lines, average syllables and sounds per line, and total lines / total words, performed the best. As with the length features, the individual rhyme features were not standouts. They tended to do passably on 50/50 (all beat random chance) but not so well on the finer labelers: on bi-decade, for example, only one individual feature and the combined feature beat random chance.

6.3 Feature Analysis

[Table of combined-rhyming F1 scores (Logistic, Logistic L1, SVM) across the three labelers lost in extraction.] The most predictive features for each decade reveal interesting trends in the popular music of the times. In many ways they are predictable: "Tutti Frutti" was more common in the 1940s-70s, while "hoes" and "turn it up" are more common from the 1980s onward. However, the lack of examples from early decades becomes readily apparent in the features of the bi-decade and individual-decade labelers. For instance, "Deacon Jones", while likely unique to the 1940s, is probably not emblematic of the era. The lack of examples stems from the website we scraped and from the Million Song Dataset: the MSD is skewed towards modern songs, and the songs on our scraping website are added by the community. Here we can see songs brought back by nostalgia or video game references: "Santa Claus" is a strong 1940s feature, which should not be surprising, as Christmas music tends to be timeless.
However, "that pistol down", "that pistol", "pistol down", and "Lay that", the other four top 1940s features, are all from a Bing Crosby song called "Pistol Packin' Mama", which was recently featured in Bethesda's video game Fallout 4. We do see a shift in the general lyrics over time, and as people who grew up in the 2000s and 2010s, the top lyrics "Party like a" and "We're up all" for the 2000s and 2010s respectively seem to make sense, as do "stanky" and "legg". Some of the 1980s features seem almost stereotypical, like "Funkadela", while the 1960s had "boogety" and "hitch hike". Further feature analysis with a larger dataset would probably do better at capturing the zeitgeist of each decade, and with further data cleaning we could probably produce more standardized features. We also observe that in eras after the '40s and '50s it is more common for a song to have a stricter, more frequent chorus line. The prediction values of the top-5 most predictive unigrams and bigrams for the 1940/1950 pairing (the best bigram, for example, has a .4255 prediction value according to Bayes' rule) are significantly lower than the top-5 prediction values for the other decade pairings (all of which are above .9 for bigrams; see the tables in the appendix). While this could indicate that '40s and '50s choruses use the same words other eras do (thus making them less predictive under Bayes), that likely isn't the case here: intuitively, "Rootie" and "Tootie" do not seem like words that would be frequent in more recent songs.

6.4 Error Analysis

Improper data cleaning was the root cause of many of our issues. For instance, some of the most predictive features for the classifier were "Verse" and "<S> Verse", which are clearly human-entered labels and not actually elements of the lyrics. It would also help to lowercase everything and remove punctuation characters like apostrophes and quotation marks. And, as in most cases, more data would be ideal.
Because lyrics are copyrighted, it is difficult to get large quantities of verified data, and many websites are difficult to scrape.
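A cleaning pass of the kind described above can be sketched as follows (the set of annotator labels to strip is an assumption; "Verse" is the one we actually observed):

```python
import re

def clean_lyrics(text):
    """Lowercase, strip punctuation, and drop annotator labels such as
    'Verse'/'Chorus' that leak into scraped lyrics (assumed label set)."""
    label_line = re.compile(r"^\s*(verse|chorus|bridge|intro|outro)\s*\d*\s*$",
                            re.IGNORECASE)
    lines = []
    for line in text.splitlines():
        if label_line.match(line):
            continue  # human-entered section label, not lyrics
        line = re.sub(r"[^\w\s]", "", line).lower().strip()
        if line:
            lines.append(line)
    return "\n".join(lines)
```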

References

[DH10] J. Stephen Downie and Xiao Hu. When lyrics outperform audio for music mood classification: A feature analysis. In Proceedings of the 11th International Society for Music Information Retrieval Conference (ISMIR), 2010.

[Kum10] Carol L. Krumhansl. Plink: "thin slices" of music. Music Perception: An Interdisciplinary Journal, 27(3), 2010.

[MNR08] Rudolf Mayer, Robert Neumayer, and Andreas Rauber. Rhyme and style features for musical genre classification by song lyrics. In ISMIR 2008: Proceedings of the 9th International Conference on Music Information Retrieval, 2008.

7 Appendix

7.1 Most predictive words in the Naive Bayes classifier

The following tables are arranged so that "unigrams 1" is the most predictive unigram (by direct Bayesian probability) for the class at the top of the column, and "unigrams 5" the 5th most predictive. Prediction values are rounded, and ties were broken alphabetically. "Combined" is a combination of unigrams, bigrams with tokens, and trigrams with tokens. A few prediction values were lost in extraction.

For the 50/50 labeler (pre-1980 | post-1980):

UNIGRAMS
unigrams 1: Tutti, .9864 | niggaz, .9978
unigrams 2: boogety, .9862 | tha, .9964
unigrams 3: Wages, .9860 | hoes, .9963
unigrams 4: Sloopy, .9853 | niggas, .9939
unigrams 5: Elis, .9838 | nigga, .9934

BIGRAMS
bigrams 1: Mr Lee, .9834 | a nigga, .9965
bigrams 2: hitch hike, .9829 | this shit, .9942
bigrams 3: the hump, .9826 | yo yo, .9923
bigrams 4: Tutti Frutti, .9826 | the fuck, .9913
bigrams 5: Los Wages, .9826 | fuck with, .9912

TRIGRAMS
trigrams 1: I want Thats, .9779 | it Want it
trigrams 2: want Thats what, .9775 | Want it Want, .9926
trigrams 3: over the hump | Jah la man, .9926
trigrams 4: star fucker star | turn it up, .9918
trigrams 5: Too much pressure, .9756 | man Jah la, .9913

COMBINED
combined 1: um um, .9926 | nigga, .9926
combined 2: um um um, .9903 | u, .9917
combined 3: Mr Lee, .9897 | Imma, .9896
combined 4: boogety, .9894 | Verse, .9855
combined 5: <S> night, .9889 | <S> Verse

For the paired-decade labeler (1940/1950 | 1960/1970 | 1980/1990 | 2000/2010):

UNIGRAMS
unigrams 1: Rootie, .6030 | boogety, .9698 | Babba, .9725 | niggaz, .9868
unigrams 2: Tootie, .5350 | Elis, .9647 | Wages, .9673 | nigga
unigrams 3: Attorney, .4647 | rutti, .9641 | pegs | lai, .9740
unigrams 4: hobble, .4510 | awimoweh, .9592 | IceT, .9652 | niggas, .9732
unigrams 5: District, .3946 | ShooBop, .9538 | Funkadelala, .9564 | yuh

BIGRAMS
bigrams 1: Deacon Jones, .2254 | Mr Lee, .9550 | Harlem Harlem, .9863 | la man, .9826
bigrams 2: Rootie Tootie, .1704 | hitch hike, .9537 | Babba Do, .9798 | Jah la
bigrams 3: happening everyday, .1128 | oh rutti, .9451 | ghetto The, .9790 | a nigga
bigrams 4: Jones Deacon, .1128 | frutti oh, .9451 | good ooh, .9773 | Lies Lies, .9810
bigrams 5: District Attorney, .1050 | Simple Simon, .9366 | mellow when, .9763 | the ounce, .9796

TRIGRAMS
trigrams 1: sho is hard, .0747 | frutti oh rutti, .9261 | ghetto The ghetto, .9812 | Want it Want, .9847
trigrams 2: things happening everyday, .0698 | ahh ahh ahh, .9222 | The ghetto The, .9798 | Jah la man, .9847
trigrams 3: strange things happening | Tutti frutti oh, .9148 | mellow when Im, .9786 | man Jah la, .9820
trigrams 4: find me cryin, .0698 | Simple Simon says, .9114 | be mellow when, .9876 | la man Jah, .9820
trigrams 5: are strange things, .0698 | Mr Lee Mr, .9038 | Ill be mellow, .9875 | To the ounce, .9806

For the individual-decade labeler (split into two tables). First table (1940s | 1950s | 1960s | 1970s):

UNIGRAMS
unigrams 1: Rootie, .3938 | Banua, .8100 | boogety, .9254 | Wages, .9249
unigrams 2: Tootie, .3610 | Diddy, .7076 | Elis, .9134 | steward, .8935
unigrams 3: Deacon, .2944 | biga, .7031 | awimoweh, .9007 | HeyO, .8844
unigrams 4: hobble, .2875 | Matelot, .6681 | ShooBop, .8884 | CM, .8792
unigrams 5: Attorney, .2707 | Atell, .6030 | hike, .8756 | Neat, .8791

COMBINED
combined 1: that pistol down, .4255 | Mr Lee, .9259 | um um, .9561 | on up </S>, .9064
combined 2: that pistol, .4355 | <S> night and, .9171 | um um um, .9426 | beat goes on, .9043
combined 3: pistol down, .4255 | lama, .9118 | boogety, .9378 | Bennie, .9024
combined 4: Santa Claus, .3941 | jungle jungle, .9041 | hike, .9302 | Get on up, .8992
combined 5: Lay that, .3901 | an around, .9038 | Hitch hike, .9288 | who who, .8955
Second table (1980s | 1990s | 2000s | 2010s):

UNIGRAMS
unigrams 1: IceT, .9172 | lai, .9402 | BANG, .9193 | Amelle, .8038
unigrams 2: Funkadela, .8975 | Babba, .9338 | stanky, .9020 | BOB, .7674
unigrams 3: Ludd, .8775 | Mistah, .9196 | legg, .9003 | Vanderpool, .7393
unigrams 4: Undercover, .8696 | Bart, .9078 | shik, .8890 | TUNING, .6830
unigrams 5: Antmusic, .8642 | oie, .9012 | Ziggy, .8807 | seo, .6447

COMBINED
combined 1: down on it, .9356 | La La, .9187 | da na, .8981 | whooooo, .7814
combined 2: Get down on, .9339 | The promise, .8949 | Party like a, .8893 | Were up all, .7696
combined 3: em when, .9116 | <S> The promise, .8949 | Party like, .8893 | Were up, .7696
combined 4: em when theyre, .9089 | wants to give, .8836 | <S> Party like, .8893 | <S> Were up, .7697
combined 5: you okay, .8868 | La La La, .8836 | This is why, .8756 | imma be


More information

Multimodal Music Mood Classification Framework for Christian Kokborok Music

Multimodal Music Mood Classification Framework for Christian Kokborok Music Journal of Engineering Technology (ISSN. 0747-9964) Volume 8, Issue 1, Jan. 2019, PP.506-515 Multimodal Music Mood Classification Framework for Christian Kokborok Music Sanchali Das 1*, Sambit Satpathy

More information

Melody classification using patterns

Melody classification using patterns Melody classification using patterns Darrell Conklin Department of Computing City University London United Kingdom conklin@city.ac.uk Abstract. A new method for symbolic music classification is proposed,

More information

Headings: Machine Learning. Text Mining. Music Emotion Recognition

Headings: Machine Learning. Text Mining. Music Emotion Recognition Yunhui Fan. Music Mood Classification Based on Lyrics and Audio Tracks. A Master s Paper for the M.S. in I.S degree. April, 2017. 36 pages. Advisor: Jaime Arguello Music mood classification has always

More information

Singer Traits Identification using Deep Neural Network

Singer Traits Identification using Deep Neural Network Singer Traits Identification using Deep Neural Network Zhengshan Shi Center for Computer Research in Music and Acoustics Stanford University kittyshi@stanford.edu Abstract The author investigates automatic

More information

Lyric-based Sentiment Polarity Classification of Thai Songs

Lyric-based Sentiment Polarity Classification of Thai Songs Lyric-based Sentiment Polarity Classification of Thai Songs Chutimet Srinilta, Wisuwat Sunhem, Suchat Tungjitnob, Saruta Thasanthiah, and Supawit Vatathanavaro Abstract Song sentiment polarity provides

More information

Basic Natural Language Processing

Basic Natural Language Processing Basic Natural Language Processing Why NLP? Understanding Intent Search Engines Question Answering Azure QnA, Bots, Watson Digital Assistants Cortana, Siri, Alexa Translation Systems Azure Language Translation,

More information

Toward Multi-Modal Music Emotion Classification

Toward Multi-Modal Music Emotion Classification Toward Multi-Modal Music Emotion Classification Yi-Hsuan Yang 1, Yu-Ching Lin 1, Heng-Tze Cheng 1, I-Bin Liao 2, Yeh-Chin Ho 2, and Homer H. Chen 1 1 National Taiwan University 2 Telecommunication Laboratories,

More information

A Dominant Gene Genetic Algorithm for a Substitution Cipher in Cryptography

A Dominant Gene Genetic Algorithm for a Substitution Cipher in Cryptography A Dominant Gene Genetic Algorithm for a Substitution Cipher in Cryptography Derrick Erickson and Michael Hausman University of Colorado at Colorado Springs CS 591 Substitution Cipher 1. Remove all but

More information

Characterizing Literature Using Machine Learning Methods

Characterizing Literature Using Machine Learning Methods Masterarbeit Characterizing Literature Using Machine Learning Methods vorgelegt von Jan Bílek Fakultät für Mathematik, Informatik und Naturwissenschaften Fachbereich Informatik Arbeitsbereich Wissenschaftliches

More information

MUSI-6201 Computational Music Analysis

MUSI-6201 Computational Music Analysis MUSI-6201 Computational Music Analysis Part 9.1: Genre Classification alexander lerch November 4, 2015 temporal analysis overview text book Chapter 8: Musical Genre, Similarity, and Mood (pp. 151 155)

More information

Instrument Recognition in Polyphonic Mixtures Using Spectral Envelopes

Instrument Recognition in Polyphonic Mixtures Using Spectral Envelopes Instrument Recognition in Polyphonic Mixtures Using Spectral Envelopes hello Jay Biernat Third author University of Rochester University of Rochester Affiliation3 words jbiernat@ur.rochester.edu author3@ismir.edu

More information

Noise (Music) Composition Using Classification Algorithms Peter Wang (pwang01) December 15, 2017

Noise (Music) Composition Using Classification Algorithms Peter Wang (pwang01) December 15, 2017 Noise (Music) Composition Using Classification Algorithms Peter Wang (pwang01) December 15, 2017 Background Abstract I attempted a solution at using machine learning to compose music given a large corpus

More information

Combination of Audio and Lyrics Features for Genre Classification in Digital Audio Collections

Combination of Audio and Lyrics Features for Genre Classification in Digital Audio Collections Combination of Audio and Lyrics Features for Genre Classification in Digital Audio Collections Rudolf Mayer 1, Robert Neumayer 1,2, and Andreas Rauber 1 ABSTRACT 1 Department of Software Technology and

More information

Release Year Prediction for Songs

Release Year Prediction for Songs Release Year Prediction for Songs [CSE 258 Assignment 2] Ruyu Tan University of California San Diego PID: A53099216 rut003@ucsd.edu Jiaying Liu University of California San Diego PID: A53107720 jil672@ucsd.edu

More information

The Sixteen Machine: Generating Intelligent Rap Lyrics Aaron Bracket, Antonio Tan-Torres

The Sixteen Machine: Generating Intelligent Rap Lyrics Aaron Bracket, Antonio Tan-Torres CS 221 Final report The Sixteen Machine: Generating Intelligent Rap Lyrics Aaron Bracket, Antonio Tan-Torres Introduction: The use of language is a fascinating aspect of humans. Specifically, how language

More information

STRING QUARTET CLASSIFICATION WITH MONOPHONIC MODELS

STRING QUARTET CLASSIFICATION WITH MONOPHONIC MODELS STRING QUARTET CLASSIFICATION WITH MONOPHONIC Ruben Hillewaere and Bernard Manderick Computational Modeling Lab Department of Computing Vrije Universiteit Brussel Brussels, Belgium {rhillewa,bmanderi}@vub.ac.be

More information

Emotionally-Relevant Features for Classification and Regression of Music Lyrics

Emotionally-Relevant Features for Classification and Regression of Music Lyrics IEEE TRANSACTIONS ON JOURNAL AFFECTIVE COMPUTING, MANUSCRIPT ID 1 Emotionally-Relevant Features for Classification and Regression of Music Lyrics Ricardo Malheiro, Renato Panda, Paulo Gomes and Rui Pedro

More information

6.UAP Project. FunPlayer: A Real-Time Speed-Adjusting Music Accompaniment System. Daryl Neubieser. May 12, 2016

6.UAP Project. FunPlayer: A Real-Time Speed-Adjusting Music Accompaniment System. Daryl Neubieser. May 12, 2016 6.UAP Project FunPlayer: A Real-Time Speed-Adjusting Music Accompaniment System Daryl Neubieser May 12, 2016 Abstract: This paper describes my implementation of a variable-speed accompaniment system that

More information

Jazz Melody Generation and Recognition

Jazz Melody Generation and Recognition Jazz Melody Generation and Recognition Joseph Victor December 14, 2012 Introduction In this project, we attempt to use machine learning methods to study jazz solos. The reason we study jazz in particular

More information

MINING THE CORRELATION BETWEEN LYRICAL AND AUDIO FEATURES AND THE EMERGENCE OF MOOD

MINING THE CORRELATION BETWEEN LYRICAL AND AUDIO FEATURES AND THE EMERGENCE OF MOOD AROUSAL 12th International Society for Music Information Retrieval Conference (ISMIR 2011) MINING THE CORRELATION BETWEEN LYRICAL AND AUDIO FEATURES AND THE EMERGENCE OF MOOD Matt McVicar Intelligent Systems

More information

MUSICAL MOODS: A MASS PARTICIPATION EXPERIMENT FOR AFFECTIVE CLASSIFICATION OF MUSIC

MUSICAL MOODS: A MASS PARTICIPATION EXPERIMENT FOR AFFECTIVE CLASSIFICATION OF MUSIC 12th International Society for Music Information Retrieval Conference (ISMIR 2011) MUSICAL MOODS: A MASS PARTICIPATION EXPERIMENT FOR AFFECTIVE CLASSIFICATION OF MUSIC Sam Davies, Penelope Allen, Mark

More information

Music Genre Classification

Music Genre Classification Music Genre Classification chunya25 Fall 2017 1 Introduction A genre is defined as a category of artistic composition, characterized by similarities in form, style, or subject matter. [1] Some researchers

More information

Multimodal Mood Classification - A Case Study of Differences in Hindi and Western Songs

Multimodal Mood Classification - A Case Study of Differences in Hindi and Western Songs Multimodal Mood Classification - A Case Study of Differences in Hindi and Western Songs Braja Gopal Patra, Dipankar Das, and Sivaji Bandyopadhyay Department of Computer Science and Engineering, Jadavpur

More information

Modeling memory for melodies

Modeling memory for melodies Modeling memory for melodies Daniel Müllensiefen 1 and Christian Hennig 2 1 Musikwissenschaftliches Institut, Universität Hamburg, 20354 Hamburg, Germany 2 Department of Statistical Science, University

More information

Neural Network Predicating Movie Box Office Performance

Neural Network Predicating Movie Box Office Performance Neural Network Predicating Movie Box Office Performance Alex Larson ECE 539 Fall 2013 Abstract The movie industry is a large part of modern day culture. With the rise of websites like Netflix, where people

More information

Automatic Labelling of tabla signals

Automatic Labelling of tabla signals ISMIR 2003 Oct. 27th 30th 2003 Baltimore (USA) Automatic Labelling of tabla signals Olivier K. GILLET, Gaël RICHARD Introduction Exponential growth of available digital information need for Indexing and

More information

Supervised Learning in Genre Classification

Supervised Learning in Genre Classification Supervised Learning in Genre Classification Introduction & Motivation Mohit Rajani and Luke Ekkizogloy {i.mohit,luke.ekkizogloy}@gmail.com Stanford University, CS229: Machine Learning, 2009 Now that music

More information

KLUEnicorn at SemEval-2018 Task 3: A Naïve Approach to Irony Detection

KLUEnicorn at SemEval-2018 Task 3: A Naïve Approach to Irony Detection KLUEnicorn at SemEval-2018 Task 3: A Naïve Approach to Irony Detection Luise Dürlich Friedrich-Alexander Universität Erlangen-Nürnberg / Germany luise.duerlich@fau.de Abstract This paper describes the

More information

Rapping Manual Table of Contents

Rapping Manual Table of Contents Rapping Manual Table of Contents 1. Count Music/Bars 14. Draft vs Freelance 2. Rhyme Schemes 15. Song Structure 3. Sound Schemes 16. Rap Chorus 4. Fast Rapping 17. The Sacred Process 5. Compound Rhymes

More information

Computational Modelling of Harmony

Computational Modelling of Harmony Computational Modelling of Harmony Simon Dixon Centre for Digital Music, Queen Mary University of London, Mile End Rd, London E1 4NS, UK simon.dixon@elec.qmul.ac.uk http://www.elec.qmul.ac.uk/people/simond

More information

Week 14 Music Understanding and Classification

Week 14 Music Understanding and Classification Week 14 Music Understanding and Classification Roger B. Dannenberg Professor of Computer Science, Music & Art Overview n Music Style Classification n What s a classifier? n Naïve Bayesian Classifiers n

More information

A Fast Alignment Scheme for Automatic OCR Evaluation of Books

A Fast Alignment Scheme for Automatic OCR Evaluation of Books A Fast Alignment Scheme for Automatic OCR Evaluation of Books Ismet Zeki Yalniz, R. Manmatha Multimedia Indexing and Retrieval Group Dept. of Computer Science, University of Massachusetts Amherst, MA,

More information

Improvised Duet Interaction: Learning Improvisation Techniques for Automatic Accompaniment

Improvised Duet Interaction: Learning Improvisation Techniques for Automatic Accompaniment Improvised Duet Interaction: Learning Improvisation Techniques for Automatic Accompaniment Gus G. Xia Dartmouth College Neukom Institute Hanover, NH, USA gxia@dartmouth.edu Roger B. Dannenberg Carnegie

More information

A Large Scale Experiment for Mood-Based Classification of TV Programmes

A Large Scale Experiment for Mood-Based Classification of TV Programmes 2012 IEEE International Conference on Multimedia and Expo A Large Scale Experiment for Mood-Based Classification of TV Programmes Jana Eggink BBC R&D 56 Wood Lane London, W12 7SB, UK jana.eggink@bbc.co.uk

More information

What is Statistics? 13.1 What is Statistics? Statistics

What is Statistics? 13.1 What is Statistics? Statistics 13.1 What is Statistics? What is Statistics? The collection of all outcomes, responses, measurements, or counts that are of interest. A portion or subset of the population. Statistics Is the science of

More information

in the Howard County Public School System and Rocketship Education

in the Howard County Public School System and Rocketship Education Technical Appendix May 2016 DREAMBOX LEARNING ACHIEVEMENT GROWTH in the Howard County Public School System and Rocketship Education Abstract In this technical appendix, we present analyses of the relationship

More information

EE373B Project Report Can we predict general public s response by studying published sales data? A Statistical and adaptive approach

EE373B Project Report Can we predict general public s response by studying published sales data? A Statistical and adaptive approach EE373B Project Report Can we predict general public s response by studying published sales data? A Statistical and adaptive approach Song Hui Chon Stanford University Everyone has different musical taste,

More information

MindMouse. This project is written in C++ and uses the following Libraries: LibSvm, kissfft, BOOST File System, and Emotiv Research Edition SDK.

MindMouse. This project is written in C++ and uses the following Libraries: LibSvm, kissfft, BOOST File System, and Emotiv Research Edition SDK. Andrew Robbins MindMouse Project Description: MindMouse is an application that interfaces the user s mind with the computer s mouse functionality. The hardware that is required for MindMouse is the Emotiv

More information

Musical Hit Detection

Musical Hit Detection Musical Hit Detection CS 229 Project Milestone Report Eleanor Crane Sarah Houts Kiran Murthy December 12, 2008 1 Problem Statement Musical visualizers are programs that process audio input in order to

More information

Music Mood. Sheng Xu, Albert Peyton, Ryan Bhular

Music Mood. Sheng Xu, Albert Peyton, Ryan Bhular Music Mood Sheng Xu, Albert Peyton, Ryan Bhular What is Music Mood A psychological & musical topic Human emotions conveyed in music can be comprehended from two aspects: Lyrics Music Factors that affect

More information

Leopold-Franzens-University Innsbruck. Institute of Computer Science Databases and Information Systems. Stefan Wurzinger, BSc

Leopold-Franzens-University Innsbruck. Institute of Computer Science Databases and Information Systems. Stefan Wurzinger, BSc Leopold-Franzens-University Innsbruck Institute of Computer Science Databases and Information Systems Analyzing the Characteristics of Music Playlists using Song Lyrics and Content-based Features Master

More information

Computer Coordination With Popular Music: A New Research Agenda 1

Computer Coordination With Popular Music: A New Research Agenda 1 Computer Coordination With Popular Music: A New Research Agenda 1 Roger B. Dannenberg roger.dannenberg@cs.cmu.edu http://www.cs.cmu.edu/~rbd School of Computer Science Carnegie Mellon University Pittsburgh,

More information

Melody Extraction from Generic Audio Clips Thaminda Edirisooriya, Hansohl Kim, Connie Zeng

Melody Extraction from Generic Audio Clips Thaminda Edirisooriya, Hansohl Kim, Connie Zeng Melody Extraction from Generic Audio Clips Thaminda Edirisooriya, Hansohl Kim, Connie Zeng Introduction In this project we were interested in extracting the melody from generic audio files. Due to the

More information

Automatic Laughter Detection

Automatic Laughter Detection Automatic Laughter Detection Mary Knox 1803707 knoxm@eecs.berkeley.edu December 1, 006 Abstract We built a system to automatically detect laughter from acoustic features of audio. To implement the system,

More information

Automatic Laughter Detection

Automatic Laughter Detection Automatic Laughter Detection Mary Knox Final Project (EECS 94) knoxm@eecs.berkeley.edu December 1, 006 1 Introduction Laughter is a powerful cue in communication. It communicates to listeners the emotional

More information

Enhancing Music Maps

Enhancing Music Maps Enhancing Music Maps Jakob Frank Vienna University of Technology, Vienna, Austria http://www.ifs.tuwien.ac.at/mir frank@ifs.tuwien.ac.at Abstract. Private as well as commercial music collections keep growing

More information

UC San Diego UC San Diego Previously Published Works

UC San Diego UC San Diego Previously Published Works UC San Diego UC San Diego Previously Published Works Title Classification of MPEG-2 Transport Stream Packet Loss Visibility Permalink https://escholarship.org/uc/item/9wk791h Authors Shin, J Cosman, P

More information

AUTOREGRESSIVE MFCC MODELS FOR GENRE CLASSIFICATION IMPROVED BY HARMONIC-PERCUSSION SEPARATION

AUTOREGRESSIVE MFCC MODELS FOR GENRE CLASSIFICATION IMPROVED BY HARMONIC-PERCUSSION SEPARATION AUTOREGRESSIVE MFCC MODELS FOR GENRE CLASSIFICATION IMPROVED BY HARMONIC-PERCUSSION SEPARATION Halfdan Rump, Shigeki Miyabe, Emiru Tsunoo, Nobukata Ono, Shigeki Sagama The University of Tokyo, Graduate

More information

DAY 1. Intelligent Audio Systems: A review of the foundations and applications of semantic audio analysis and music information retrieval

DAY 1. Intelligent Audio Systems: A review of the foundations and applications of semantic audio analysis and music information retrieval DAY 1 Intelligent Audio Systems: A review of the foundations and applications of semantic audio analysis and music information retrieval Jay LeBoeuf Imagine Research jay{at}imagine-research.com Rebecca

More information

MELODY ANALYSIS FOR PREDICTION OF THE EMOTIONS CONVEYED BY SINHALA SONGS

MELODY ANALYSIS FOR PREDICTION OF THE EMOTIONS CONVEYED BY SINHALA SONGS MELODY ANALYSIS FOR PREDICTION OF THE EMOTIONS CONVEYED BY SINHALA SONGS M.G.W. Lakshitha, K.L. Jayaratne University of Colombo School of Computing, Sri Lanka. ABSTRACT: This paper describes our attempt

More information

Mood Classification Using Lyrics and Audio: A Case-Study in Greek Music

Mood Classification Using Lyrics and Audio: A Case-Study in Greek Music Mood Classification Using Lyrics and Audio: A Case-Study in Greek Music Spyros Brilis, Evagelia Gkatzou, Antonis Koursoumis, Karolos Talvis, Katia Kermanidis, Ioannis Karydis To cite this version: Spyros

More information

Multi-modal Analysis of Music: A large-scale Evaluation

Multi-modal Analysis of Music: A large-scale Evaluation Multi-modal Analysis of Music: A large-scale Evaluation Rudolf Mayer Institute of Software Technology and Interactive Systems Vienna University of Technology Vienna, Austria mayer@ifs.tuwien.ac.at Robert

More information

Research & Development. White Paper WHP 232. A Large Scale Experiment for Mood-based Classification of TV Programmes BRITISH BROADCASTING CORPORATION

Research & Development. White Paper WHP 232. A Large Scale Experiment for Mood-based Classification of TV Programmes BRITISH BROADCASTING CORPORATION Research & Development White Paper WHP 232 September 2012 A Large Scale Experiment for Mood-based Classification of TV Programmes Jana Eggink, Denise Bland BRITISH BROADCASTING CORPORATION White Paper

More information

Joint Image and Text Representation for Aesthetics Analysis

Joint Image and Text Representation for Aesthetics Analysis Joint Image and Text Representation for Aesthetics Analysis Ye Zhou 1, Xin Lu 2, Junping Zhang 1, James Z. Wang 3 1 Fudan University, China 2 Adobe Systems Inc., USA 3 The Pennsylvania State University,

More information

Evaluation of Serial Periodic, Multi-Variable Data Visualizations

Evaluation of Serial Periodic, Multi-Variable Data Visualizations Evaluation of Serial Periodic, Multi-Variable Data Visualizations Alexander Mosolov 13705 Valley Oak Circle Rockville, MD 20850 (301) 340-0613 AVMosolov@aol.com Benjamin B. Bederson i Computer Science

More information

Improving Frame Based Automatic Laughter Detection

Improving Frame Based Automatic Laughter Detection Improving Frame Based Automatic Laughter Detection Mary Knox EE225D Class Project knoxm@eecs.berkeley.edu December 13, 2007 Abstract Laughter recognition is an underexplored area of research. My goal for

More information

Chord Classification of an Audio Signal using Artificial Neural Network

Chord Classification of an Audio Signal using Artificial Neural Network Chord Classification of an Audio Signal using Artificial Neural Network Ronesh Shrestha Student, Department of Electrical and Electronic Engineering, Kathmandu University, Dhulikhel, Nepal ---------------------------------------------------------------------***---------------------------------------------------------------------

More information

Skip Length and Inter-Starvation Distance as a Combined Metric to Assess the Quality of Transmitted Video

Skip Length and Inter-Starvation Distance as a Combined Metric to Assess the Quality of Transmitted Video Skip Length and Inter-Starvation Distance as a Combined Metric to Assess the Quality of Transmitted Video Mohamed Hassan, Taha Landolsi, Husameldin Mukhtar, and Tamer Shanableh College of Engineering American

More information

A Study on Cross-cultural and Cross-dataset Generalizability of Music Mood Regression Models

A Study on Cross-cultural and Cross-dataset Generalizability of Music Mood Regression Models A Study on Cross-cultural and Cross-dataset Generalizability of Music Mood Regression Models Xiao Hu University of Hong Kong xiaoxhu@hku.hk Yi-Hsuan Yang Academia Sinica yang@citi.sinica.edu.tw ABSTRACT

More information

EVALUATING THE GENRE CLASSIFICATION PERFORMANCE OF LYRICAL FEATURES RELATIVE TO AUDIO, SYMBOLIC AND CULTURAL FEATURES

EVALUATING THE GENRE CLASSIFICATION PERFORMANCE OF LYRICAL FEATURES RELATIVE TO AUDIO, SYMBOLIC AND CULTURAL FEATURES EVALUATING THE GENRE CLASSIFICATION PERFORMANCE OF LYRICAL FEATURES RELATIVE TO AUDIO, SYMBOLIC AND CULTURAL FEATURES Cory McKay, John Ashley Burgoyne, Jason Hockman, Jordan B. L. Smith, Gabriel Vigliensoni

More information

An Impact Analysis of Features in a Classification Approach to Irony Detection in Product Reviews

An Impact Analysis of Features in a Classification Approach to Irony Detection in Product Reviews Universität Bielefeld June 27, 2014 An Impact Analysis of Features in a Classification Approach to Irony Detection in Product Reviews Konstantin Buschmeier, Philipp Cimiano, Roman Klinger Semantic Computing

More information

Topics in Computer Music Instrument Identification. Ioanna Karydi

Topics in Computer Music Instrument Identification. Ioanna Karydi Topics in Computer Music Instrument Identification Ioanna Karydi Presentation overview What is instrument identification? Sound attributes & Timbre Human performance The ideal algorithm Selected approaches

More information

Neural Network for Music Instrument Identi cation

Neural Network for Music Instrument Identi cation Neural Network for Music Instrument Identi cation Zhiwen Zhang(MSE), Hanze Tu(CCRMA), Yuan Li(CCRMA) SUN ID: zhiwen, hanze, yuanli92 Abstract - In the context of music, instrument identi cation would contribute

More information

AutoChorale An Automatic Music Generator. Jack Mi, Zhengtao Jin

AutoChorale An Automatic Music Generator. Jack Mi, Zhengtao Jin AutoChorale An Automatic Music Generator Jack Mi, Zhengtao Jin 1 Introduction Music is a fascinating form of human expression based on a complex system. Being able to automatically compose music that both

More information

Cryptanalysis of LILI-128

Cryptanalysis of LILI-128 Cryptanalysis of LILI-128 Steve Babbage Vodafone Ltd, Newbury, UK 22 nd January 2001 Abstract: LILI-128 is a stream cipher that was submitted to NESSIE. Strangely, the designers do not really seem to have

More information

SALES DATA REPORT

SALES DATA REPORT SALES DATA REPORT 2013-16 EXECUTIVE SUMMARY AND HEADLINES PUBLISHED NOVEMBER 2017 ANALYSIS AND COMMENTARY BY Contents INTRODUCTION 3 Introduction by Fiona Allan 4 Introduction by David Brownlee 5 HEADLINES

More information