Music Recommendation from Song Sets


Beth Logan
Cambridge Research Laboratory, HP Laboratories Cambridge
HPL-2004-148, August 30, 2004
E-mail: Beth.Logan@hp.com

Keywords: music analysis, information retrieval, multimedia indexing

Abstract: We motivate the problem of music recommendation based solely on acoustics from groups of related songs, or "song sets". We propose four solutions which can be used with any acoustic-based similarity measure. The first builds a model for each song set and recommends new songs according to their distance from this model. The other three recommend songs according to their average, median and minimum distance to the songs in the song set. For a similarity measure based on K-means models of MFCC features, experiments on a database of 18,647 songs indicated that the minimum-distance technique is the most effective, returning a valid recommendation among the top 5 32.5% of the time. The median-distance approach was next best, returning a valid recommendation among the top 5 29.5% of the time.

To be published in and presented at the International Conference on Music Information Retrieval (ISMIR), 10-14 October 2004, Barcelona, Spain. Copyright ISMIR 2004.

1 Introduction

Listeners are increasingly finding music of interest on the Web rather than through traditional distribution channels. This represents a great opportunity for new and obscure artists to introduce their music to large audiences, since the Web has relatively low entry barriers. However, it is difficult for listeners to discover such artists, because established automatic music recommendation techniques rely either on opinions or playlists generated by the public, or on meta-data generated by experts. For little-known artists, few experts are interested in categorizing their music, and the general public is unaware of their existence. Artists could self-categorize their music, but such a system is open to abuse. What is needed, then, is a way to recommend songs or artists based solely on audio data.

Automatically recommending and organizing music using audio properties has attracted much attention (e.g. see [1], [2] and references therein). However, even the best systems to date still fall far short of human expectations [1]. The inclusion of non-audio meta-data can help overcome such shortfalls, yet for new artists such meta-data does not exist. In these cases we can perhaps achieve better performance by including more audio data. We therefore propose, rather than recommending N songs given one example song, to study the easier but still very useful task of recommending one song given N related songs. The hope is that if several songs are chosen as representative of the sound the user is seeking, we will have more information on which to base our automatic recommendation.

We call this problem the song set completion problem. We use the term 'song set' rather than 'playlist' because we are not concerned with the order in which the songs will be played, merely that together they represent a sub-genre preferred by the user. Thus we consider how, given a set of user-selected songs, we would recommend another song with similar properties using audio analysis alone.
Such sets of songs might be a user's favorite songs or a group of songs by the user's favorite artist. In this paper we present and evaluate four algorithms to recommend songs from song sets. The algorithms are quite general and can be used with any audio distance measure. We test them using our previously published timbre similarity measure.

2 Recommendations from Song Sets

In this section, we first briefly describe our previously presented technique for determining acoustic similarity between songs. We then present four algorithms which can be regarded as extensions of this (or any) song similarity technique to determine the distance between songs and song sets. The approaches differ in whether they build a single model for the entire song set or a series of models for its constituent songs, and in how the model or models are compared to the songs to be recommended.

2.1 Acoustic-Based Music Similarity

In order to provide recommendations from song sets, we require a means to automatically determine the acoustic distance between a song and a song set. This is similar to the task of determining the distance between two songs, for which many algorithms have been proposed.

We have previously published and achieved good results with an acoustic similarity measure which captures information about songs' instrumentation or timbre [3]. The approach is similar in spirit to a number of other music similarity algorithms which transform raw audio into perceptually meaningful features and fit a parametric probability model to these. Similarity is then computed using a suitable distance measure between the models for each song. In our previous work, each song is first converted to a group of Mel-frequency cepstral coefficients (MFCCs). Such features capture smoothed spectral information which roughly corresponds to instrumentation and timbre. We then model these features using K-means clustering, learning the mean, covariance and weight of each cluster. Having fit models to the data, we calculate similarity by comparing the models. For this we use the Earth Mover's Distance (EMD) [4], which calculates the cost of moving probability mass between clusters to make them equivalent. For more details refer to [3].

2.2 Modeling Song Sets Directly

Our first technique for recommending songs from song sets builds a single model to represent all the songs in the set and recommends similar songs according to their distance to this model. This is equivalent to treating the song set as one long song. In this paper we use the models and distance measure from our previously proposed technique described above; however, any model-based acoustic similarity measure could be used.

2.3 Average Distance to the Songs in the Set

The approach described above compares pairs of models trained on quantities of data that could differ by an order of magnitude. Since this may be undesirable, we present an alternative approach. Instead of building one model for the song set, we build a separate model for each of its songs and then recommend songs according to their average distance to the songs in the song set.
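As an illustrative sketch of this pipeline (not the original implementation, which also weights clusters by their covariances), the following Python fits a K-means "signature" to a song's MFCC matrix and compares two signatures with the EMD, solved as a transportation linear program with Euclidean ground distance. The function names and the use of SciPy's `kmeans2` and `linprog` are our own assumptions.

```python
import numpy as np
from scipy.cluster.vq import kmeans2
from scipy.optimize import linprog
from scipy.spatial.distance import cdist

def cluster_model(features, k=16, seed=0):
    """Fit a K-means signature to a (frames x dims) feature matrix:
    cluster centroids plus the fraction of frames in each cluster."""
    centroids, labels = kmeans2(features, k, minit='++', seed=seed)
    weights = np.bincount(labels, minlength=k) / len(labels)
    keep = weights > 0                      # drop any empty clusters
    return centroids[keep], weights[keep]

def emd(model_a, model_b):
    """Earth Mover's Distance between two weighted cluster sets,
    posed as a transportation LP over the flow variables f[i, j]."""
    (ca, wa), (cb, wb) = model_a, model_b
    d = cdist(ca, cb)                       # ground-distance matrix
    n, m = d.shape
    a_eq = []
    for i in range(n):                      # source i ships exactly wa[i]
        row = np.zeros(n * m); row[i * m:(i + 1) * m] = 1; a_eq.append(row)
    for j in range(m):                      # sink j receives exactly wb[j]
        row = np.zeros(n * m); row[j::m] = 1; a_eq.append(row)
    res = linprog(d.ravel(), A_eq=np.array(a_eq),
                  b_eq=np.concatenate([wa, wb]),
                  bounds=(0, None), method='highs')
    return res.fun
```

Because both signatures carry unit mass, the row and column constraints are equalities; shifting a feature set by a constant vector shifts its centroids by the same vector, so the EMD between a model and its shifted copy equals the length of the shift.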
This technique is more scalable than the previous approach; if we form a new song set from a different combination of songs, we need not train a new model.

2.4 Median Distance to the Songs in the Set

The two techniques described above average the distance between a song and a song set, either explicitly or by merging the contents of the song set into one song. However, if one or two songs in the set are outliers or unusual, this will affect the average, probably adversely.¹ This is equivalent to saying that if the distribution of distances between a song and a song set is not Gaussian, then taking the average distance will be very sensitive to outliers.

Figures 1, 2 and 3 show histograms of the distance between a randomly selected song and the rest of the songs on three albums. As described in Section 3, we regard albums as good examples of song sets. We see from these figures that, typically, the distribution of the distance between a song and the songs in the song set is not Gaussian. We have examined such histograms for over 500 albums and found that very few are even close to being Gaussian.

[Figure 1: Histogram of the distances between a randomly chosen song from 20 Years of Jethro Tull and the rest of the songs on the album.]

We therefore seek a distance measure between songs and song sets that does not rely on the distribution of the distances between songs being Gaussian. A standard statistical technique for improving robustness to outliers when the data is non-Gaussian is to take the median instead of the average. We therefore consider recommending songs using the median of the distances between the song and each song in the song set. This approach shares the scalability advantages of the previous averaging technique but makes fewer assumptions about the nature of the distance distribution.

2.5 Minimum Distance to the Songs in the Set

Finally, we consider computing the distance between a song and a song set as the minimum of the distances between the song and the songs in the set. Although this technique could backfire if the song matches an outlier in the song set, on average it should perform well.

¹ At least for the simple distance measure studied in this paper. One can imagine a very sophisticated recommendation technique which takes note of an unusual song and decides whether it should influence a recommendation.

3 Experiments

Having presented a range of techniques to provide recommendations from song sets, we now study their performance on a database of 18,647 songs.

3.1 Experimental Setup

A natural source of song sets is user-generated playlists, which can easily be found on the Web. However, our analysis requires the audio of every song, since we extract features from it. Collecting audio for all the songs in even a subset of the playlists on the Web is unfortunately beyond our resources.
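The three set-level rules of Sections 2.3-2.5 reduce a vector of song-to-set-member distances to a single score. A minimal sketch (the function names and the candidate-by-set distance-matrix layout are our own illustrative choices):

```python
import numpy as np

# Reducers corresponding to Sections 2.3 (average), 2.4 (median), 2.5 (minimum).
_REDUCERS = {'average': np.mean, 'median': np.median, 'minimum': np.min}

def recommend(dist_matrix, mode='minimum', top_n=5):
    """Rank candidate songs by their distance to a song set.

    dist_matrix: rows are candidate songs, columns are songs in the set,
    entries are pairwise acoustic distances (e.g. EMDs between models).
    Returns the indices of the top_n closest candidates."""
    scores = _REDUCERS[mode](np.asarray(dist_matrix, dtype=float), axis=1)
    return list(np.argsort(scores)[:top_n])
```

Note how the minimum rule can favor a candidate that closely matches a single set member even when it is far from the rest, which is exactly the behavior discussed in Section 2.5.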

[Figure 2: Histogram of the distances between a randomly chosen song from Jagged Little Pill by Alanis Morissette and the rest of the songs on the album.]

[Figure 3: Histogram of the distances between a randomly chosen song from Backstreet Boys and the rest of the songs on the album.]

Table 1: Percentage of the collection covered by the main genres.

    Genre        % Collection
    Rock            68.2
    Classical        5.6
    Jazz             5.5
    World            3.7
    Newage           2.4
    Folk             2.4
    Soundtrack       2.0
    Electronica      1.9
    Vocal            1.7
    Rap              1.5

Albums, however, are a source of natural song sets, and their audio is much more readily available. We therefore evaluate our algorithms on an in-house database of 18,647 songs from 1,523 albums for which we have the full audio. The collection covers a wide variety of genres from Classical to Rock; Table 1 shows the percentage of the collection covered by the main genres.

We assume that the list of songs on each album is a valid song set. For each album, we randomly choose one song to omit. These omitted songs form our test set, and the remaining songs on each album form a song set. There are thus 1,523 test songs and 1,523 song sets in each experiment. For each song set, we recommend songs from the test set according to our algorithms. Ideally, the song omitted from each song set's album should be the first recommendation for that song set, although there could be cases in which other songs are valid choices.

We report two figures of merit. The first records the percentage of times this omitted, or 'correct', song was in the top 1, top 5, top 10 and top 20 recommendations. We also study a more relaxed definition of the correct song which includes all songs by the same artist who composed the songs in the song set.
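The leave-one-out split above can be sketched as follows; the dictionary structure mapping album ids to song-id lists is a toy assumption for illustration, not the paper's data format.

```python
import random

def build_eval_split(albums, seed=0):
    """Hold out one random song per album (Section 3.1): the held-out
    songs form the test set, the remainder of each album its song set.

    albums: dict mapping an album id to its list of song ids."""
    rng = random.Random(seed)
    test_set, song_sets = {}, {}
    for album, songs in albums.items():
        held_out = rng.choice(songs)
        test_set[album] = held_out
        song_sets[album] = [s for s in songs if s != held_out]
    return test_set, song_sets
```

With 1,523 albums this yields exactly one test song and one song set per album, matching the experimental setup described above.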
At least 25% of the time, the correct song is one of the top 5 recommendations. The best result is obtained with 64 clusters; for 256 clusters the performance degrades, presumably because insufficient data is available to learn so many clusters. If the definition of the correct song is relaxed, we obtain the results in the lower half of Table 2. Here we see that a relative improvement of about 20% is possible if one assumes any song

returned by the same artist as the song set would be a suitable recommendation.

Table 2: Percentage of times the correct song was in the top 1, 5, 10 and 20 songs returned, for song sets modeled by K-means models with various numbers of clusters and for both definitions of the correct song. Each test song is modeled by a K-means model with 16 clusters.

    Correct Song   Clusters   Top 1   Top 5   Top 10   Top 20
    Strict            16       13.7    24.7    31.4     37.5
                      64       16.8    27.9    33.5     38.9
                     256       15.6    26.8    33.5     39.5
    Relaxed           16       16.2    29.9    38.2     46.8
                      64       20.3    33.7    41.2     48.1
                     256       19.5    33.2    40.5     47.7

Table 3: Percentage of times the correct song was in the top 1, 5, 10 and 20 songs returned according to the average, median and minimum distance between it and the songs in the song set, for both definitions of the correct song.

    Distance   Correct Song   Top 1   Top 5   Top 10   Top 20
    Average    Strict          15.8    28.1    34.1     41.2
               Relaxed         18.4    33.4    41.2     50.2
    Median     Strict          17.4    29.5    35.0     42.7
               Relaxed         20.9    35.0    41.6     51.3
    Minimum    Strict          20.1    32.5    37.6     45.1
               Relaxed         26.5    41.2    47.7     56.1

We next consider recommending songs according to their average distance to the songs in the song set, as described in Section 2.3. We model each test song and each song in the song set by a K-means model with 16 clusters and average the EMD between the test song and each song in the song set. The top part of Table 3 shows the results for this experiment for both the strict and relaxed definitions of the correct song. The results are comparable to the previous case in which the song set was represented by a single model; since, as discussed, averaging is more scalable, it would be preferred.

Next we study the system described in Section 2.4, in which songs are recommended according to their median distance to the songs in the song set. The middle section of Table 3 shows these results. We see that the median provides some advantage over using the average distance or modeling the song set directly.
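The figures of merit in Tables 2 and 3 are simple top-N hit rates. A minimal sketch of the metric (our own helper, not the paper's code):

```python
def top_n_hit_rate(rankings, correct, n):
    """Percentage of song sets whose correct (held-out) song appears
    among the top n recommendations.

    rankings: dict mapping a song-set id to its ranked candidate list.
    correct:  dict mapping the same ids to the held-out song."""
    hits = sum(correct[k] in ranked[:n] for k, ranked in rankings.items())
    return 100.0 * hits / len(rankings)
```

For example, the 32.5% figure for the minimum-distance system corresponds to `top_n_hit_rate(rankings, correct, 5)` over the 1,523 song sets under the strict definition.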
Even for the strictest definition of the correct song, the correct song is returned as one of the top 5 almost 30% of the time.

Finally, we study the system which recommends songs according to their minimum distance to the songs in the song set. These results are shown in the bottom section of Table 3, and they indicate that this approach is the best. For the strictest definition of correct song, a suitable

recommendation is returned 32.5% of the time. For the more relaxed definition of correct song, the correct song is chosen in the top 5 41.2% of the time, compared with only 35.0% of the time for the median-distance system.

4 Discussion

The results are somewhat surprising. The best approach for recommending songs from song sets appears to be simply choosing songs according to the minimum distance to songs in the song set. There appears to be no advantage in modeling the song set, or even in considering any song in it other than the one closest to the test song.

This could be an artifact of our choice of song set and our distance measure. Our song sets are albums, which typically contain very closely related songs; although there are outliers, we would be unlucky to choose one of these as our test song. Also, our distance measure works best when comparing two models trained on the same amount of data; other distance measures designed to model the song set directly may be more effective. In any case, we should be wary of drawing too many conclusions from this preliminary study. We have only considered one set of test songs and two objective definitions of the correct song. More experiments on a variety of song sets, with user evaluations, are needed.

5 Conclusion and Future Work

We have motivated and proposed solutions to the problem of music recommendation based solely on acoustics from sets of related songs. We found that for a timbre-based similarity measure, the best recommendations were obtained by ranking songs by the minimum of their distances to the songs in the song set. Future work will focus on other acoustic distance measures, particularly those incorporating rhythmic information, and on learning which sounds in a song set perceptually distinguish it from the rest of the audio space. We will also consider recommending groups of songs. We hope to conduct this research on a larger, more varied collection of song sets, with greater feedback from users.
6 Acknowledgments

Thanks are due to Dave Goddeau for useful discussions and to the anonymous reviewers for their feedback.

References

[1] J.-J. Aucouturier and F. Pachet. Improving timbre similarity: How high's the sky? Journal of Negative Results in Speech and Audio Sciences, April 2004.

[2] A. Berenzweig, B. Logan, D. P. W. Ellis, and B. Whitman. A large-scale evaluation of acoustic and subjective music similarity measures. In Proceedings of the International Conference on Music Information Retrieval (ISMIR), 2003.

[3] B. Logan and A. Salomon. A music similarity function based on signal analysis. In Proceedings of ICME 2001, Tokyo, Japan, 2001.

[4] Y. Rubner, C. Tomasi, and L. Guibas. The Earth Mover's Distance as a metric for image retrieval. Technical report, Stanford University, 1998.