
1 Production: Old School - Professional Studio. New School - Personal Studio.

2 Distribution: Old School - large scale, physical, cumbersome. New School - small scale, virtual, portable.

3 Promotion: Old School - critics, radio DJs. New School - social networks, personalized Internet radio.

4 Age of Music Proliferation. Producers: 5M artists, 150M songs, 27K record labels. Consumers: 140M iPods, 50M customers, 31% of Americans. Connecting them: the Semantic Music Discovery Engine.

5 Talk Outline: Age of Music Proliferation - Sec. 1.1; Music Search & Discovery - Sec. 1.2; Semantic Music Discovery Engine - Sec. 1.3; Collecting Music Information - Ch. 3, 4; Autotagging System - Ch. 2; CAL Music Discovery Engine - Sec. 1.4; Concluding Remarks - Ch. 5

6 Music Search. Search: retrieving specific audio content. Common paradigms: 1. Query-by-Metadata 2. Query-by-Performance 3. Query-by-Fingerprint

7 Music Discovery. Discovery: finding new music or relationships. Common paradigms: 1. Recommendation-by-Popularity 2. Browse-by-Genre 3. Query-by-Similarity (acoustic, social, semantic) 4. Query-by-Description

8 Semantic Music Discovery Engine. Index music with tags so that it can be retrieved using a semantic description, akin to Internet search engines. Tag: a short text-based token (e.g., mellow, classic rock, acoustic slide guitar) with a real-valued weight giving the strength of association. Semantic: use meaningful words to describe music, e.g., "mellow classic rock that sounds like the Beatles and features an acoustic slide guitar."

9 Semantic Music Discovery Engine (system diagram). Collection: data sources (artists & record labels; Internet music sites) provide audio tracks, metadata, tags, and web documents. Extraction: the Music Processing System (audio characteristics), Autotagging System (autotags), and Text-mining System perform automatic annotation; surveys and annotation games provide human annotation. Discovery: everything feeds a Music Information Index that powers a discovery engine, search engine, Internet radio, and a social network.

10 Semantic Music Discovery Engine (diagram detail: artists & record labels supply audio tracks and metadata).

11 Music. Last.fm: 150M songs by 16M artists. CAL500: 500 songs by 500 artists. Long Tail economics - Chris Anderson (2004): popularity splits songs into a short tail (popular) and a long tail (obscure). Cold Start Problem: songs in the long tail are not annotated and thus cannot be discovered.

12 Metadata. Factual information about music: song, album, artist, record label; year, biographical information, charts. Heterogeneous data: strings, numbers, images, graphs.

13 Metadata

14 Semantic Music Discovery Engine (diagram detail: the Music Processing System extracts audio characteristics from the audio tracks; automatic annotation).

15 Music Processing Systems. Information extracted from the audio signal: Acoustic - noise, roughness; Rhythmic - tempo, patterns; Harmonic - key, major/minor; Structural - chorus locations.

16 Semantic Music Discovery Engine (diagram detail: surveys, annotation games, and Internet music sites supply tags; human annotation).

17 Surveys. Pandora's Music Genome Project: 400 objective "genes", 50 trained music experts, 750,000 songs annotated.

18 Surveys. CAL500 Survey: 174-tag vocabulary (genre, emotion, ...). Paid 55 undergrads to annotate music for 120 hours; 500 songs, each annotated by 3 people.

19 Human Annotations. Conducting a survey: Pros - reliable, precise, tailored to application. Cons - expensive, laborious, not scalable.

20 Annotation Games. Human computation: web-based, multi-player games with real-time interaction; players contribute useful annotations through game play. ESP Game for images [von Ahn]; Listen Game for songs.

21 Listen Game

22 Human Annotation. Survey: Pros - reliable, precise, tailored to application. Cons - expensive, laborious, not scalable. Annotation game: Pros - cheap, scalable, precise, personalized. Cons - need to create a viral user experience.

23 Music Web Sites. 1. Social tagging sites: users annotate music with tags. Last.fm - 960K distinct tags.

24 Music Web Sites. 2. Collecting web documents: song & album reviews, artist biographies, music blogs, discussion boards. Sources: Allmusic, Rolling Stone, Amazon, Mog.

25 Web Documents (tags extracted from a review). Genres: funk (3), funk-metal, funk-rock, pop, rap. Vocals: nasal, staccato enunciation, distinctive vocals. Instruments: guitar, bass, Jew's harp. Adjectives: hard-rocking (2), noisy, scratchy, sliding, positive vibes.

26 Collecting an Annotated Music Corpus. Survey: Pros - reliable, precise, tailored to application. Cons - expensive, laborious, not scalable. Annotation game: Pros - cheap, scalable, precise, personalized. Cons - need to create a viral user experience. Music web sites: Pros - cheap, annotations for the short tail. Cons - noisy, long tail is poorly represented.

27 Semantic Music Discovery Engine (diagram detail: the Autotagging System produces autotags from the audio tracks; automatic annotation).

28 Autotagging System. Our goal is to build a system that can: 1. Annotate a song with meaningful tags 2. Retrieve songs given a text-based query. Example: Frank Sinatra's "Fly Me to the Moon" - annotation yields Jazz, Male Vocals, Sad, Slow Tempo; retrieval runs in the reverse direction. Plan: learn a probabilistic model that captures the relationship between audio content and tags.

29 System Overview (diagram). Data representation: training data yields a vocabulary with annotation vectors (semantic) and extracted audio features (acoustic). Modeling: parameter estimation fits a parametric model. Evaluation: a novel song is annotated (producing a music review) and a text query is answered by inference (retrieval).

30 Semantic Representation. Choose a vocabulary of musically relevant tags: instruments, genre, emotion, vocals, usages. Annotations are converted to a real-valued vector encoding the semantic association between each tag and the song. Example: Frank Sinatra's "Fly Me to the Moon" with vocab = {funk, jazz, guitar, sad, female vocals} gives y = [0/4, 3/4, 4/4, 2/4, 0/4].
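The counts-over-annotators construction above can be sketched in a few lines; the helper name is hypothetical, and the vocabulary and counts come from the slide's example:

```python
# Sketch: turning raw annotator counts into a semantic annotation vector.
vocab = ["funk", "jazz", "guitar", "sad", "female vocals"]

def annotation_vector(counts, n_annotators):
    """Each entry is the fraction of annotators who applied the tag."""
    return [counts.get(tag, 0) / n_annotators for tag in vocab]

# "Fly Me to the Moon": 4 annotators; 3 said jazz, 4 said guitar, 2 said sad.
y = annotation_vector({"jazz": 3, "guitar": 4, "sad": 2}, n_annotators=4)
print(y)  # [0.0, 0.75, 1.0, 0.5, 0.0]
```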

31 Acoustic Representation. Each song is represented as a bag of feature vectors: pass a short time window over the audio signal, extract a feature vector for each short-time segment, and ignore the temporal relationships of the time series: X = {x_1, x_2, x_3, ..., x_T}.
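A minimal sketch of the windowing step, with made-up frame and hop sizes (the thesis's actual parameters are not given here):

```python
import numpy as np

# Sketch: turning a signal into a "bag" of short-time windows whose
# temporal order is then ignored. Frame/hop sizes are illustrative.
def bag_of_windows(signal, frame=512, hop=256):
    n = 1 + (len(signal) - frame) // hop
    return np.stack([signal[i * hop : i * hop + frame] for i in range(n)])

audio = np.random.default_rng(0).normal(size=22050)  # 1 s of fake audio
X = bag_of_windows(audio)
print(X.shape)  # (85, 512)
```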

32 Audio Features. We calculate MFCC-Delta feature vectors. Mel-frequency cepstral coefficients (MFCCs) are a low-dimensional representation of the short-term spectrum, popular for representing speech, music, and sound effects; instantaneous derivatives (deltas) encode short-time temporal information. This yields roughly 5,000 feature vectors per minute of audio. Numerous other audio representations exist: spectral features, modulation spectra, chromagrams, ...

33 Statistical Model. Supervised Multi-class Labeling (SML) model: one Gaussian Mixture Model (GMM) per tag, p(x|t). Key idea: each tag's GMM is trained with the songs associated with that tag. Notes: developed for image annotation [Carneiro & Vasconcelos 05]; scalable and parallelizable; modified for real-valued weights rather than binary labels; extended to handle multi-tag queries.

34 Modeling a Song. Algorithm: 1. Segment the audio signal 2. Extract short-time feature vectors (a bag of MFCC vectors) 3. Estimate a GMM with the EM algorithm.
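The song-modeling steps can be sketched as follows, using scikit-learn's GaussianMixture as a convenient stand-in for the EM implementation; the synthetic vectors replace real MFCC-delta features:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Sketch: estimating a song-level GMM p(x|s) over a bag of feature vectors.
# Two well-separated synthetic clusters stand in for real MFCC vectors.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 1.0, size=(500, 13)),
               rng.normal(+2.0, 1.0, size=(500, 13))])  # bag of 1000 13-dim vectors

song_gmm = GaussianMixture(n_components=2, covariance_type="diag",
                           random_state=0).fit(X)
print(song_gmm.means_.shape)  # (2, 13): one mean per mixture component
```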

35 Modeling a Tag. Algorithm: 1. Identify the songs associated with tag t 2. Estimate a song GMM for each song, p(x|s) 3. Use the Mixture Hierarchies EM algorithm [Vasconcelos 01] to learn a mixture of the song mixture components, yielding the tag model p(x|t) (contrasted in the figure with standard EM over pooled features). Benefits: computationally efficient for parameter estimation and inference; the smoothed song representation gives a better density estimate.

36 Annotation. Given a novel song X = {x_1, ..., x_T}, calculate P(t|X). Assumptions: 1. Uniform tag prior 2. Feature vectors are conditionally independent given a tag 3. Geometric average of likelihoods 4. Tags are mutually exclusive and exhaustive. The semantic multinomial P(t|X) is a multinomial distribution over the tag vocabulary; annotation picks the peaks of this multinomial.
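Under those assumptions, the semantic multinomial reduces to a softmax of per-tag average log-likelihoods. A sketch, with random numbers standing in for real per-tag GMM log-likelihoods:

```python
import numpy as np

# Sketch: computing the semantic multinomial P(t|X) with a uniform prior,
# conditional independence, and a geometric average of likelihoods.
# log_lik[t, i] = log p(x_i | tag t); random values stand in for GMM scores.
rng = np.random.default_rng(1)
V, T = 5, 100                        # vocabulary size, number of feature vectors
log_lik = rng.normal(size=(V, T))

avg_log_lik = log_lik.mean(axis=1)   # geometric average, done in log space
unnorm = np.exp(avg_log_lik - avg_log_lik.max())  # stabilized exponentiation
semantic_multinomial = unnorm / unnorm.sum()      # P(t|X), sums to 1

annotation = np.argsort(semantic_multinomial)[::-1][:3]  # peaks = top tags
print(semantic_multinomial.sum())  # 1.0
```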

37 Annotation. Semantic multinomial P(t|X) for "Give It Away" by the Red Hot Chili Peppers (figure).

38 Annotation: Automatic Music Reviews Dr. Dre (feat. Snoop Dogg) - Nuthin' but a 'G' thang This is a dance poppy, hip-hop song that is arousing and exciting. It features drum machine, backing vocals, male vocal, a nice acoustic guitar solo, and rapping, strong vocals. It is a song that is very danceable and with a heavy beat that you might like listen to while at a party. Frank Sinatra - Fly me to the moon This is a jazzy, singer / songwriter song that is calming and sad. It features acoustic guitar, piano, saxophone, a nice male vocal solo, and emotional, high-pitched vocals. It is a song with a light beat and a slow tempo that you might like listen to while hanging with friends.

39 Retrieval. 1. Annotate each song in the corpus with a semantic multinomial p = (P(t_1|X), ..., P(t_V|X)) 2. Given a text-based query, construct a query multinomial q: q_i = 1/|Q| if tag i appears in the query string (|Q| = number of query tags), q_i = 0 otherwise 3. Rank all songs by the Kullback-Leibler (KL) divergence between q and each song's p.
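A toy version of this retrieval loop, with a made-up five-tag vocabulary and hand-picked song multinomials:

```python
import numpy as np

# Sketch: ranking songs by KL divergence between a query multinomial q and
# each song's semantic multinomial p (smaller divergence = better match).
vocab = ["tender", "pop", "female vocals", "metal", "fast"]

def query_multinomial(query_tags):
    q = np.array([1.0 if t in query_tags else 0.0 for t in vocab])
    return q / q.sum()

def kl(q, p, eps=1e-12):
    mask = q > 0                      # terms with q_i = 0 contribute nothing
    return float(np.sum(q[mask] * np.log(q[mask] / (p[mask] + eps))))

songs = {"song_a": np.array([0.40, 0.30, 0.20, 0.05, 0.05]),
         "song_b": np.array([0.05, 0.10, 0.05, 0.50, 0.30])}

q = query_multinomial({"tender", "pop", "female vocals"})
ranking = sorted(songs, key=lambda s: kl(q, songs[s]))
print(ranking)  # ['song_a', 'song_b']
```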

40 Retrieval. Query: "a tender pop song with female vocals". Query multinomial: weight 0.33 each on tender, pop, female vocals. Top results: Shakira - The One; Alicia Keys - Fallin'; Evanescence - My Immortal.

41 Retrieval (query -> retrieved songs). Tender: Crosby, Stills and Nash - Guinevere; Jewel - Enter from the East; Art Tatum - Willow Weep for Me. Female Vocals: Alicia Keys - Fallin'; Shakira - The One; Junior Murvin - Police and Thieves. Tender AND Female Vocals: Jewel - Enter from the East; Evanescence - My Immortal; Cowboy Junkies - Postcard Blues.

42 Semantic Music Discovery Engine (diagram detail: the Text-mining System produces tags from web documents collected from Internet music sites).

43 Text-mining System. Relevance Scoring [Knees 08]: site-specific queries (Amazon, AMG, Billboard, etc.), weight-based approach. Step 1: collect a corpus. For each song, use a search engine to retrieve web pages with the queries: site:<website> <artist> music; site:<website> <artist> <album> music review; site:<website> <artist> <song> music review. Maintain I_{s,d}, a mapping of songs to documents.

44 Text-mining System. Step 2: autotag songs. For each tag t: 1. Query the corpus with tag t to find relevant documents, where w_{t,d} is the relevance score of document d 2. For each song s, sum the relevance scores of the documents related to s: w_{s,t} = Σ_d I_{s,d} w_{t,d}.
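Step 2 can be sketched directly from the formula; the tags, documents, and scores below are illustrative only:

```python
# Sketch: step 2 of the relevance-scoring text miner. w[t][d] is the
# relevance score of document d for tag t, and docs_for_song plays the
# role of the song-to-document mapping I_{s,d}. All values are made up.
w = {"mellow": {"doc1": 0.5, "doc2": 0.1, "doc3": 0.25}}
docs_for_song = {"fly_me_to_the_moon": ["doc1", "doc3"],
                 "give_it_away": ["doc2"]}

def song_tag_score(song, tag):
    """w_{s,t} = sum of w_{t,d} over documents d associated with song s."""
    return sum(w[tag].get(d, 0.0) for d in docs_for_song[song])

print(song_tag_score("fly_me_to_the_moon", "mellow"))  # 0.75
```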

45 Semantic Music Discovery Engine (diagram detail: all human and automatic annotations feed the Music Information Index).

46 Comparing Tags. Ground truth: CAL500 - binary labeling of song-tag pairs; Long Tail - subset of 87 obscure songs. Approaches: 1. Social tags - Last.fm 2. Annotation game - Listen Game 3. Web autotags - site-specific relevance scoring 4. Audio autotags - SML model with MFCCs.

47 Comparing Tags. For each approach, for each tag: 1. Rank the songs 2. Calculate the area under the ROC curve (AROC): 0.5 = random ranking (bad), 1.0 = perfect ranking (good). Then calculate the mean AROC over tags.
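AROC can be computed directly from a ranked list of binary relevance labels by accumulating the ROC staircase; this is a generic sketch, not the thesis's evaluation code:

```python
# Sketch: area under the ROC curve (AROC) from a ranked list of binary
# relevance labels (1 = the song has the tag), walked top to bottom.
def aroc(ranked_labels):
    pos = sum(ranked_labels)
    neg = len(ranked_labels) - pos
    tp = fp = 0
    area = 0.0
    for label in ranked_labels:
        if label == 1:
            tp += 1
        else:
            fp += 1
            area += tp        # each FP step adds a column of height TP
    return area / (pos * neg)

print(aroc([1, 1, 0, 1, 0]))  # 5/6, i.e. 0.8333...
```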

48 Comparing Tags (results table; most values did not survive transcription)

Approach        Songs             AROC
Social Tags     CAL / Long Tail   ... / 0.54
Game            CAL / Long Tail   ... / *
Web Autotags    CAL / Long Tail   ... / 0.56
Audio Autotags  CAL / Long Tail   ... / ...

49 Combining Tags. Approaches: 1. Autotagging - single best approach 2. Best Rank Interleaving 3. Isotonic Regression [Zadrozny 02] 4. RankBoost [Freund 03].

50 Combining Tags (results table; AROC values did not survive transcription)

Approach                AROC
Audio Autotags          ...
Best Rank Interleaving  ...
Isotonic Regression     ...
RankBoost               ...

51 Semantic Music Discovery Engine (full system diagram: data sources -> extraction systems -> Music Information Index -> discovery engine, search engine, Internet radio, social network).

52 CAL Music Discovery Engine

53 CAL Music Discovery Engine

54 Research Challenges. What's on tap: 1. Explore music similarity with semantics 2. Explore discriminative approaches [Eck 07] 3. Combine heterogeneous data sources: game data, social networks, web documents, popularity info 4. Focus on the person rather than the population: demographic and psychographic groups, individuals, emotional states of an individual.

55 References. Semantic annotation and retrieval [IEEE TASLP 08, SIGIR 07, ISMIR 08?]; music annotation games [ISMIR 07a]. Related: query-by-semantic-similarity [ICASSP 07, MIREX 07]; tag vocabulary selection with sparse CCA [ISMIR 07b]; supervised music boundary detection [ISMIR 07c]. Work in progress: 1. Combining tags from multiple sources - rank aggregation, kernel combination [ISMIR 08?] 2. Music similarity with semantics 3. (More social) music annotation games.

56 Thanks. Gert, Charles, Lawrence, Shlomo, Serge, Sanjoy - advice and perspective. Gary Cottrell, Virginia de Sa, IGERT - enabling creative and interdisciplinary pursuits. Damien O'Malley, Aron Tremble, VLC - thinking beyond the walls of academia. Luke Barrington, Antoni Chan, David Torres - friends and collaborators.

57 "Talking about music is like dancing about architecture: it's a really stupid thing to want to do." - Elvis Costello, and others. Douglas Turnbull, Computer Audition Laboratory, UC San Diego. dturnbul@cs.ucsd.edu, cs.ucsd.edu/~dturnbul

58 Design and Development of a Semantic Music Discovery Engine. Douglas Turnbull, Ph.D. Thesis Defense, University of California, San Diego. Committee: Gert Lanckriet, Charles Elkan, Lawrence Saul, Shlomo Dubnov, Serge Belongie, Sanjoy Dasgupta. May 7,

59 The Age of Music Proliferation. Production: 5M artist pages, 150M distinct songs. Distribution: 1.5M simultaneous P2P users (Feb '01), 27K record labels, 4B songs to 50M customers. Consumption: 11M Internet radio users, 110M iPods sold.

60 Quantifying Retrieval. Rank-order test-set songs by the KL divergence between a query multinomial and their semantic multinomials; use 1-, 2-, and 3-word queries with 5 or more examples. Metric: area under the ROC curve (AROC). Worked example (figure): ranking songs by "Romantic" and plotting the true-positive rate against the false-positive rate gives AROC = 5/6. Mean AROC is the average AROC over a large number of queries.

61 Comparing Tags (results table; density and AROC values largely did not survive transcription)

Approach        Source       Songs             Density  AROC
Ground Truth    CAL500       All / Long-Tail   ...      ...
Social Tags     Last.fm      All / Long-Tail   ...      ...
Game            Listen Game  All / Long-Tail   * / *    ...
Web Autotags    -            All / Long-Tail   ...      ...
Audio Autotags  -            All / Long-Tail   ...      ...

62 Music & Technology. Technology is changing how music is produced, distributed, promoted, and consumed.


More information

Can Song Lyrics Predict Genre? Danny Diekroeger Stanford University

Can Song Lyrics Predict Genre? Danny Diekroeger Stanford University Can Song Lyrics Predict Genre? Danny Diekroeger Stanford University danny1@stanford.edu 1. Motivation and Goal Music has long been a way for people to express their emotions. And because we all have a

More information

Audio Feature Extraction for Corpus Analysis

Audio Feature Extraction for Corpus Analysis Audio Feature Extraction for Corpus Analysis Anja Volk Sound and Music Technology 5 Dec 2017 1 Corpus analysis What is corpus analysis study a large corpus of music for gaining insights on general trends

More information

EE391 Special Report (Spring 2005) Automatic Chord Recognition Using A Summary Autocorrelation Function

EE391 Special Report (Spring 2005) Automatic Chord Recognition Using A Summary Autocorrelation Function EE391 Special Report (Spring 25) Automatic Chord Recognition Using A Summary Autocorrelation Function Advisor: Professor Julius Smith Kyogu Lee Center for Computer Research in Music and Acoustics (CCRMA)

More information

Automatic Music Genre Classification

Automatic Music Genre Classification Automatic Music Genre Classification Nathan YongHoon Kwon, SUNY Binghamton Ingrid Tchakoua, Jackson State University Matthew Pietrosanu, University of Alberta Freya Fu, Colorado State University Yue Wang,

More information

A QUERY BY EXAMPLE MUSIC RETRIEVAL ALGORITHM

A QUERY BY EXAMPLE MUSIC RETRIEVAL ALGORITHM A QUER B EAMPLE MUSIC RETRIEVAL ALGORITHM H. HARB AND L. CHEN Maths-Info department, Ecole Centrale de Lyon. 36, av. Guy de Collongue, 69134, Ecully, France, EUROPE E-mail: {hadi.harb, liming.chen}@ec-lyon.fr

More information

Topic 10. Multi-pitch Analysis

Topic 10. Multi-pitch Analysis Topic 10 Multi-pitch Analysis What is pitch? Common elements of music are pitch, rhythm, dynamics, and the sonic qualities of timbre and texture. An auditory perceptual attribute in terms of which sounds

More information

STRUCTURAL CHANGE ON MULTIPLE TIME SCALES AS A CORRELATE OF MUSICAL COMPLEXITY

STRUCTURAL CHANGE ON MULTIPLE TIME SCALES AS A CORRELATE OF MUSICAL COMPLEXITY STRUCTURAL CHANGE ON MULTIPLE TIME SCALES AS A CORRELATE OF MUSICAL COMPLEXITY Matthias Mauch Mark Levy Last.fm, Karen House, 1 11 Bache s Street, London, N1 6DL. United Kingdom. matthias@last.fm mark@last.fm

More information

/$ IEEE

/$ IEEE 564 IEEE TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, VOL. 18, NO. 3, MARCH 2010 Source/Filter Model for Unsupervised Main Melody Extraction From Polyphonic Audio Signals Jean-Louis Durrieu,

More information

Effects of acoustic degradations on cover song recognition

Effects of acoustic degradations on cover song recognition Signal Processing in Acoustics: Paper 68 Effects of acoustic degradations on cover song recognition Julien Osmalskyj (a), Jean-Jacques Embrechts (b) (a) University of Liège, Belgium, josmalsky@ulg.ac.be

More information

Automatic Extraction of Popular Music Ringtones Based on Music Structure Analysis

Automatic Extraction of Popular Music Ringtones Based on Music Structure Analysis Automatic Extraction of Popular Music Ringtones Based on Music Structure Analysis Fengyan Wu fengyanyy@163.com Shutao Sun stsun@cuc.edu.cn Weiyao Xue Wyxue_std@163.com Abstract Automatic extraction of

More information

GENDER IDENTIFICATION AND AGE ESTIMATION OF USERS BASED ON MUSIC METADATA

GENDER IDENTIFICATION AND AGE ESTIMATION OF USERS BASED ON MUSIC METADATA GENDER IDENTIFICATION AND AGE ESTIMATION OF USERS BASED ON MUSIC METADATA Ming-Ju Wu Computer Science Department National Tsing Hua University Hsinchu, Taiwan brian.wu@mirlab.org Jyh-Shing Roger Jang Computer

More information

hit), and assume that longer incidental sounds (forest noise, water, wind noise) resemble a Gaussian noise distribution.

hit), and assume that longer incidental sounds (forest noise, water, wind noise) resemble a Gaussian noise distribution. CS 229 FINAL PROJECT A SOUNDHOUND FOR THE SOUNDS OF HOUNDS WEAKLY SUPERVISED MODELING OF ANIMAL SOUNDS ROBERT COLCORD, ETHAN GELLER, MATTHEW HORTON Abstract: We propose a hybrid approach to generating

More information

MUSIC tags are descriptive keywords that convey various

MUSIC tags are descriptive keywords that convey various JOURNAL OF L A TEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2015 1 The Effects of Noisy Labels on Deep Convolutional Neural Networks for Music Tagging Keunwoo Choi, György Fazekas, Member, IEEE, Kyunghyun Cho,

More information

Introductions to Music Information Retrieval

Introductions to Music Information Retrieval Introductions to Music Information Retrieval ECE 272/472 Audio Signal Processing Bochen Li University of Rochester Wish List For music learners/performers While I play the piano, turn the page for me Tell

More information

Breakscience. Technological and Musicological Research in Hardcore, Jungle, and Drum & Bass

Breakscience. Technological and Musicological Research in Hardcore, Jungle, and Drum & Bass Breakscience Technological and Musicological Research in Hardcore, Jungle, and Drum & Bass Jason A. Hockman PhD Candidate, Music Technology Area McGill University, Montréal, Canada Overview 1 2 3 Hardcore,

More information

Statistical Modeling and Retrieval of Polyphonic Music

Statistical Modeling and Retrieval of Polyphonic Music Statistical Modeling and Retrieval of Polyphonic Music Erdem Unal Panayiotis G. Georgiou and Shrikanth S. Narayanan Speech Analysis and Interpretation Laboratory University of Southern California Los Angeles,

More information

Enhancing Music Maps

Enhancing Music Maps Enhancing Music Maps Jakob Frank Vienna University of Technology, Vienna, Austria http://www.ifs.tuwien.ac.at/mir frank@ifs.tuwien.ac.at Abstract. Private as well as commercial music collections keep growing

More information

Chord Classification of an Audio Signal using Artificial Neural Network

Chord Classification of an Audio Signal using Artificial Neural Network Chord Classification of an Audio Signal using Artificial Neural Network Ronesh Shrestha Student, Department of Electrical and Electronic Engineering, Kathmandu University, Dhulikhel, Nepal ---------------------------------------------------------------------***---------------------------------------------------------------------

More information

Lyrics Classification using Naive Bayes

Lyrics Classification using Naive Bayes Lyrics Classification using Naive Bayes Dalibor Bužić *, Jasminka Dobša ** * College for Information Technologies, Klaićeva 7, Zagreb, Croatia ** Faculty of Organization and Informatics, Pavlinska 2, Varaždin,

More information

Lecture 9 Source Separation

Lecture 9 Source Separation 10420CS 573100 音樂資訊檢索 Music Information Retrieval Lecture 9 Source Separation Yi-Hsuan Yang Ph.D. http://www.citi.sinica.edu.tw/pages/yang/ yang@citi.sinica.edu.tw Music & Audio Computing Lab, Research

More information

Retrieval of textual song lyrics from sung inputs

Retrieval of textual song lyrics from sung inputs INTERSPEECH 2016 September 8 12, 2016, San Francisco, USA Retrieval of textual song lyrics from sung inputs Anna M. Kruspe Fraunhofer IDMT, Ilmenau, Germany kpe@idmt.fraunhofer.de Abstract Retrieving the

More information

A Categorical Approach for Recognizing Emotional Effects of Music

A Categorical Approach for Recognizing Emotional Effects of Music A Categorical Approach for Recognizing Emotional Effects of Music Mohsen Sahraei Ardakani 1 and Ehsan Arbabi School of Electrical and Computer Engineering, College of Engineering, University of Tehran,

More information

Unifying Low-level and High-level Music. Similarity Measures

Unifying Low-level and High-level Music. Similarity Measures Unifying Low-level and High-level Music 1 Similarity Measures Dmitry Bogdanov, Joan Serrà, Nicolas Wack, Perfecto Herrera, and Xavier Serra Abstract Measuring music similarity is essential for multimedia

More information

The Million Song Dataset

The Million Song Dataset The Million Song Dataset AUDIO FEATURES The Million Song Dataset There is no data like more data Bob Mercer of IBM (1985). T. Bertin-Mahieux, D.P.W. Ellis, B. Whitman, P. Lamere, The Million Song Dataset,

More information

Automatic Music Clustering using Audio Attributes

Automatic Music Clustering using Audio Attributes Automatic Music Clustering using Audio Attributes Abhishek Sen BTech (Electronics) Veermata Jijabai Technological Institute (VJTI), Mumbai, India abhishekpsen@gmail.com Abstract Music brings people together,

More information

Improving Frame Based Automatic Laughter Detection

Improving Frame Based Automatic Laughter Detection Improving Frame Based Automatic Laughter Detection Mary Knox EE225D Class Project knoxm@eecs.berkeley.edu December 13, 2007 Abstract Laughter recognition is an underexplored area of research. My goal for

More information

Singer Identification

Singer Identification Singer Identification Bertrand SCHERRER McGill University March 15, 2007 Bertrand SCHERRER (McGill University) Singer Identification March 15, 2007 1 / 27 Outline 1 Introduction Applications Challenges

More information

Multiple instrument tracking based on reconstruction error, pitch continuity and instrument activity

Multiple instrument tracking based on reconstruction error, pitch continuity and instrument activity Multiple instrument tracking based on reconstruction error, pitch continuity and instrument activity Holger Kirchhoff 1, Simon Dixon 1, and Anssi Klapuri 2 1 Centre for Digital Music, Queen Mary University

More information

Music Similarity and Cover Song Identification: The Case of Jazz

Music Similarity and Cover Song Identification: The Case of Jazz Music Similarity and Cover Song Identification: The Case of Jazz Simon Dixon and Peter Foster s.e.dixon@qmul.ac.uk Centre for Digital Music School of Electronic Engineering and Computer Science Queen Mary

More information

Tempo and Beat Tracking

Tempo and Beat Tracking Tutorial Automatisierte Methoden der Musikverarbeitung 47. Jahrestagung der Gesellschaft für Informatik Tempo and Beat Tracking Meinard Müller, Christof Weiss, Stefan Balke International Audio Laboratories

More information

Semi-supervised Musical Instrument Recognition

Semi-supervised Musical Instrument Recognition Semi-supervised Musical Instrument Recognition Master s Thesis Presentation Aleksandr Diment 1 1 Tampere niversity of Technology, Finland Supervisors: Adj.Prof. Tuomas Virtanen, MSc Toni Heittola 17 May

More information

Information storage & retrieval systems Audiovisual materials

Information storage & retrieval systems Audiovisual materials Jonathan B. Moore. Evaluating the spectral clustering segmentation algorithm for describing diverse music collections. A Master s Paper for the M.S. in L.S degree. May, 2016. 104 pages. Advisor: Stephanie

More information

Recognition and Summarization of Chord Progressions and Their Application to Music Information Retrieval

Recognition and Summarization of Chord Progressions and Their Application to Music Information Retrieval Recognition and Summarization of Chord Progressions and Their Application to Music Information Retrieval Yi Yu, Roger Zimmermann, Ye Wang School of Computing National University of Singapore Singapore

More information

Music Genre Classification

Music Genre Classification Music Genre Classification chunya25 Fall 2017 1 Introduction A genre is defined as a category of artistic composition, characterized by similarities in form, style, or subject matter. [1] Some researchers

More information

Singer Recognition and Modeling Singer Error

Singer Recognition and Modeling Singer Error Singer Recognition and Modeling Singer Error Johan Ismael Stanford University jismael@stanford.edu Nicholas McGee Stanford University ndmcgee@stanford.edu 1. Abstract We propose a system for recognizing

More information

Learning Word Meanings and Descriptive Parameter Spaces from Music. Brian Whitman, Deb Roy and Barry Vercoe MIT Media Lab

Learning Word Meanings and Descriptive Parameter Spaces from Music. Brian Whitman, Deb Roy and Barry Vercoe MIT Media Lab Learning Word Meanings and Descriptive Parameter Spaces from Music Brian Whitman, Deb Roy and Barry Vercoe MIT Media Lab Music intelligence Structure Structure Genre Genre / / Style Style ID ID Song Song

More information

HIT SONG SCIENCE IS NOT YET A SCIENCE

HIT SONG SCIENCE IS NOT YET A SCIENCE HIT SONG SCIENCE IS NOT YET A SCIENCE François Pachet Sony CSL pachet@csl.sony.fr Pierre Roy Sony CSL roy@csl.sony.fr ABSTRACT We describe a large-scale experiment aiming at validating the hypothesis that

More information

19 th INTERNATIONAL CONGRESS ON ACOUSTICS MADRID, 2-7 SEPTEMBER 2007

19 th INTERNATIONAL CONGRESS ON ACOUSTICS MADRID, 2-7 SEPTEMBER 2007 19 th INTERNATIONAL CONGRESS ON ACOUSTICS MADRID, 2-7 SEPTEMBER 2007 AN HMM BASED INVESTIGATION OF DIFFERENCES BETWEEN MUSICAL INSTRUMENTS OF THE SAME TYPE PACS: 43.75.-z Eichner, Matthias; Wolff, Matthias;

More information

Musical Hit Detection

Musical Hit Detection Musical Hit Detection CS 229 Project Milestone Report Eleanor Crane Sarah Houts Kiran Murthy December 12, 2008 1 Problem Statement Musical visualizers are programs that process audio input in order to

More information

An Examination of Foote s Self-Similarity Method

An Examination of Foote s Self-Similarity Method WINTER 2001 MUS 220D Units: 4 An Examination of Foote s Self-Similarity Method Unjung Nam The study is based on my dissertation proposal. Its purpose is to improve my understanding of the feature extractors

More information

Detecting Musical Key with Supervised Learning

Detecting Musical Key with Supervised Learning Detecting Musical Key with Supervised Learning Robert Mahieu Department of Electrical Engineering Stanford University rmahieu@stanford.edu Abstract This paper proposes and tests performance of two different

More information

A CLASSIFICATION APPROACH TO MELODY TRANSCRIPTION

A CLASSIFICATION APPROACH TO MELODY TRANSCRIPTION A CLASSIFICATION APPROACH TO MELODY TRANSCRIPTION Graham E. Poliner and Daniel P.W. Ellis LabROSA, Dept. of Electrical Engineering Columbia University, New York NY 127 USA {graham,dpwe}@ee.columbia.edu

More information

Singing Pitch Extraction and Singing Voice Separation

Singing Pitch Extraction and Singing Voice Separation Singing Pitch Extraction and Singing Voice Separation Advisor: Jyh-Shing Roger Jang Presenter: Chao-Ling Hsu Multimedia Information Retrieval Lab (MIR) Department of Computer Science National Tsing Hua

More information

Recognising Cello Performers using Timbre Models

Recognising Cello Performers using Timbre Models Recognising Cello Performers using Timbre Models Chudy, Magdalena; Dixon, Simon For additional information about this publication click this link. http://qmro.qmul.ac.uk/jspui/handle/123456789/5013 Information

More information

ABSOLUTE OR RELATIVE? A NEW APPROACH TO BUILDING FEATURE VECTORS FOR EMOTION TRACKING IN MUSIC

ABSOLUTE OR RELATIVE? A NEW APPROACH TO BUILDING FEATURE VECTORS FOR EMOTION TRACKING IN MUSIC ABSOLUTE OR RELATIVE? A NEW APPROACH TO BUILDING FEATURE VECTORS FOR EMOTION TRACKING IN MUSIC Vaiva Imbrasaitė, Peter Robinson Computer Laboratory, University of Cambridge, UK Vaiva.Imbrasaite@cl.cam.ac.uk

More information

TEMPORAL MUSIC CONTEXT IDENTIFICATION WITH USER LISTENING DATA

TEMPORAL MUSIC CONTEXT IDENTIFICATION WITH USER LISTENING DATA TEMPORAL MUSIC CONTEXT IDENTIFICATION WITH USER LISTENING DATA Cameron Summers Gracenote csummers@gracenote.com Phillip Popp Gracenote ppopp@gracenote.com ABSTRACT The times when music is played can indicate

More information