Beeld en Geluid

Hooked on Music
John Ashley Burgoyne, Music Cognition Group, Institute for Logic, Language and Computation, University of Amsterdam

Hooked on Music. John Ashley Burgoyne, Jan Van Balen, Dimitrios Bountouridis, Daniel Müllensiefen, Frans Wiering, Remco C. Veltkamp & Henkjan Honing, with thanks to Fleur Bouwer, Maarten Brinkerink, Aline Honingh, Berit Janssen, Richard Jong, Themistoklis Karavellas, Vincent Koops, Laura Koppenburg, Leendert van Maanen, Han van der Maas, Tobin May, Jaap Murre, Marieke Navin, Erinma Ochu, Johan Oomen, Carlos Vaquero & Bastiaan van der Weij

Henry. Dan Cohen & Michael Rossato-Bennett 2014, Alive Inside

Long-term Musical Salience
salience: the absolute noticeability of something (cf. distinctiveness, i.e., relative salience)
musical: what makes a bit of music stand out
long-term: what makes a bit of music stand out so much that it remains stored in long-term memory

Reminiscence Bumps
[Chart: ratings (0 to 10) of personal memories, recognition, and liking for hits from each 5-year period, 1950 to 2010, aligned with the years listeners, and their parents, were born and turned 20.]
Critical period: ages 15 to 25, and multi-generational: parents and grandparents.
C. Krumhansl & J. Zupnick 2013, Cascading Reminiscence Bumps in Popular Music

Explicit vs. Implicit Memory
Short-term memory experiment: two sets of melodies, some repeated. Q: old or new? The results reveal a contradiction between explicit and implicit memory.
[Slide reproduces the first page of the paper, abstracted here.] Müllensiefen and Halpern investigated how well structural features, such as note density or the relative number of changes in melodic contour, predict success in implicit and explicit memory for unfamiliar melodies, and which features elicit increasingly confident judgments of "old" in a recognition memory task. An automated analysis program computed structural aspects of melodies, both independent of any context and with reference to the other melodies in the test set and a parent corpus of pop music. A few features predicted success in both memory tasks, pointing to a shared memory component, but motivic complexity relative to the pop corpus had different effects on explicit and implicit memory. Rarer motives relative to the test set predicted hits, and rarer motives relative to the corpus predicted false alarms: evidence for both shared and separable mechanisms in implicit and explicit memory retrieval, and for the role of distinctiveness in true and false judgments of familiarity.
D. Müllensiefen & A. Halpern 2014, The Role of Features and Context in Recognition of Novel Melodies, Music Perception 31 (5): 418–35
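As a concrete sketch of this discovery-driven, feature-based approach, the following predicts "old" judgments from melodic features with a logistic regression. The feature names and synthetic data are invented for illustration; this is not the authors' actual pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Synthetic stand-in for FANTASTIC-style melody features (z-scored):
# note density, contour-change rate, motif rarity vs. test set and vs. corpus.
n = 400
X = rng.standard_normal((n, 4))

# Illustrative generating process: melodies with rarer motives attract
# more "old" judgments (purely invented coefficients).
logit = 0.8 * X[:, 2] + 0.5 * X[:, 3] - 0.2 * X[:, 0]
y = rng.random(n) < 1 / (1 + np.exp(-logit))  # True = judged "old"

model = LogisticRegression().fit(X, y)
print(dict(zip(["density", "contour_change", "rarity_testset", "rarity_corpus"],
               model.coef_.round(2)[0])))
```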

Plink trivia challenge
28 top songs of all time; 400-ms music clips; student participants; 25% identification rate for artist and title.
[Slide reproduces the first page of the paper, abstracted here.] Krumhansl presented short clips (300 and 400 ms) from popular songs of the 1960s through the 2000s to study the detail and contents of musical memory. For 400-ms clips, participants identified both artist and title on more than 25% of trials, and very accurate confidence ratings showed that this knowledge was recalled consciously. Performance was somewhat higher when a clip contained a word or part-word from the title. Even unidentified clips conveyed information about emotional content, style, and, to some extent, decade of release. Identification was markedly lower for 300-ms clips, although emotion and style judgments remained consistent. Decade of release had no effect on identification, but older songs were preferred, suggesting that the availability of recorded music alters the pattern of preferences previously assumed to be established during adolescence and early adulthood. Taken together, the results point to extraordinary abilities to identify music on the basis of highly reduced information.
Carol Krumhansl 2010, Plink: Thin Slices of Music, Music Perception 27 (5): 337–54

Chorusness
J. Van Balen, J. A. Burgoyne, et al. 2013, An Analysis of Chorus Features in Popular Song

Earworms
3,000 participants (UK). Predictors of involuntary musical imagery: popularity, recency, melodic contour, tempo (faster).
[Slide reproduces the first page of the paper, abstracted here.] Jakubowski and colleagues examined whether a song's popularity and melodic features help explain whether it becomes involuntary musical imagery (INMI, or "earworms"), the spontaneous recall and repeating of a tune in one's mind, using a dataset of tunes named as INMI by 3,000 survey participants. Songs that had achieved greater success and more recent runs in the U.K. music charts were reported more frequently as INMI. A set of 100 frequently named INMI tunes was then matched to 100 tunes never named as INMI, in terms of popularity and song style, and the two groups were compared using 83 statistical summary and corpus-based melodic features with automated classification techniques. Relative to a large pop music corpus, INMI tunes had more common global melodic contours and less common average gradients between melodic turning points than non-INMI tunes, and they also displayed faster average tempi.
Kelly Jakubowski, Sebastian Finkel, Lauren Stewart & Daniel Müllensiefen 2016, Dissecting an Earworm: Melodic Features and Song Popularity Predict Involuntary Musical Imagery, Psychology of Aesthetics, Creativity, and the Arts 10 (4). doi:10.1037/aca0000090
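The melodic features behind results like these come from corpus-based toolboxes such as FANTASTIC. Purely as an illustration of what "commonness of a global melodic contour" could mean, here is a sketch that reduces a melody to an up/down/same contour string and measures that string's relative frequency in a toy corpus (real analyses use thousands of melodies):

```python
from collections import Counter

def gross_contour(pitches):
    """Reduce a MIDI-pitch sequence to a string of u(p)/d(own)/s(ame) steps."""
    return "".join("u" if b > a else "d" if b < a else "s"
                   for a, b in zip(pitches, pitches[1:]))

# Toy corpus of melodies as MIDI pitch sequences (invented examples).
corpus = [[60, 62, 64, 62, 60], [60, 60, 67, 67, 69], [64, 62, 60, 62, 64]]
counts = Counter(gross_contour(m) for m in corpus)

def contour_commonness(pitches):
    """Relative frequency of this melody's contour class in the corpus."""
    return counts[gross_contour(pitches)] / sum(counts.values())

print(contour_commonness([62, 64, 65, 64, 62]))  # shares "uudd" with melody 1
```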

What is a hook?

What makes a hook? Mixing? Stereo balance? Melody? Tempo? Rhythm? Harmony? Sound effects? Improvisation? Lyrics? Studio editing? Distortion? Instrumentation? Dynamics?
Gary Burns 1987, A Typology of Hooks in Popular Records

Stages of play: Recognition, Singalong & Verification
Recognition: stimulus is a song and segment ID; forced binary response; response time limited (< 15 s).
Singalong: the audio mutes while the player continues singing along internally.
Verification: playback resumes either in the correct place or offset; forced binary response; response time unlimited.
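A minimal sketch of this round structure as game logic; the function name, simulated player, and probabilities are invented, and the real game is described in Burgoyne et al. 2013:

```python
import random

def hooked_round(rt_recognition_s: float, continuation_is_correct: bool):
    """Simulate one round: recognition -> singalong -> verification."""
    # Stage 1, recognition: the clip plays; forced yes/no answer within 15 s.
    if rt_recognition_s >= 15.0:
        return {"recognized": False}
    # Stage 2, singalong: the audio mutes; the player sings along in their head.
    # Stage 3, verification: playback resumes, correct or offset; forced yes/no
    # with unlimited response time. The simulated player is right 80% of the
    # time here (an invented number).
    answered_correctly = random.random() < 0.8
    return {"recognized": True, "rt_s": rt_recognition_s,
            "continuation_was_correct": continuation_is_correct,
            "verification_correct": answered_correctly}

print(hooked_round(2.2, continuation_is_correct=True))
```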

Player answers versus correct answers (joint percentages of all trials):

                       Player: Yes   Player: No   Total
Correct answer: Yes        41%           9%        50%
Correct answer: No         22%          28%        50%
Total                      63%          37%
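Reading the table as joint percentages, hit and false-alarm rates follow directly; the signal-detection sensitivity estimate d′ at the end is my addition, not reported on the slide:

```python
from scipy.stats import norm

# Joint percentages from the table: rows = correct answer, columns = player answer.
hits, misses = 0.41, 0.09  # correct answer "yes"
fas, crs = 0.22, 0.28      # correct answer "no"

hit_rate = hits / (hits + misses)  # P(player yes | correct yes) = 0.82
fa_rate = fas / (fas + crs)        # P(player yes | correct no)  = 0.44
accuracy = hits + crs              # 0.69 overall

d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)  # approx. 1.07
print(f"hit rate {hit_rate:.2f}, false-alarm rate {fa_rate:.2f}, "
      f"accuracy {accuracy:.2f}, d' {d_prime:.2f}")
```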

Measuring Catchiness

Linear Ballistic Accumulators (Brown & Heathcote 2008)
[Diagram: two accumulators, one per response, race linearly from random start points toward their thresholds b_Y ("yes") and b_N ("no"); start points are drawn uniformly below A_Y = A_N = 1.0; trial-to-trial drift rates ξ+ ~ N(v+, 0.43), ξ- ~ N(v-, 0.44), ξ0 ~ N(v0, 0.35); non-decision time t0 ~ N(0.16, 0.07).]
conservatism = ½ [(b_Y - A_Y) + (b_N - A_N)] ~ Gamma(22.16, 7.64): μ = 2.90, σ = 0.68
optimism = (b_N - A_N) / [(b_Y - A_Y) + (b_N - A_N)] ~ Beta(15.76, 15.15): μ = 0.51, σ = 0.09
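As a companion to the diagram, here is a minimal simulation of an LBA trial under the parameters shown. The drift means v_yes and v_no are illustrative, the uniform start point on [0, A] is the standard LBA assumption, and b = A + 2.9 reflects the mean conservatism reported above:

```python
import numpy as np

rng = np.random.default_rng(0)

def lba_trial(v_yes, v_no, A=1.0, b=3.9, s_yes=0.43, s_no=0.44,
              t0_mu=0.16, t0_sd=0.07):
    """One linear-ballistic-accumulator trial: 'yes' and 'no' race to threshold."""
    k = rng.uniform(0.0, A, size=2)                # uniform start points below A
    xi = rng.normal([v_yes, v_no], [s_yes, s_no])  # trial-to-trial drift rates
    xi = np.maximum(xi, 1e-9)                      # sideline non-finishing drifts
    t = (b - k) / xi                               # linear rise time to threshold
    winner = int(np.argmin(t))                     # first accumulator to b wins
    rt = t[winner] + rng.normal(t0_mu, t0_sd)      # plus non-decision time t0
    return ("yes", "no")[winner], round(float(rt), 2)

# Illustrative drift means: stronger evidence for "yes" than for "no".
print([lba_trial(v_yes=1.5, v_no=0.8) for _ in range(3)])
```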

Top 10
Rank  Artist               Title                 Year  Rec. Time (s)
  1   Spice Girls          Wannabe               1996      1.78
  2   Aretha Franklin      Think                 1968      1.85
  3   Queen                We Will Rock You      1977      1.85
  4   Christina Aguilera   Beautiful             2002      2.00
  5   Amy MacDonald        This Is the Life      2007      2.01
  6   The Police           Message in a Bottle   1979      2.08
  7   Bon Jovi             It's My Life          2000      2.16
  8   Bee Gees             Stayin' Alive         1977      2.16
  9   ABBA                 Dancing Queen         1976      2.17
 10   4 Non Blondes        What's Up             1993      2.20

BREAK

Predicting Hooks

Hook Predictors
Factor                          % Drift-Rate Increase   99.5% CI
Melodic Repetition                      12.0            [5.4, 19.0]
Vocal Prominence                         8.0            [0.8, 15.8]
Melodic Conventionality                  7.8            [1.3, 14.7]
Melodic Range Conventionality            6.8            [0.9, 13.0]
R² (marginal) = .10; R² (conditional) = .47
J. Van Balen, J. A. Burgoyne, et al. 2015, Corpus Analysis Tools for Hook Discovery
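Because expected decision time in an LBA scales roughly as (b - A/2) / v, a k% increase in drift rate v shortens recognition time by about k/(100 + k) percent. A back-of-the-envelope sketch, with an illustrative baseline chosen to match the parameters above:

```python
# Baseline: (b - A/2) = 3.4 and v = 1.0, giving a 3.40 s mean decision time.
b_minus_half_A, v = 3.4, 1.0
for factor, pct in [("Melodic Repetition", 12.0), ("Vocal Prominence", 8.0),
                    ("Melodic Conventionality", 7.8),
                    ("Melodic Range Conventionality", 6.8)]:
    new_rt = b_minus_half_A / (v * (1 + pct / 100))  # faster drift, shorter time
    speedup = 100 * (1 - new_rt / (b_minus_half_A / v))
    print(f"{factor}: {speedup:.1f}% faster ({b_minus_half_A / v:.2f} s -> {new_rt:.2f} s)")
```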

Model: Audio Features
Feature                       Coefficient   95% CI
Vocal Prominence                  0.14      [0.10, 0.18]
Timbral Conventionality           0.09      [0.05, 0.13]
Melodic Conventionality           0.06      [0.02, 0.11]
M/H Entropy Conventionality       0.06      [0.02, 0.10]
Sharpness Conventionality         0.05      [0.02, 0.09]
Harmonic Conventionality          0.05      [0.01, 0.10]
Timbral Recurrence                0.05      [0.02, 0.08]
Mel. Range Conventionality        0.05      [0.01, 0.08]
R² (marginal) = .10; R² (conditional) = .47
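Coefficients like these come from a mixed-effects regression over players and songs. Here is a hedged sketch of fitting such a model on synthetic stand-in data; all variable names and generating numbers are illustrative, and the actual specification is in Van Balen et al. 2015:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_players, n_songs = 50, 20
df = pd.DataFrame({
    "player": np.repeat(np.arange(n_players), n_songs),
    "vocal_prominence": rng.standard_normal(n_players * n_songs),
    "timbral_conventionality": rng.standard_normal(n_players * n_songs),
})
# Generate drift rates using the table's two largest coefficients,
# plus a random intercept per player (all other numbers invented).
player_effect = rng.normal(0.0, 0.5, n_players)[df["player"]]
df["drift"] = (1.0 + 0.14 * df["vocal_prominence"]
               + 0.09 * df["timbral_conventionality"]
               + player_effect + rng.normal(0.0, 0.3, len(df)))

# Mixed-effects regression with random intercepts per player,
# in the spirit of the model behind this table.
fit = smf.mixedlm("drift ~ vocal_prominence + timbral_conventionality",
                  df, groups=df["player"]).fit()
print(fit.summary())
```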

Predictions: Eurovision 2016
Rank  Country  Score  Vocal  Tim.  Mel.  MHE  Sharp.  Harm.  TR   Range
  1   ESP      10.0    3.1   0.2   0.7   1.1   0.2    0.7   0.2   1.6
  2   GBR      10.0    3.4   1.4   0.1   1.0   0.5    0.1   1.8   0.3
  3   SWE       9.8    1.8   0.9   0.3   0.4   0.3    0.3   1.0   0.3
  4   LTU       9.8    2.7   0.4   0.3   0.5   0.4    0.3   0.2   0.1
  5   DEU       9.6    3.4   0.4   0.3   0.1   0.0    0.3   0.2   0.1
  6   AUS       9.5    1.4   0.1   1.3   2.6   1.3    1.3   0.8   0.5
  7   AUT       9.5    2.7   1.1   0.8   0.6   0.3    0.8   0.3   0.4
  8   FIN       9.4    2.3   0.4   1.8   0.4   0.2    1.8   0.1   1.1
  9   CHE       9.4    2.4   0.7   0.9   1.1   0.2    0.9   0.8   1.2
 10   AZE       9.3    2.9   0.5   0.3   1.1   0.2    0.3   0.4   0.1
 12   NLD       9.1    1.5   0.4   0.6   1.2   0.4    0.6   0.7   0.7
 39   HUN       7.5    1.6   0.7   0.1   0.9   0.3    0.1   0.9   0.4
 40   MNE       7.1    0.6   0.0   0.8   0.3   2.5    0.8   0.4   0.7
 41   ISL       6.9    0.6   0.6   0.7   1.7   0.5    0.7   0.6   0.4
 42   GEO       6.8    0.3   1.2   0.3   0.0   0.1    0.3   0.0   1.6
 43   ARM       6.5    0.0   0.5   0.4   0.2   0.4    0.4   0.5   1.5

Model: Symbolic Features
Feature                   Coefficient   95% CI
Melodic Repetitivity          0.12      [0.06, 0.19]
Melodic Conventionality       0.07      [0.01, 0.13]
R² (marginal) = .07; R² (conditional) = .47
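Applying the symbolic model reduces to a two-term linear predictor. A minimal sketch of scoring a melody with it; the 0 to 10 min-max rescaling, and its bounds, are my assumption, added only to mirror the score column in the next table, so the printed value will not reproduce the table exactly:

```python
def hook_score(repetitivity, conventionality, raw_min=0.0, raw_max=1.0):
    """Two-feature linear predictor from the table, min-max scaled to 0-10.

    raw_min/raw_max are assumed corpus-wide bounds of the raw predictor,
    used only for the illustrative rescaling.
    """
    raw = 0.12 * repetitivity + 0.07 * conventionality
    return 10.0 * (raw - raw_min) / (raw_max - raw_min)

print(round(hook_score(7.1, 0.1), 1))  # feature values of the top-ranked melody
```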

Predictions: Nederlandse Liederenbank
Rank  Melody          Score  Repetitivity  Conventionality
  1   NLB152784_01    10.0       7.1            0.1
  2   NLB075307_03     9.8       7.2            0.5
  3   NLB073393_01     8.7       6.2            0.5
  4   NLB070078_01     8.0       5.4            0.2
  5   NLB076495_01     7.6       5.6            1.2
  6   NLB075158_01     7.5       4.8            0.3
  7   NLB072500_01     7.2       4.5            0.2
  8   NLB070535_01     7.2       4.5            0.3
  9   NLB073939_01     7.1       4.4            0.3
 10   NLB073269_02     7.1       4.2            0.0
180   NLB075325_02     4.8       1.1            0.1
356   NLB074182_01     3.7       0.8            0.4
357   NLB073822_01     3.6       0.7            0.9
358   NLB072154_01     3.6       1.0            0.3
359   NLB071957_03     3.6       1.0            0.5
360   NLB074603_01     3.5       1.6            0.0

Pub quiz team

A Diva Lover
Factor                   b      SE
Intensity               0.26   0.07
Recurrence              0.15   0.07
Tonal Conventionality   0.15   0.06
I. Korsmit, J. A. Burgoyne, et al. 2017, If You Wanna Be My Lover

Age Balance
Factor                    b      SE
Rhythmic Irregularity    0.30   0.09
Rhythmic Conventionality 0.20   0.08
Event Sparsity           0.19   0.08
I. Korsmit, J. A. Burgoyne, et al. 2017, If You Wanna Be My Lover

Hip-Hop Fanatic
Factor                    b      SE
Melodic Complexity       0.21   0.06
Rhythmic Conventionality 0.13   0.06
Harmonic Complexity      0.11   0.05
I. Korsmit, J. A. Burgoyne, et al. 2017, If You Wanna Be My Lover

Ketchup?
Factor       b      SE
Intensity   0.25   0.22
Recurrence  0.21   0.19
I. Korsmit, J. A. Burgoyne, et al. 2017, If You Wanna Be My Lover

Summary

Summary
Long-term musical salience: what are the musical characteristics we carry into old age?
How do we measure it? Drift rates, or rates of information accumulation in the brain.

Summary
What is a hook? It seems to be, quite literally, a catchy tune.
How do listeners differ? Divas, generations, genres, and ketchup?

WWW.HOOKEDONMUSIC.ORG.UK

References
Brown, Scott & Andrew Heathcote. 2008. The simplest complete model of choice response time: Linear ballistic accumulation. Cognitive Psychology 57 (3): 153–78. doi:10.1016/j.cogpsych.2007.12.002
Burgoyne, John Ashley, Dimitrios Bountouridis, Jan Van Balen & Henkjan J. Honing. 2013. Hooked: A game for discovering what makes music catchy. In Proceedings of the 14th International Society for Music Information Retrieval Conference, edited by Alceu de Souza Britto, Jr., Fabien Gouyon & Simon Dixon, pp. 245–50. Curitiba, Brazil.
Burns, Gary. 1987. A typology of hooks in popular records. Popular Music 6 (1): 1–20. http://www.jstor.org/stable/853162
Krumhansl, Carol L. & Justin Adam Zupnick. 2013. Cascading reminiscence bumps in popular music. Psychological Science 24 (10): 2057–68. doi:10.1177/0956797613486486
Krumhansl, Carol L. 2010. Plink: Thin slices of music. Music Perception 27 (5): 337–54. doi:10.1525/mp.2010.27.5.337
Müllensiefen, Daniel & Andrea R. Halpern. 2014. The role of features and context in recognition of novel melodies. Music Perception 31 (5): 418–35. doi:10.1525/mp.2014.31.5.418
Van Balen, Jan, John Ashley Burgoyne, Dimitrios Bountouridis, Daniel Müllensiefen & Remco C. Veltkamp. 2015. Corpus analysis tools for computational hook discovery. In Proceedings of the 16th International Society for Music Information Retrieval Conference, edited by Meinard Müller & Frans Wiering, pp. 227–33. Málaga, Spain. http://ismir2015.uma.es/articles/148_Paper.pdf