Society for Music Perception and Cognition
August 11–14, 2011
Eastman School of Music of the University of Rochester
Rochester, NY

Welcome

Dear SMPC 2011 attendees,

It is my great pleasure to welcome you to the 2011 meeting of the Society for Music Perception and Cognition. It is a great honor for Eastman to host this important gathering of researchers and students, from all over North America and beyond.

At Eastman, we take great pride in the importance that we accord to the research aspects of a musical education. We recognize that music perception/cognition is an increasingly important part of musical scholarship, and it has become a priority for us, both at Eastman and at the University of Rochester as a whole. This is reflected, for example, in our stewardship of the ESM/UR/Cornell Music Cognition Symposium, in the development of several new courses devoted to aspects of music perception/cognition, in the allocation of space and resources for a music cognition lab, and in the research activities of numerous faculty and students.

We are thrilled, also, that the new Eastman East Wing of the school was completed in time to serve as the SMPC 2011 conference site. We trust you will enjoy these exceptional facilities, and will take pleasure in the superb musical entertainment provided by Eastman students during your stay.

Welcome to Rochester, welcome to Eastman, welcome to SMPC 2011: we're delighted to have you here!

Sincerely,
Douglas Lowry
Dean, Eastman School of Music
SMPC 2011 Program and abstracts, Page: 2

Acknowledgements

Monetary support graciously provided by:
Eastman School of Music and Dean Douglas Lowry
Eastman Multimedia, Cognition, and Research Computing Lab
University of Rochester Committee for Interdisciplinary Studies: UCIS Music Cognition Cluster
UR Department of Brain and Cognitive Sciences
UR Center for Language Sciences
UR Program on Plasticity, Development, and Cognitive Neuroscience
UR Humanities Project
National Association of Music Merchants (NAMM)

Additional and in-kind support provided by:
Eastman Concert Office (Andrew Green, Serin Kim Hong)
Eastman Technology and Media Production Office (Helen Smith, Dominick Fraczek)
Eastman Departments of Music Theory and Organ & Historical Instruments
Christ Church Episcopal and the Eastman Rochester Organ Initiative
UR Conference and Events Office (Celia Palmer, Denise Soudan)
Student workers from the Eastman School of Music and University of Rochester Brain & Cognitive Sciences Department

SMPC 2011 Programming Committee
Erin Hannon, University of Nevada at Las Vegas
Carol Krumhansl, Cornell University
Justin London, Carleton College
Elizabeth Margulis, University of Arkansas
Peter Martens, Texas Tech University
J. Devin McAuley, Michigan State University
Peter Pfordresher (chair), University at Buffalo SUNY
Frank Russo, Ryerson University
Michael Schutz, McMaster University

SMPC 2011 Local Events Committee
Elizabeth West Marvin, Eastman School of Music
David Temperley, Eastman School of Music

SMPC Executive Board
Tonya Bergeson (Secretary), Indiana University
Andrea Halpern (President-Elect), Bucknell University
Petr Janata, University of California at Davis
Scott Lipscomb (Treasurer), University of Minnesota
Elizabeth Margulis, University of Arkansas
Aniruddh Patel (President), The Neurosciences Institute
Frank Russo, Ryerson University
David Temperley, Eastman School of Music
Finn Upham (Student Member), Northwestern University

About our Keynote Speaker: Nina Kraus

Professor Kraus is the Hugh Knowles Professor (Communication Sciences; Neurobiology & Physiology; Otolaryngology) at Northwestern University, where she directs the Auditory Neuroscience Laboratory. Dr. Kraus investigates the biological bases of speech and music. She investigates learning-associated brain plasticity throughout the lifetime in diverse populations: normal, expert (musicians), and clinical (dyslexia, autism, hearing loss), and also in animal models. In addition to being a pioneering thinker who bridges multiple disciplines (aging, development, literacy, music, and learning), Dr. Kraus is a technological innovator who roots her research in translational science. For more on Professor Kraus's research, see: http://www.brainvolts.northwestern.edu
Program overview

Talks are listed by author names. Parentheses show abstract numbers used in the pages that follow (numbers are also displayed next to each abstract title). Session chairs are listed after session titles by last name.

Thursday, 11 August, 2011

8:00–17:00  Registration (Wolk Atrium)
8:00–9:00   Continental Breakfast (Wolk Atrium)

9:00–10:20  Absolute pitch (Lipscomb), Hatch Recital Hall
  9:00–9:20    Sharma & Levitin (1)
  9:20–9:40    Loui, Zamm, & Schlaug (2)
  9:40–10:00   Marvin & Newport (3)
  10:00–10:20  Weisman, Balkwill, Hoeschele, Moscicki, & Sturdy (4)

9:00–10:20  Evolution (Martens), ESM Room 120
  9:00–9:20    Chan, McGarry, Corpuz & Russo (5)
  9:20–9:40    Parncutt (6)
  9:40–10:00   Perlovsky (7)
  10:00–10:20  Savage, Rzeszutek, Grauer, Wang, Trejaut, Lin, & Brown (8)

10:20–10:40  Break

10:40–12:00  Emotion 1 (Margulis), Hatch Recital Hall
  10:40–11:00  Trenck, Martens, & Larsen (9)
  11:00–11:20  McGarry & Russo (10)
  11:20–11:40  Temperley & Tan (11)
  11:40–12:00  Martens & Larsen (12)

10:40–12:00  Cross-modal effects (Repp), ESM Room 120
  10:40–11:00  Livingstone, Palmer, Wanderley & Thompson (13)
  11:00–11:20  Marin, Gingras & Bhattacharya (14)
  11:20–11:40  Hedger, Nusbaum, & Hoeckner (15)
  11:40–12:00  Krumhansl & Huang (16)

12:00–14:00  Lunch Break

14:00–15:40  Development (Cuddy), Hatch Recital Hall
  14:00–14:20  Adachi (17)
  14:20–14:40  Trehub, Zhou, Plantinga, & Adachi (18)
  14:40–15:00  Patel, Iversen, Brandon, & Saffran (19)
  15:00–15:20  Corrigall & Trainor (20)
  15:20–15:40  Filippa & Gratier (21)

14:00–15:40  Timbre (Hasegawa), ESM Room 120
  14:00–14:20  Chiasson, Traube, Lagarrigue, Smith, & McAdams (22)
  14:20–14:40  Tardieu & McAdams (23)
  14:40–15:00  Kendall & Vassilakis (24)
  15:00–15:20  Lembke & McAdams (25)
  15:20–15:40  Paul & Schutz (26)

15:40–16:00  Break

16:00–18:30  Plenary session: President's address, SMPC Achievement Award, and keynote lecture
  Keynote Lecture: Cognitive Factors Shape Brain Networks for Auditory Skills, Nina Kraus
  (Hatch Recital Hall; additional seating in ESM Room 120)

18:30  Opening Reception, co-sponsored by the Eastman School of Music and the National Association of Music Merchants (NAMM) (Sproull Atrium at Miller Center)

20:00  Optional Evening Concert: Mu Phi Epsilon International Competition Concert. Location depends upon whether winners include an organist: Kilbourn Hall or Christ Church Episcopal (across East Avenue). Suggested $10 donation at the door.

Presenters: Please test the audio/visual setup in the room where your talk will be held. Available times: 8:00–9:00 and 13:00–14:00.
Friday, 12 August, 2011

7:30–9:00   Special Event: Student Breakfast (Cominsky Promenade, 2nd Floor, ESM Main Building), co-sponsored by NAMM and SMPC
8:00–17:00  Registration (Wolk Atrium)
8:00–9:00   Continental Breakfast, all except student attendees (Wolk Atrium)

9:00–10:20  Imagery/Individual differences (London), Hatch Recital Hall
  9:00–9:20    Eitan & Granot (27)
  9:20–9:40    Benadon & Winkler (28)
  9:40–10:00   Aufegger & Vitouch (29)
  10:00–10:20  McAuley, Henry, & Wedd (30)

9:00–10:20  Auditory system (McAdams), ESM Room 120
  9:00–9:20    Schramm & Luebke (31)
  9:20–9:40    Saindon, Trehub, & Schellenberg (32)
  9:40–10:00   Bergeson & Peterson (33)
  10:00–10:20  Cariani (34)

10:20–10:40  Break

10:40–12:00  Symposium: Musical models of speech rhythm and melody, Hatch Recital Hall
  10:40–11:00  Brown, Chow, Weishaar & Milko (35)
  11:00–11:20  Chow, Poon, & Brown (36)
  11:20–11:40  Dilley (37)
  11:40–12:00  Port (38)

10:40–12:00  Physiological responses (Halpern), ESM Room 120
  10:40–11:00  Mitchell, Mogil, Koulis, & Levitin (39)
  11:00–11:20  Ladinig, Huron, Horn, & Brooks (40)
  11:20–11:40  Mitchell, Paisley, & Levitin (41)
  11:40–12:00  Upham & McAdams (42)

12:00–14:00  Lunch Break (all except SMPC Executive Board); Executive Board Meeting (Ranlet Lounge, 2nd Floor, Eastman Theatre)

14:00–15:40  Cross-cultural effects (Tan), Hatch Recital Hall
  14:00–14:20  Hegde, Ramanjam, & Panikar (43)
  14:20–14:40  Athanasopoulos, Moran, & Frith (44)
  14:40–15:00  Kalender, Trehub, & Schellenberg (45)
  15:00–15:20  Beckett (46)
  15:20–15:40  Vempala & Russo (47)

14:00–15:40  Neuroscience (Large), ESM Room 120
  14:00–14:20  Herholz, Halpern, & Zatorre (48)
  14:20–14:40  Norman-Haignere, McDermott, Fedorenko, & Kanwisher (49)
  14:40–15:00  Moussard, Bigand, & Peretz (50)
  15:00–15:20  Butler & Trainor (51)
  15:20–15:40  Iversen & Patel (52)

15:40–16:00  Break

16:00–18:00  Poster session 1: Abstracts A.1–A.43 (Eastman East Wing 415)

18:00  Optional Event: East End Festival
Saturday, 13 August, 2011

8:00–17:00  Registration (Wolk Atrium)
8:00–9:00   Continental Breakfast (Wolk Atrium)

9:00–10:20  Rhythm 1 (MacKenzie), Hatch Recital Hall
  9:00–9:20    Manning & Schutz (53)
  9:20–9:40    McAuley, Henry, Rajarajan, & Nave (54)
  9:40–10:00   Cogsdill & London (55)
  10:00–10:20  Poudrier & Repp (56)

9:00–10:20  Cognition 1 (Bergeson), ESM Room 120
  9:00–9:20    Schachner & Carey (57)
  9:20–9:40    Vuvan & Schmuckler (58)
  9:40–10:00   Koreimann & Vitouch (59)
  10:00–10:20  Houlihan & Levitin (60)

9:00–10:20  Metatheoretical approaches (Zbikowski), Kilbourn Hall
  9:00–9:20    Narmour (61)
  9:20–9:40    Tirovolas & Levitin (62)
  9:40–10:00   Tan (63)
  10:00–10:20  Narmour (64)

10:20–10:40  Break

10:40–12:00  Rhythm 2 (McAuley), Hatch Recital Hall
  10:40–11:00  Albin, Lee, & Chordia (65)
  11:00–11:20  Riggle (66)
  11:20–11:40  London & Cogsdill (67)
  11:40–12:00  Ammirante & Thompson (68)

10:40–12:00  Tonality and melody (Cohen), ESM Room 120
  10:40–11:00  Sears, Caplin & McAdams (69)
  11:00–11:20  Brown (70)
  11:20–11:40  Parncutt & Sapp (71)
  11:40–12:00  Miller, Wild & McAdams (72)

10:40–12:00  Computational modeling (Bartlette), Kilbourn Hall
  10:40–11:00  Temperley (73)
  11:00–11:20  Temperley & de Clercq (74)
  11:20–11:40  Large & Almonte (75)
  11:40–12:00  Mavromatis (76)

12:00–13:00  Business meeting (Hatch Recital Hall)
13:00–14:00  Lunch Break

14:00–15:40  Emotion 2 (Narmour), Hatch Recital Hall
  14:00–14:20  Margulis (77)
  14:20–14:40  Plazak (78)
  14:40–15:00  Huron & Horn (79)
  15:00–15:20  Chordia & Sastry (80)
  15:20–15:40  Thompson, Marin, & Stewart (81)

14:00–15:40  Cognition 2 (Parncutt), ESM Room 120
  14:00–14:20  Rosenthal, Quam, & Hannon (82)
  14:20–14:40  Ashley (83)
  14:40–15:00  Creel (84)
  15:00–15:20  Mavromatis & Farbood (85)
  15:20–15:40  Anderson, Duane, & Ashley (86)

14:00–15:40  Music and language (VanHandel), Kilbourn Hall
  14:00–14:20  Matsumoto & Marcum (87)
  14:20–14:40  Liu, Jiang, Thompson, Xu, Yang, & Stewart (88)
  14:40–15:00  Temperley & Temperley (89)
  15:00–15:20  Sullivan & Russo (90)
  15:20–15:40  Cox (91)

15:40–16:00  Break

16:00–17:30  Poster session 2: Abstracts B.1–B.34 (Eastman East Wing 415)

17:45–18:45  Lecture-Recital by Randall Harlow: "Acoustics and psychohaptics in a pipe organ reconstruction: Eastman's Craighead-Saunders Organ" (Christ Church Episcopal)

19:00  Banquet (Rochester Club Ballroom)
Sunday, 14 August, 2011

8:00–9:00  Continental Breakfast (Wolk Atrium)

9:00–10:20  Emotion 3 (Thompson), Hatch Recital Hall
  9:00–9:20    Egermann, Pearce, Wiggins & McAdams (92)
  9:20–9:40    Le Groux, Fabra, & Verschure (93)
  9:40–10:00   Albrecht, Huron, & Morrow (94)
  10:00–10:20  Russo & Sandstrom (95)

9:00–10:20  Performance 1 (Ashley), ESM Room 120
  9:00–9:20    Lisboa, Demos, Chaffin, & Begosh (96)
  9:20–9:40    Brown & Palmer (97)
  9:40–10:00   Gross (98)
  10:00–10:20  Devaney, Wild, Schubert, & Fujinaga (99)

10:20–10:40  Break

10:40–12:00  Analytical approaches (Huron), Hatch Recital Hall
  10:40–11:00  Aziz (100)
  11:00–11:20  Liu, Sun, & Chordia (101)
  11:20–11:40  Sigler & Handelman (102)
  11:40–12:00  Samplaski (103)

10:40–12:00  Performance 2 (Beckett), ESM Room 120
  10:40–11:00  Kruger, McLean, & Kruger (104)
  11:00–11:20  Curtis, Hegde, & Bharucha (105)
  11:20–11:40  Poon & Schutz (106)
  11:40–12:00  Pfordresher, Tilton, Mantell, & Brown (107)

Presenters: Please test the audio/visual setup in the room where your talk will be held. Available time: 8:00–9:00.
Titles and abstracts for talks

1. Effects of Musical Instrument on Absolute Pitch Ability
Vivek V. Sharma* & Daniel J. Levitin
McGill University, Montréal, Québec, Canada
* = Corresponding author, vivek.sharma@mail.mcgill.ca

Persons who possess Absolute Pitch (AP), the ability to name musical tones without an external reference, often report training on a musical instrument from a young age (Sergeant, 1969). To understand the acquisition process of AP, it would be useful to know how the musical instruments played by AP possessors influence the development of pitch templates in their long-term memory. We hypothesize that players of fixed-pitched instruments identify tones faster and more accurately than players of variable-pitched instruments because of the former group's greater exposure to precise pitch values, and the consequent preferential tuning of auditory system neurons to those values. To test our hypothesis, we examined how AP musicians labeled in-tune and detuned pitches. We tested 10 pianists and 10 violinists. Tones of 3 different timbres were presented: piano, violin and sinusoidal. Their frequencies formed a continuum of pitch classes that were individually separated by 20-cent intervals and ranged from C4 to C5, inclusive, where A4 = 440 Hz. Dependent variables were the percentages of correctly labeled tones and reaction times. The participants also rated the goodness-of-fit of each tone using a continuous scale. Because the piano is fixed-pitched, it may repetitively reinforce the codification of pitch to verbal labels within the long-term memory more effectively than the variable-pitched violin. We suspect that the study supports the hypothesized effects of tone mapping and musical training on AP acquisition, perception and memory.
2. Emotional Judgment in Absolute Pitch
Psyche Loui*, Anna Zamm, Matthew Sachs, and Gottfried Schlaug
Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, MA, USA
* = Corresponding author, ploui@bidmc.harvard.edu

Absolute Pitch (AP) is a unique phenomenon characterized by the ability to name the pitch class of any note without a reference. In recent years, AP has become a model for exploring nature-nurture interactions. While past research focused on differences between APs and controls in domains such as pitch naming, little is known about how AP possessors tackle other musical tasks. In this study we asked whether AP possessors recruit different brain resources from controls during a task in which APs are anecdotally similar to controls: the task of emotional judgment. Functional MRI was acquired from 15 APs and 15 controls (matched in age, sex, ethnicity, and musical training) as they listened to musical sound clips and rated the arousal level of each clip on the scale of 1 (low arousal) to 4 (high arousal), relative to a silent rest condition. Additionally, we acquired Diffusion Tensor Imaging (DTI) data to investigate white matter differences between AP possessors and controls. Behavioral results showed no significant difference between APs and controls. However, a second-level contrast between music and rest conditions showed that APs recruited more neural activity in left Heschl's gyrus (primary auditory cortex). Another second-level contrast between high-arousal and low-arousal music revealed increased activity in APs in the left posterior superior temporal gyrus (secondary auditory cortex). DTI showed that APs had larger connections between the left posterior superior temporal gyrus and the left posterior middle temporal gyrus, regions thought to be involved in sound perception and categorization respectively. Despite a behavioral task designed to minimize differences between APs and controls, we observed significant between-group differences in brain activity and connectivity. Results suggest that AP possessors obligatorily recruit extra neural resources for perceiving and categorizing musical sounds.
3. The Absolute Pitch Continuum: Evidence of Incipient AP in Musical Amateurs
Elizabeth W. Marvin (1)*, Elissa L. Newport (2)
(1) Eastman School of the University of Rochester, Rochester, NY, USA; (2) University of Rochester, Rochester, NY, USA
* = Corresponding author, bmarvin@esm.rochester.edu

Musicians typically view absolute pitch (AP) as an all-or-nothing proposition. Recent research reveals a different picture, however, suggesting that AP abilities exist along a continuum and that many listeners, some without extensive musical training, encode pitch information accurately in memory (e.g., Bermudez & Zatorre, 2009; Ross, Olsen, Marks & Gore, 2004; Schellenberg & Trehub, 2003). This paper reports new data that support the continuum theory. Three groups of participants (professional music theorists, freshman music majors, and liberal-arts students; n = 254) took a pitch-naming test and an implicit learning test requiring them to discriminate between pitch patterns learned during a familiarization phase and their transpositions (see also Saffran & Griepentrog, 2001). In a previous test of AP and non-AP musicians, scores on the naming and implicit learning tests were highly correlated (Marvin & Newport, 2008). In the current work, those who showed AP on pitch naming (n = 31) scored significantly higher than those who did not, verifying that the implicit learning test does measure pitch memory, without requiring pitch labels. Interestingly, examination of individual scores on the implicit learning task revealed 12 "incipient AP" participants (some from each group), who scored 88–100% correct on the implicit learning test (as high as AP participants), but averaged only 34% correct on pitch naming. This provides the unusual opportunity to examine the pitch discrimination and memory abilities of a population of listeners who appear to exhibit strong AP but without extensive musical training or pitch-labeling strategies as part of AP. Ongoing research tests these listeners for microtonal pitch discrimination, nonmusical memory (digit span), and musical memory (a musical digit-span analog). Preliminary data show comparable scores for AP musicians and for incipient-AP listeners, even those who are musical amateurs. Those with high scores on the implicit learning task do not score significantly higher on memory tests than controls, though they show better pitch discrimination in some registers.

4. Identifying Absolute Pitch Possessors Without Using a Note-Naming Task
Ronald Weisman*, Laura-Lee Balkwill, Marisa Hoeschele, Michele Moscicki, and Christopher Sturdy
Queen's University, Kingston, Ontario, Canada
* = Corresponding author, ronald.weisman@queensu.ca

Most researchers measure AP using note-naming tasks that presume fluency with the scales of Western music. If note naming constitutes the only measure, then by fiat, only trained musicians can possess AP. Here we report on an AP test that does not require a note-naming response. The participants were 60 musicians, who self-reported AP. Our pitch-chroma labeling task was adapted from challenging operant go/no-go discriminations (Weisman, Niegovan, Williams, & Sturdy, 2004) used to test songbirds, rats, and humans with tones mistuned to the musical scale. In our pitch-labeling task, we presented sine-wave tones tuned to the 12-note equal-temperament scale, in a discrimination between the first and second 6 notes in octaves four, five, and six. Results were validated against Athos, Levinson, Kistler, et al.'s (2007) sine-wave note-naming test of AP. Actual AP possessors (n = 15) began music training earlier and had more music training than nonpossessors (n = 45), but 25 nonpossessors matched to AP possessors in experience had no higher AP scores than other nonpossessors. Here for simplicity we report percent-correct scores for the pitch-labeling task, but d', A', and percent-correct measures were all highly correlated, rs > .90. Over trials AP possessors came to label the half-octave membership of the 36 tones with M = 90% accuracy; nonpossessors scored only slightly better than chance, M = 55% correct. Most important, the pitch-labeling task successfully identified the AP status of 58 of 60 participants on Athos et al.'s test. In future studies, the pitch-labeling task will be converted to a web-based protocol to test large numbers of nonmusicians. Then, using our labeling task in conjunction with Ross's (2004) reproduction test, we hope to accurately identify nonmusician AP possessors or, with enough participants from several populations, cast doubt on the hypothesis that nonmusicians can possess AP.
5. An Empirical Test of the Honest Signal Hypothesis
Lisa Chan (1)*, Lucy M. McGarry (1), Vanessa Corpuz (1), Frank A. Russo (1)
(1) Ryerson University, Toronto, Canada
* = Corresponding author, lisa.chan@psych.ryerson.ca

Several theorists have proposed that music might have functioned in our evolutionary history as an honest signal (Cross & Woodruff, 2008; Levitin, 2008; also see Darwin, 1872). The term "honest signal" originates in ethology, where it has been used to refer to a signal that has evolved to benefit the receiver as well as the signaler (e.g., the dart frog "advertises" its chemical defenses to predators with conspicuous skin coloration). In this study we assess whether music may be more "honest" than speech with regard to revealing a performer's true (experienced) emotions. Performers were induced with a happy or sad emotion using an emotion induction procedure involving music and guided imagery. Subjective evaluation of experienced emotion suggested that the inductions were highly effective. Performers were subsequently asked to perform sung or spoken phrases that were intended to convey either happiness or sadness. The intended emotion could thus be characterized as congruent or incongruent with the performer's induced emotion. Recordings of performances were evaluated by participants with regard to valence and believability. Valence ratings revealed that performers were successful in expressing the intended emotion in the emotionally congruent condition (i.e., higher valence ratings for intended happy than for intended sad) and unsuccessful for the emotionally incongruent condition (i.e., intermediate valence ratings for intended happy and for intended sad). Critically, song led to higher believability ratings than speech, owed largely to the high believability of song produced with sad expression. These results will be discussed in the context of the honest signal hypothesis and recent evidence for mimicry in perception of sung emotion.
6. Defining Music as a Step Toward Explaining its Origin
Richard Parncutt*
University of Graz, Austria
* = Corresponding author, richard.parncutt@uni-graz.at

Since the breakdown of tonality (Wagner to Schoenberg) and the emergence of ethnomusicology, musicologists have been reluctant to define music, since definitions always depend on historical, cultural, and academic context. But these historical developments merely showed that music need not be tonal and that the distinguishing features of Western music should be absent from a general definition. They also drew attention to the different meanings of "music" and its translations in different cultures and periods. Today's theories of the origin(s) of music differ in part because researchers still have different implicit definitions of music. The problem can be solved by specifying exactly what music is assumed to be, which incidentally also allows "musicology" to be defined. A definition might run as follows. Both music and language are acoustic, meaningful, gestural, rhythmic/melodic, syntactic, social, emotional, and intentional; music and language differ in that music is less lexical, more repetitive, more spiritual, less socially essential, and more expertise-based. Of course all the terms in these lists need to be explained and if possible operationalized, and individual claims supported. Given the paucity of reliable information about the behavior of early humans that could have influenced music's development, we need to explore new approaches to evaluating theories of its origin. One approach is to evaluate the extent to which each theory can parsimoniously account for or predict the listed features. Another is to evaluate the quantity and quality of relevant empirical studies that are consistent with the specific processes posited in the theory. I will present details of this new systematic approach and briefly show how it can be used to evaluate theories such as those based on mate selection, social cohesion, and motherese.
7. Musical Emotions: Functions, Origins, Evolution
Leonid Perlovsky*
Harvard University, Cambridge, and Air Force Research Lab., Hanscom AFB, MA, USA
* = Corresponding author, Leonid.Perlovsky@hanscom.af.mil

Music seems an enigma. Existing theories cannot explain its cognitive functions or evolutionary origins. Here a hypothesis is proposed based on a synthesis of cognitive science and mathematical models of the mind, which describes a fundamental role of music in the functioning and evolution of the mind, consciousness, and cultures. The talk considers ancient theories of music as well as contemporary theories advanced by leading authors in this field. Then it discusses a hypothesis that promises to unify the field and proposes a theory of musical origin based on a fundamental role of music in cognition and evolution of consciousness and culture. The talk considers a split in the vocalizations of proto-humans into two types: one less emotional and more concretely semantic, evolving into language, and the other preserving emotional connections along with semantic ambiguity, evolving into music. The proposed hypothesis departs from other theories in considering specific mechanisms of the mind-brain, which required evolution of music parallel with evolution of cultures and languages. Arguments are reviewed that evolution of language toward the semantically powerful tool of today required emancipation from emotional encumbrances. The opposite, no less powerful mechanisms required a compensatory evolution of music toward more differentiated and refined emotionality. The need for refined music in the process of cultural evolution is grounded in fundamental mechanisms of the mind. This is why today's human mind and cultures cannot exist without today's music. The proposed hypothesis gives a basis for future analysis of why different evolutionary paths of languages were paralleled by different evolutionary paths of music. Approaches toward experimental verification of this hypothesis in psychological and neuroimaging research are discussed.
8. Music as a Marker of Human Migrations: An Analysis of Song Structure vs. Singing Style
Patrick Savage (1)*, Tom Rzeszutek (1), Victor Grauer (2), Ying-fen Wang (3), Jean Trejaut (4), Marie Lin (4), Steven Brown (1)
(1) Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Canada; (2) Independent scholar, Pittsburgh, USA; (3) Graduate Institute of Musicology, National Taiwan University, Taipei, Taiwan; (4) Transfusion Medicine Laboratory, Mackay Memorial Hospital, Taipei, Taiwan
* = Corresponding author, savagepe@mcmaster.ca

The discovery that our genes trace the migration of all humans back to a single African "mitochondrial Eve" has had an enormous impact on our understanding of human pre-history. Grauer (2006) has claimed that music, too, can trace pre-historic human migrations, but critics argue that music's time depth is too shallow (i.e., music changes too rapidly to preserve ancient relationships). We predicted that if any musical features were to have the necessary time depth, they would be the structural features rather than the performance features of traditional group songs. To test this prediction, we used Cantometric codings of 222 traditional group songs from 8 Taiwanese aboriginal tribes to create separate distance matrices for music based on either song structure or singing style. Surprisingly, both distance matrices were significantly correlated (p < 0.01) with genetic distances based on mitochondrial DNA, a migration marker with well-established time depth. However, in line with our prediction, structure (r² = 0.27) accounted for twice as much variance as performance style (r² = 0.13). Independent coding of these songs using a new classification scheme that focuses exclusively on structural features confirmed the correlation with genes (r² = 0.19). Further exploratory analyses of the different structural sub-categories revealed that features related to pitch (e.g., interval size, scale) were more strongly correlated with genes (r² = 0.24) than those related to rhythm (r² = 0.12), text (r² = 0.05), texture (r² = 0.08), or form (r² = 0.13). These results suggest that, while song structure (especially pitch) may be a stronger migration marker than singing style, many types of musical features may have sufficient time depth to track pre-historic population migrations.
9. The Power of Music: The Composition and Perception of Emotion in Music
Megan Trenck*, Peter Martens, & Jeff Larsen
Texas Tech University, Lubbock, TX, USA
* = Corresponding author, metrenck@gmail.com

Melody has had a prominent place in recent studies on the emotional content of music, such as Brower (2002) and Krumhansl (2002). Further, Collier and Hubbard (2001) claim that emotional valence may be based more on the horizontal rather than the vertical aspect of music. To investigate some specifics of what makes emotions attributable to melody, a combination of undergraduate and graduate music majors at Texas Tech University were asked to compose a melody depicting either happiness or sadness. No restrictions were placed on the use of time signature, key signature, or tempo, but melodies were restricted to one monophonic line of music. Melodies were analyzed for their distribution of first-order intervals (intervals between adjacent notes), melodic spans (the distance a melody travels in one direction before a contour change, measured in semitones), and additional ways melodies behave relative to their evolving pitch mean. The findings corroborate some of the perceptual conclusions of Gabrielsson (2009) and Collier and Hubbard (2001), who found that narrow ranges bring out sadness whereas happiness is derived from wide melodic ranges and larger leaps. Next, a perceptual study was conducted to help determine how well melodies portrayed the intended emotions. Forty-nine undergraduate music majors rated their perceptions in each of the melodies of twelve different emotions, half sad-spectrum and half happy-spectrum emotions. As expected, melodies depicting happiness were composed in the major mode and melodies depicting sadness were largely composed in the minor mode. Ratings of emotions seemed not only to be based on the mode of the melody, but also on the note density, which appeared to amplify or dampen effects of mode on perceived emotions. Confirming the results of the perceptual study, these methods of melodic analysis suggest how composers might attempt to portray different emotions within the musical domain of melody.
10. The Effects of Expertise on Movement-Mediated Emotional Processing in Music
Lucy McGarry (1)*, and Frank Russo (1)
(1) Ryerson University, Toronto, Canada
* = Corresponding author, lmcgarry@psych.ryerson.ca

Many studies have demonstrated that mimicry of emotional gestures aids in their recognition. In the context of music, mimicry of performance occurs automatically and has been hypothesized to mediate musical understanding (Livingstone, Thompson & Russo, 2009). In the current study, we examined whether exaggerated mimicry of the emotional themes in music, such as that which occurs during dance, enhances understanding of emotion conveyed by music in a similar way to motor mimicry in social contexts. Thirty dancers and 33 non-dancers were tested using a within-subjects design. Participants listened to musical clips from the Bach cello suites, selected based on pilot ratings to be high in arousal and valence (happy), or low in arousal and valence (sad). During music listening, participants followed randomized instructions to move hands freely to the music, sit still, or move in a constrained manner. Afterwards, all song clips were heard again while physiological responses and ratings were taken. Results demonstrated a beneficial effect of free movement on subsequent emotional engagement with music for dancers only. For each measurement we computed a polarization score by calculating the difference between responses to happy (high arousal, positive valence) and sad (low arousal, negative valence) music. Zygomatic (smiling) muscle activation, skin conductance levels, and valence and arousal ratings all showed enhanced polarization in the free-movement condition. In addition, zygomatic activity mediated valence and arousal ratings in dancers. Non-dancers did not demonstrate these polarizations. Our results suggest that movement experts like dancers rely more on movement to process emotional stimuli in music. Future studies should examine whether this is due to a personality difference between dancers and non-dancers, or an expertise effect.
11. The Emotional Connotations of Diatonic Modes
David Temperley*, & Daphne Tan
Eastman School of Music, Rochester, NY, USA
* = Corresponding author, dtemperley@esm.rochester.edu

Diatonic modes are the scales that result when the tonic is shifted to different positions of the diatonic (major) scale. Given the C major scale, for example, the tonic may be left at C (Ionian, or "major" mode) or shifted to D (Dorian), E (Phrygian), F (Lydian), G (Mixolydian) or A (Aeolian, or "natural minor"). Many musical styles employ diatonic modes beyond Ionian, including rock and other contemporary popular styles. Experimental studies have shown that the major mode and common-practice minor mode (which is not the same as Aeolian mode) have positive and negative emotional connotations, respectively. But what of the other diatonic modes? One possible hypothesis is that modes with more raised scale degrees have more positive ("happier") emotional connotations (Huron, Yim, & Chordia, 2010). Another possible hypothesis is that the connotations of modes are driven mainly by familiarity; therefore the scales most similar to the major mode (the most familiar mode for most Western listeners) would be happiest. The predictions of these two hypotheses are partially convergent, but not totally: in particular, the Lydian mode is predicted to be happier than major by the "height" hypothesis but less happy by the "familiarity" hypothesis. In the current experiment, a set of diatonic melodies was composed; variants of each melody were constructed in each of the six different modes. In a forced-choice design, non-musician participants heard pairs of variants (i.e., the same melody in two different modes, with a fixed tonic of C) and had to judge which was happier. The data reflect a consistent pattern, with happiness decreasing as flats are added. Lydian is judged less happy than major, however, supporting the "familiarity" hypothesis over the "height" hypothesis.
12. Emotional Responses to (Modern) Modes
Peter Martens*, Jeff Larsen
Texas Tech University, Lubbock, TX, USA
* = Corresponding author, peter.martens@ttu.edu

By the 16th century, a bipartite categorization of modes and their affect was common, based on the quality of the mode's third scale step. Modes with a minor 3rd above their initial pitch in this position were grouped together as sad; those with a major 3rd in this position, happy (e.g. Zarlino, 1558). Recent research has substantiated these associations with modern major and minor scales (minor = sad, major = happy). The goal of this study is to explore if and how subjects differentiate scale structures that lie somewhere between major and minor scales on the basis of emotional content. Six four-bar diatonic melodies were newly composed, with each melody cast in the six most historical "modern" modes: Ionian, Dorian, Phrygian, Lydian, Mixolydian, and Aeolian. Stimuli were created using a classical guitar sound within Logic software, and event density was held constant. In a pilot study subjects rated composers' intent in terms of eliciting happiness and sadness for three of the melodies in all six modes. The stimuli were presented in one of two random orders, and subjects heard each stimulus once. Preliminary results indicate that a simple major/minor categorization does not sufficiently explain subject responses. As expected, the Ionian mode (major) and the Lydian mode were strongly associated with happiness overall, but not significantly more so than Aeolian (natural minor). By contrast, Dorian stood alone as having a strong association with sadness. Phrygian was weakly happy, while Mixolydian responses were neutral. Why might Dorian be, to misquote Nigel Tufnel, "the saddest of all modes"? The Dorian mode contains a minor third and minor seventh scale step, but a major sixth. This is a common mixture of characteristics from the major and minor scales (e.g. 1960s folk music), which perhaps heightened arousal when listening to these generally minor-sounding Dorian melodies, and thus the enlarged effect.
13. Production and Perception of Facial Expressions during Vocal Performance
Steven R. Livingstone (1)*, Caroline Palmer (1), Marcelo Wanderley (1), William Forde Thompson (2)
(1) McGill University, Montreal, Canada, (2) Macquarie University, Sydney, Australia
* = steven.livingstone@mcgill.ca

Much empirical and theoretical research over the last decade concerns the production and perception of facial and vocal expression. Research has predominantly focused on static representations of facial expressions (photographs), despite the fact that facial and vocal expressions are dynamic and unfold over time. The role of this dynamic information in emotional communication is unclear. We report two experiments on the role of facial expressions in the production and perception of emotions in speech and song. In Experiment 1, twelve singers with vocal experience spoke or sang statements with one of five emotional intentions (neutral, happy, very happy, sad, and very sad). Participants' facial movements were recorded with motion capture. Functional data analyses were applied to marker trajectories for the eyebrow, lip corner, and lower lip. Functional analyses of variance on marker trajectories by emotion indicated significantly different trajectories across emotion conditions for all three facial markers. Emotional valence was differentiated by movement of the lip corner and eyebrow. Song exhibited significantly larger movements than speech for the lower lip, but did not differ significantly for motion of the lip corner and eyebrow. Interestingly, movements in speech and song were found prior to the onset of vocal production, and continued long after vocal production had ended. The role of these extra-vocalisation movements was examined in Experiment 2, in which participants judged the emotional valence of recordings of speaker-singers from Experiment 1. Listeners viewed (no audio) the emotional intentions (neutral, very happy, very sad) in different presentation modes: pre-vocal production, vocal production, and post-vocal production. Preliminary results indicate that participants were highly accurate at identifying all emotions during vocal production and post-vocal production, but were significantly less accurate for pre-vocal production. These findings suggest that speech and song share facial expressions for emotional communication, transcending differences in production demands.
14. Differential Effects of Arousal and Pleasantness in Crossmodal Emotional Transfer from the Musical to the Complex Visual Domain
Manuela M. Marin (1)*, Bruno Gingras (2), Joydeep Bhattacharya (1)(3)
(1) Department of Psychology, Goldsmiths, University of London, UK, (2) Department of Cognitive Biology, University of Vienna, Austria, (3) Commission of Scientific Visualization, Austrian Academy of Sciences, Vienna, Austria
* = Corresponding author, manuela.m.marin@gmail.com

The crossmodal priming paradigm is a new approach to address basic questions about musical emotions. Recent behavioural and physiological evidence suggests that musical emotions can modulate the valence of visually evoked emotions, especially those that are induced by faces and are emotionally ambiguous (Chen, Yuan, Huang, Chen, & Li, 2008; Logeswaran & Bhattacharya, 2009; Tan, Spackman, & Bezdek, 2007). However, arousal plays a crucial role in emotional processing (Lin, Duann, Chen, & Jung, 2010; Nielen, Heslenfeld, Heinen, Van Strienen, Witter, Jonker, & Veltman, 2010; Russell, 1980) and may have confounded these priming effects. We investigated the role of arousal in crossmodal priming by combining musical primes (Romantic piano music) differing in arousal and pleasantness with complex affective pictures taken from the International Affective Picture System (IAPS). In Experiment 1, thirty-two participants (16 musicians (8 female), 16 non-musicians (8 female)) reported their felt pleasantness (i.e., valence) and arousal in response to musical primes and visual targets, presented separately. In Experiment 2, forty non-musicians (20 female) rated felt arousal and pleasantness in response to visual targets after having listened to musical primes. Experiment 3 sought to rule out the possibility of any order effects of the subjective ratings, and responses of fourteen non-musicians were collected. The results of Experiment 1 indicated that musical training was associated with elevated arousal ratings in response to unpleasant musical stimuli, whereas gender affected the coupling strength between arousal and pleasantness in the visual emotion space. Experiment 2 showed that musical primes modulated felt arousal in response to complex pictures but not pleasantness, which was further replicated in Experiment 3. These findings provide strong evidence for the differential effects of arousal and pleasantness in crossmodal emotional transfer from the musical to the complex visual domain and demonstrate the effectiveness of crossmodal priming paradigms in general emotion research.
15. Music Can Convey Movement like Prosody in Speech
Stephen C. Hedger (1)*, Howard C. Nusbaum (1), Berthold Hoeckner (1)
(1) The University of Chicago, Chicago, IL, USA
* = Corresponding author, shedger@uchicago.edu

Analog variation in the prosody of speech has recently been shown to communicate referential and descriptive information about objects (Shintel & Nusbaum, 2007). Given that composers have used similar means to putatively communicate with music, we investigated whether acoustic variation of musical properties can analogically convey descriptive information about an object. Specifically, we tested whether temporal structure in music is integrated into an analog perceptual representation as a natural part of listening. Listeners heard sentences describing objects, and the sentences were underscored with accelerating or decelerating music. After each sentence-music combination, participants saw a picture of a still or moving object and decided whether it was mentioned in the sentence. Object recognition was faster when musical motion matched visually depicted motion. These results suggest that visuo-spatial referential information can be analogically conveyed and represented by music.
16. What Does Seeing the Performer Add? It Depends on Musical Style, Amount of Stage Behavior, and Audience Expertise
Carol Lynne Krumhansl (1)*, Jennifer Huang (1,2)
(1) Cornell University, Ithaca, NY, USA, (2) Harvard University, Cambridge, MA, USA
* = Corresponding author, clk4@cornell.edu

The purpose of this study was to examine the effects of stage behavior, expertise, composer, and modality of presentation on structural, emotional, and summary ratings of piano performances. Twenty-four musically trained and 24 untrained participants rated two-minute excerpts of pieces by Bach, Chopin, and Copland, each performed by the same pianist, who was asked to vary his stage behavior from minimal to natural to exaggerated. Participants rated the performances under either audio-only or audiovisual conditions. There were strong effects of composer, stage behavior, and response scale type, as well as interactions involving these three variables and modality of presentation. The composer's style had a consistently strong effect on the performance evaluations, highlighting the importance of careful repertoire selection. The interaction between expertise, modality, and stage behavior revealed that non-musicians perceived differences across the three degrees of stage behavior only audiovisually, and not in the audio-only condition. In contrast, musicians perceived these differences under both audiovisual and audio-only conditions, with the lowest ratings for minimal stage behavior. This suggests that varying the degree of stage behavior altered the quality of the performance. In addition, the participants were asked to select two emotions that best characterized each performance. They preferentially chose more subtle emotions from Hevner's (1936) Adjective Circle over the five emotions of happiness, sadness, anger, fear, and tenderness traditionally used in music cognition studies, suggesting that these five emotions are less apt to describe the emotions conveyed through musical performance.
17. Effects of Interactions with Young Children on Japanese Women's Interpretation of Musical Babblings
Mayumi Adachi*
Hokkaido University, Sapporo, Japan
* = Corresponding author, m.adachi@let.hokudai.ac.jp

Japanese mothers and young women tend to interpret a Japanese toddler's babblings deriving from infant-directed speech contexts as speech-like and those from infant-directed song contexts as song-like. In the present study, I investigated whether interactions with young children could affect the use of vocal cues among Japanese mothers (Experiment 1) and Japanese young women (Experiment 2). In Experiment 1, 23 Japanese mothers who participated in Adachi and Ando (2010) fell into two groups based on the scores (0-12) of how actively they were singing/talking to their own child: "active" (scores 8-12, n = 13) and "less active" (scores 3-7, n = 10). Each mother's data were used to determine vocal cues that contributed to her own interpretation of 50 babblings by means of step-wise variable selection of logistic regression, with the interpretation as dependent variable (song-like versus speech-like) and 15 vocal features (identified in Adachi & Ando, 2010) as predictor variables. In Experiment 2, the same analyses will be conducted with data obtained from Japanese young women who had been interacting with 6-year-olds or younger ("experienced") and from their matched sample without such interactions ("inexperienced"). Results in Experiment 1 revealed that 11 out of 13 "active" mothers were using particular cues consistently while only 4 out of 10 "less active" mothers were doing so, χ²(1, N = 23) = 4.960, p = .026. In addition, among the mothers using particular cues, the "active" mothers used an average of more than 3 cues while the "less active" mothers used an average of 1 cue. (Results in Experiment 2 will be presented at the talk.) The present study will reveal the role of interactions with young children in current and prospective mothers' interpretations of song-like/speech-like babblings. Such information may explain why some toddlers produce more spontaneous songs than others.
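The reported statistic can be checked directly from the counts given (11 of 13 "active" vs. 4 of 10 "less active" mothers using cues consistently); a quick sketch, assuming a 2×2 Pearson chi-square without continuity correction (the abstract does not name the exact test):

```python
# Check of the reported Experiment 1 statistic from the given counts:
# "active" mothers: 11 used particular cues consistently, 2 did not;
# "less active" mothers: 4 did, 6 did not.

def pearson_chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

chi2 = pearson_chi2_2x2(11, 2, 4, 6)  # rows: "active", "less active"
print(round(chi2, 3))  # → 4.96, matching the reported chi2(1, N=23) = 4.960
```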
18. Age-Related Changes in Children's Singing
Sandra E. Trehub (1)*, Lily Zhou (2), Judy Plantinga (1), Mayumi Adachi (3)
(1) University of Toronto, Ontario, Canada, (2) McMaster University, Hamilton, Ontario, Canada, (3) Hokkaido University, Sapporo, Japan
* = Corresponding author, sandra.trehub@utoronto.ca

Several studies have documented age-related improvements in children's singing, usually by expert ratings rather than measurement. However, no study has attempted to identify factors that may contribute to such improvement. Adults sing less accurately with lyrics than with a neutral syllable (Berkowska & Dalla Bella, 2009), highlighting the demands of word retrieval. Perhaps age-related differences in singing accuracy are attributable, in part, to age-related changes in memory. In the present study we focused on interval accuracy and singing rate in children's renditions of a familiar song (ABC, Twinkle) sung with words or on the syllable /la/. Children were 4-12 years of age, 24 each at 4-6, 7-9, and 10-12 years. The first 17 beats of each performance were analyzed by means of Praat. Duration of the first two measures provided an estimate of singing rate. A regression analysis with gender, age, and singing rate revealed significant effects of age (greater pitch accuracy at older ages) and singing rate (greater accuracy with slower singing) in performances with lyrics. Regression analysis on songs sung on /la/ revealed no differences, calling into question claims of increasing singing proficiency in this age range. A two-way ANOVA (interval size, lyrics/syllables) revealed significant effects of interval size (greater pitch deviations with larger intervals), F(4, 284) = 44.79, p < 0.001, and lyrics (greater pitch deviations with lyrics), F(1, 71) = 9.18, p = .003. Regression analysis also revealed that age and singing rate had significant effects on key stability, as reflected in deviations from the tonic, but only for performances with lyrics. In short, the need to retrieve lyrics and tunes has adverse consequences for children, as reflected in reduced pitch accuracy, poor key stability, and slow singing rate. We suggest that the development of singing proficiency in childhood could be studied more productively by means of pitch and interval imitation.
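Pitch and interval deviations of the kind measured here are conventionally expressed in cents (100 cents = one equal-tempered semitone). A minimal sketch; the functions and example frequencies are illustrative, not taken from the study:

```python
import math

def cents(f1_hz, f2_hz):
    """Size of the interval from f1 to f2 in cents (100 cents = 1 semitone)."""
    return 1200 * math.log2(f2_hz / f1_hz)

def interval_deviation(f1_hz, f2_hz, target_semitones):
    """Deviation of a sung interval from its equal-tempered target, in cents."""
    return cents(f1_hz, f2_hz) - 100 * target_semitones

# An exact octave deviates by 0 cents from its 12-semitone target:
print(interval_deviation(440.0, 880.0, 12))  # → 0.0
```

A just perfect fifth (3:2 ratio), for instance, comes out about 2 cents wide of the 7-semitone equal-tempered target.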
19. Do Infants Perceive the Beat in Music? A New Perceptual Test
Aniruddh D. Patel (1)*, John R. Iversen (1), Melissa Brandon (2), Jenny Saffran (2)
(1) The Neurosciences Institute, San Diego, USA, (2) University of Wisconsin, Madison, USA
* = Corresponding author, apatel@nsi.edu

Beat perception is fundamental to music perception. How early does this ability develop? While infants do not synchronize their movements to a musical beat (Zentner & Eerola, 2010), it is possible they can perceive the beat, just as sophisticated speech perception abilities precede the ability to talk. Hence evidence for infant beat perception must come from perceptual tests. A recent event-related potential (ERP) study of beat perception in sleeping newborns suggested that they recognized the omission of the downbeat in a drum pattern (Winkler et al., 2009), but the downbeat-omission stimulus (unlike omissions at other positions) was created by silencing two drum parts at once, making it possible that the brain response was to a change in the texture of the sound rather than a reflection of beat perception. Other studies of infant meter perception have used cross-modal approaches (e.g., Phillips-Silver & Trainor, 2005) or cross-cultural approaches (e.g., Hannon & Trehub, 2005), but the sensitivities demonstrated by infants in these studies may be explainable on the basis of grouping perception and/or sensitivity to event duration patterns, without invoking beat perception. The current study used a novel perceptual test to examine beat perception in 7- to 8-month-old infants. This was a simplified version of the BAT (Beat Alignment Test; Iversen & Patel, 2008), in which a metronomic beep track is overlaid on long excerpts of real music (popular Broadway instrumental tunes). The beeps were either on the beat, too fast, or too slow. A preferential looking paradigm was used, and it was found that infants preferred the music with the on-beat overlay tracks, suggesting that they do in fact perceive the beat of complex music. The presentation will include a discussion of how the BAT might be improved for future research on infant and adult beat perception.
20. The Development of Sensitivity to Key Membership and Harmony in Young Children
Kathleen A. Corrigall (1)*, Laurel J. Trainor (1,2)
(1) McMaster Institute for Music and the Mind, Hamilton, Canada, (2) Rotman Research Institute, Baycrest Hospital, Toronto, Canada
* = Corresponding author, corrigka@mcmaster.ca

Even Western adults with no formal music training have implicit knowledge of key membership (which notes belong in a key) and harmony (chords and chord progressions). However, little research has explored the developmental trajectory of these skills, especially in young children. Thus, our primary goal was to investigate 4- and 5-year-olds' knowledge of key membership and harmony. On each trial, children watched videos of two puppets playing a 2- to 3-bar melody or chord sequence and were asked to give a prize to the puppet that played the best song. One puppet played a standard version that followed Western harmony and voice-leading rules and always ended on the tonic, while the other played one of three deviant versions: 1) atonal, which did not establish any particular key; 2) unexpected key, which replicated the standard version except for the last note or chord, which went outside the key; and 3) unexpected harmony, which replicated the standard version except for the last note or chord, which ended on the subdominant. Children were assigned to one of the three deviant conditions, and completed four trials each of melodies and chords. Our dependent measure was the proportion of trials on which children selected the puppet that played the standard. Results of the 35 4-year-olds and 36 5-year-olds tested to date revealed that 5-year-olds selected the standard version significantly more often than predicted by chance for both melodies and chords in the atonal and unexpected key conditions. In the unexpected harmony condition, 5-year-olds' performance did not differ from chance for either melodies or chords. Four-year-olds performed at chance in all conditions. These results indicate that aspects of musical pitch structure are acquired during the preschool years.
21. Investigating Mothers' Live Singing and Speaking Interaction with Preterm Infants in NICU: Preliminary Results
Manuela Filippa (1)*, Maya Gratier (1)
(1) Université Paris Ouest Nanterre La Défense, Paris, France
* = mgfilippa@libero.it

Vocal communication between mothers and infants is well documented in the first months of life (Gratier & Apter, 2009), but few observations have involved preterm infants. This article reports on the theoretical underpinnings of the study of maternal singing, considered as an important relational-based intervention in early dyadic communication between mothers and preterm infants. To date, 10 out of 20 preterm neonates have been studied. Their weight at birth was between 950 and 2410 g, and the entry criteria at the time of the recordings were (1) > 29 weeks PCA, (2) > 1000 g, (3) stable condition (absence of mechanical ventilation, no additional oxygen needed, no specific pathological conditions). All infants are tested for 6 days, during their hospital stay, in their room in their own incubators, one hour after their afternoon feeding. All mothers involved are asked, on 3 different days, to speak and sing to their infants. Before and between these days, a day with no stimulation provides comparison data. The sessions are video- and audio-recorded both during the stimulation and also for 5 minutes before and after the stimulation. Clinical parameters are automatically recorded every minute and "critical events" are marked; individual behavioral and interactional reaction responses are measured as infant engagement signals. During maternal vocal stimulation we found an increase of HR and oxygen saturation (p > 0.05), a decrease in standard deviation of clinical parameters, a decrease of "critical events," and an increase of calm alertness states. The results indicate that the maternal live speaking and singing stimulation has an activation effect on the infant, as we observe an intensification of the proto-interaction and of the calm alertness states (Als, 1994) in a clinically stable condition, with a significant increase of HR and oxygen saturation (p > 0.05).
22. Koechlin's Volume: Effects of Native Language and Musical Training on Perception of Auditory Size among Instrument Timbres
Frédéric Chiasson (1)(2)(3)*, Caroline Traube (1)(2)(3), Clément Lagarrigue (1)(2), Benneth Smith (3)(4), and Stephen McAdams (3)(4)
(1) Observatoire interdisciplinaire de recherche et création en musique (OICRM), Montréal, Canada, (2) Laboratoire informatique, acoustique et musique (LIAM), Faculté de musique, Université de Montréal, Canada, (3) Centre for Interdisciplinary Research in Music, Media and Technology (CIRMMT), Montréal, Canada, (4) Schulich School of Music, McGill University, Montréal, Canada
* = Corresponding author, frederic.chiasson@umontreal.ca

Charles Koechlin's orchestration treatise (Traité de l'orchestration) ascribes different dimensions to timbre than those usually discussed in multidimensional scaling studies: volume, or largeness, related to auditory size, and intensité, related to loudness. Koechlin gives a mean volume scale for most orchestral instruments. Studies show that auditory size perception exists for many sound sources, but none proves its relevance for instruments from different families. For both experiments of this study, we have developed methods and graphical interfaces for testing volume perception. Samples of eight orchestral instruments from the Electronic Music Studio of the University of Iowa, playing Bs and Fs mezzoforte in octaves 3, 4, and 5 (where A4 is 440 Hz), were used. We kept the first second of all samples, keeping attack transients, and added a fade-out to the last 100 ms. For each pitch category, samples were equalized in pitch, but not in loudness, to keep the loudness differences of a concert situation. Task 1 required participants to order eight sets of samples on a largeness (grosseur in French) scale from "least large" (moins gros) to "largest" (plus gros). Task 2 required them to evaluate the sounds' largeness on a ratio scale compared to a reference sample with a value of 100. Participants were compared according to native language (English vs. French), musical training (professional musicians vs. amateurs and nonmusicians), and hearing (good vs. minor hearing loss). Results suggest that participants share a common perceptual largeness among instrument timbres from different families. This common perceived largeness is well correlated with Koechlin's volume scale. Native language, musical training, and hearing have no significant effect on results. These results provide new angles for timbre research and raise questions about the influence of loudness equalization in most studies on timbre.
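The sample preparation described above (keep the first second, fade out the final 100 ms) can be sketched as follows; a linear ramp is assumed, since the abstract does not specify the fade shape:

```python
# Sketch of the stimulus trimming step: truncate each sample to its first
# second and apply a fade-out over the last 100 ms (linear ramp assumed).

def trim_and_fade(samples, sample_rate, keep_s=1.0, fade_s=0.1):
    """Truncate to keep_s seconds, then ramp the last fade_s seconds to zero."""
    out = list(samples[: int(keep_s * sample_rate)])
    n_fade = min(int(fade_s * sample_rate), len(out))
    for i in range(n_fade):
        gain = 1.0 - (i + 1) / n_fade  # 1.0 -> 0.0 across the fade region
        out[len(out) - n_fade + i] *= gain
    return out

# Two seconds of a constant-amplitude signal at a 1 kHz sample rate:
y = trim_and_fade([1.0] * 2000, sample_rate=1000)
print(len(y), y[0], y[-1])  # → 1000 1.0 0.0
```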