Proceedings of Meetings on Acoustics, Volume 19, ICA 2013 Montreal
Montreal, Canada, 2-7 June 2013
Speech Communication
Session 4pSCb: Production and Perception I: Beyond the Speech Segment (Poster Session)

4pSCb34. Effects of musical experience on perception of audiovisual synchrony for speech and music

Dawn Behne*, Magnus Alm, Aleksander Berg, Thomas Engell, Camilla Foyn, Canutte Johnsen, Thulasy Srigaran and Ane E. Torsdottir

*Corresponding author's address: Psychology, NTNU, Dragvoll A, Trondheim, NO-7491, Sør-Trondelag, Norway

Perception of audiovisual synchrony relies on matching temporal attributes across sensory modalities. To investigate the influence of experience on cross-modal temporal integration, the effect of musical experience on the perception of audiovisual synchrony was studied with speech and music stimuli. Nine musicians and nine non-musicians meeting strict group criteria provided simultaneity judgments to audiovisual /ba/ and guitar-strum stimuli, each with 23 levels of audiovisual alignment. Although results for the speech and music stimuli differed, the two groups did not differ in their responses to the two types of stimuli. Consistent with previous research, responses from both groups show less temporal sensitivity to stimuli with video-lead than audio-lead. No significant between-group difference was found for video-lead thresholds. However, both for the speech and music stimuli, musicians had an audio-lead threshold significantly closer to the point of physical synchrony than non-musicians, indicating the musicians' greater acuity for audiovisual temporal coherence. Overall this leads to a non-significant tendency for a narrower window of synchrony for musicians than non-musicians. Findings are consistent with predictions that cross-modal temporal experience increases threshold acuity for audio-lead, but not for video-lead, and also support theories suggesting greater efficiency with relevant experience.

Published by the Acoustical Society of America through the American Institute of Physics. © 2013 Acoustical Society of America [DOI: / ]. Received 28 Jan 2013; published 2 Jun 2013.

INTRODUCTION

Natural speech perception makes use of multisensory information. The unity assumption suggests that when information reaches the different senses, the more properties the inputs have in common, such as occurring close in time, the more likely the brain is to treat them as having a single source (e.g., Welch and Warren, 1980). Precision in relating audio and visual information to a mutual source speeds up neural processing (van Wassenhove, Grant, and Poeppel, 2005) and gives a perceptual benefit (e.g., Grant and Seitz, 1998; Sumby and Pollack, 1954). The window of perceived subjective simultaneity for audiovisual speech is roughly a few hundred milliseconds (e.g., Conrey and Pisoni, 2006; Grant et al., 2004; Hay-McCutcheon, Pisoni and Hunt, 2009; McGrath and Summerfield, 1985). However, the relative alignment of the audio and video signals has differing effects: when the video precedes the audio, a greater physical misalignment is needed for the asynchrony to be perceived than when the audio precedes the video (e.g., Conrey and Pisoni, 2006; Hay-McCutcheon et al., 2009). This asymmetry of subjective perception around the point of audiovisual synchrony has generally been ascribed to perceptual accommodation to differences in the propagation speeds of sound and light and in the corresponding neural processing times for the different senses (Keetels and Vroomen, 2012). Taken together, these factors imply that perception of video-lead asynchronies is most affected by the distance between the audiovisual event and the perceiver, whereas perception of audio-lead asynchronies is primarily affected by neural processing. Furthermore, since articulatory movement precedes the speech signal (Smeele, Sittig, and van Heuven, 1994), the visual information may provide the perceiver with a predictor for the auditory signal (Grant et al., 2004). From a statistical learning perspective, this may decrease the likelihood of experiencing audio information preceding visual cues and thereby develop a narrower perceptual tolerance for misalignments in which the audio signal precedes the video, but not the reverse. If so, experience may influence the width and relative asymmetry of the window of perceived simultaneity, in particular by increasing sensitivity to an audio signal preceding a video signal.

Previous research has shown individual differences in the window of perceived simultaneity in audiovisual speech perception (e.g., Grant and Seitz, 1998) and that the width of this window can be lastingly affected by training (e.g., Powers, Hillock and Wallace, 2009). Summerfield (1992) recounts an informal observation that musicians may generally be more sensitive to audiovisual asynchrony than non-musicians, a topic which has since received some attention (e.g., Vatakis and Spence, 2006a, b). In a recent study, Lee and Noppeney (2011) compared non-musicians and amateur pianists to test perception of audiovisual simultaneity in music and speech. They found that, compared to non-musicians, those with some piano training had a narrower window of perceived synchrony for piano stimuli, although this did not significantly transfer to perceived audiovisual simultaneity in speech. This lack of transfer from piano training to speech processing (Lee and Noppeney, 2011) is somewhat surprising given other research showing extension of musical experience to speech processing (e.g., Wong et al., 2007).
A possible explanation stems from an indication by Vatakis and Spence (2006a) that sensitivity to audiovisual synchrony may differ for piano stimuli on the one hand and guitar stimuli on the other. In particular, testing non-musicians, they found that temporal order judgments for piano videos were more sensitive when the video signal preceded the audio, whereas for guitar stimuli sensitivity was greatest when the audio preceded the video signal, as has been broadly observed for audiovisual speech processing. These findings suggest that the lack of transfer in Lee and Noppeney (2011) may be related to the use of piano stimuli, and that the similarity in audiovisual asynchrony perception for guitar and speech may make guitar stimuli a more sensitive baseline for studying the transfer of musical experience to perception of audiovisual synchrony in speech. In the current study, highly skilled musicians were compared with non-musicians in an audiovisual simultaneity judgment task with audiovisual syllables (/ba/) and audiovisual music stimuli (a guitar strum). If sensitivity in the perception of audiovisual synchrony transfers from music experience to speech, musicians are expected, relative to non-musicians, to be more sensitive to audiovisual asynchrony for both guitar and speech, with a narrower window of synchrony and, in particular, an increased sensitivity to an audio signal preceding the video.

METHOD

Participants

Participants included 19 young adults (20-30 years), of which nine were musicians (4 male, 5 female; M=23 years, SD=1.4 years) and ten were non-musicians (7 male, 3 female; M=24, SD=3.3). All participants had Norwegian as their native language. Participants completed a questionnaire and were pretested to ensure right-handedness (variant of the Edinburgh Handedness Inventory, Oldfield, 1971), normal or corrected-to-normal visual acuity (Snellen test) and color vision (Ishihara, 1974), as well as normal hearing (pure-tone thresholds of 15 dB HL from 250 to 4000 Hz, British Society of Audiology, 2004). All participants reported less than one year of weekly dance training. The musicians were registered students in Music Performance Studies or Musicology at NTNU, where admission requires meeting strict criteria on theoretical and practical musical evaluations in addition to advanced musical skills on a primary and secondary instrument. None of the musicians had voice as their main instrument. Non-musicians had no more than the one year of weekly musical experience common in elementary school. On a ten-point scale, musicians self-reported a very strong interest in music (M=9.9) whereas non-musicians reported a neutral interest (M=5.9).

Stimuli

Stimuli included audiovisual speech materials and audiovisual music materials. All were recorded in a sound-insulated studio in the Speech Lab at the Department of Psychology, NTNU. The stimuli are illustrated in Figure 1. The audiovisual speech materials were based on recordings of the syllables /ba/ and /ga/ produced by a young adult female native speaker of Norwegian. The audiovisual music materials were based on recordings of a strum of the top string of a guitar and a xylophone stroke. /ga/ and the xylophone stroke are not addressed in the current paper. Both sets of recordings were made using a Sony PDW-F800 XDCAM HD422 camcorder and an external Røde NT1-A microphone. The visual dimensions and information ratio of the speech and music materials were comparable. The videos had a resolution of 1133x850 pixels. Audio was adjusted to 68 dB and edited in Praat 5.1, then aligned with the videos using Avid Media Composer (v. 6.0). All stimuli were cut to 35 frames of 40 ms each (1400 ms total), with the burst (from speech or strum) in the 13th frame. Both for the speech and music materials, 23 levels of audiovisual alignment were implemented in 40 ms (1-frame) steps, from 440 ms audio-lead to 440 ms video-lead.

FIGURE 1. The starting frame from the (a) /ba/ videos and (b) guitar strum videos.

Procedure

The pretests and the main experiment were carried out in the Speech Lab at the Department of Psychology, NTNU. In the main experiment, up to five participants were tested at a time in a partially sound-insulated room, where each participant was seated approximately 50 cm in front of a 24-inch iMac in a four-legged chair to reduce movement. The five computers were set to the same brightness and sound level (approximately 68 dB). A simultaneity judgment task was set up and data collection was carried out using SuperLab (v. 4.5). In a total of 736 trials, eight randomized repetitions of the 92 stimuli (23 audiovisual alignments x 4 stimulus types) were blocked by stimulus type. Speech and music blocks were alternated, but otherwise blocks were presented in random order. In each trial, a blank screen was presented for 1 s, followed by an audiovisual stimulus.
The video was 1133x850 pixels, presented in the center of the monitor. The audio was presented binaurally over AKG K273 studio headphones at 68 ± 1 dBA. The participants' task was to keep focused on the center of the monitor and indicate as quickly as possible, using a Cedrus RB-540 response box, whether the audio and visual components of the stimulus were synchronized (sync, async). Participants had up to 2 s from the start of the stimulus to respond, and their simultaneity judgments were recorded. Between blocks participants had 30-second breaks. The pretests and the simultaneity judgment task each took approximately 30 minutes, for a total experiment time of approximately 1 hour. Although two speech stimuli (/ba/ and /ga/) and two music stimuli (guitar strum and xylophone stroke) were included in the experiment, only /ba/ and the guitar strum are addressed here.
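To make the trial structure concrete, the following is a minimal sketch of how a blocked, randomized trial list matching the description above could be generated. The experiment itself was run in SuperLab; the names, the one-block-per-stimulus-token layout, and the fixed seed are illustrative assumptions rather than the authors' script.

```python
import random

# Hypothetical reconstruction of the trial structure described above. The
# experiment was run in SuperLab; every name, the one-block-per-stimulus-token
# assumption, and the fixed seed are illustrative, not the authors' script.

FRAME_MS = 40
ALIGNMENTS_MS = [FRAME_MS * step for step in range(-11, 12)]  # -440 ... +440 ms, 23 levels
REPETITIONS = 8
SPEECH_TOKENS = ["ba", "ga"]             # only /ba/ is analysed in this paper
MUSIC_TOKENS = ["guitar", "xylophone"]   # only the guitar strum is analysed here


def build_blocks(seed=0):
    """Build four blocks (one per stimulus token), alternating speech and music,
    each containing 8 randomized repetitions of the 23 alignment levels."""
    rng = random.Random(seed)
    speech_order = rng.sample(SPEECH_TOKENS, len(SPEECH_TOKENS))
    music_order = rng.sample(MUSIC_TOKENS, len(MUSIC_TOKENS))
    blocks = []
    for speech_tok, music_tok in zip(speech_order, music_order):
        for category, token in (("speech", speech_tok), ("music", music_tok)):
            trials = [(token, offset) for offset in ALIGNMENTS_MS] * REPETITIONS
            rng.shuffle(trials)
            blocks.append({"category": category, "token": token, "trials": trials})
    return blocks


if __name__ == "__main__":
    blocks = build_blocks()
    assert sum(len(b["trials"]) for b in blocks) == 736  # 4 x 8 x 23 trials in total
    print([b["token"] for b in blocks])
```

The assertion simply checks that the sketch reproduces the reported total of 736 trials (23 alignments x 4 tokens x 8 repetitions).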

RESULTS

Data from each participant for /ba/ and the guitar strum were plotted as the percentage of synchronous responses as a function of audiovisual alignment using SigmaPlot (v. 12). As illustrated in Figure 2, a Gaussian curve was fit to each data set, from which the point of subjective synchrony (PSS), the audio-lead threshold (ALT), the video-lead threshold (VLT), and the full width at half maximum (FWHM) were identified (see Conrey and Pisoni, 2006; Hay-McCutcheon et al., 2009). The point of subjective synchrony (PSS) is defined as the x-value at the peak of the Gaussian curve. The audio-lead threshold (ALT) is the x-value below the PSS (i.e., on the left side of the curve) where the y-value is 50%. The video-lead threshold (VLT) is the x-value above the PSS (i.e., on the right side of the curve) where the y-value is 50%. The full width at half maximum (FWHM) corresponds to the synchrony window and is the distance between the ALT and the VLT.

FIGURE 2. Modal Gaussian-fitted curve for responses, where the percentage of synchronous responses is plotted as a function of the level of audiovisual alignment; negative values indicate audio-lead and positive values indicate video-lead. The point of subjective synchrony (PSS), the audio-lead threshold (ALT), the video-lead threshold (VLT), and the full width at half maximum (FWHM), which corresponds to the synchrony window, are indicated in the figure.
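As a concrete illustration of the measures defined above, the sketch below fits a Gaussian to percent-synchronous responses and derives PSS, ALT, VLT and FWHM. The original analysis was done in SigmaPlot; this SciPy-based version, its function names, and the synthetic example data are assumptions. With responses normalized so that the fitted peak is 100%, the 50% points coincide with the half-maximum of the curve.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative sketch of the curve-fitting analysis described above; the paper
# used SigmaPlot, so this code and its example data are assumptions.

def gaussian(x, a, mu, sigma):
    """Gaussian with peak height a, centre mu (ms) and width sigma (ms)."""
    return a * np.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))


def synchrony_measures(alignment_ms, percent_sync):
    """Fit a Gaussian to percent-synchronous responses and return
    (PSS, ALT, VLT, FWHM) in ms, following the definitions in the text."""
    p0 = [max(percent_sync), 0.0, 150.0]                   # rough starting values
    (a, mu, sigma), _ = curve_fit(gaussian, alignment_ms, percent_sync, p0=p0)
    half_width = abs(sigma) * np.sqrt(2.0 * np.log(2.0))   # peak-to-half-maximum distance
    pss = mu                 # point of subjective synchrony (peak of the curve)
    alt = mu - half_width    # audio-lead threshold (left 50% point of a normalized curve)
    vlt = mu + half_width    # video-lead threshold (right 50% point)
    fwhm = vlt - alt         # synchrony window
    return pss, alt, vlt, fwhm


# Example with synthetic data over the 23 alignment levels (-440 ... +440 ms).
alignments = np.arange(-440.0, 441.0, 40.0)
fake_responses = gaussian(alignments, 100.0, 40.0, 160.0)  # peak normalized to 100%
print(synchrony_measures(alignments, fake_responses))
```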
A mixed analysis of variance was carried out with music experience (musicians, non-musicians) as a between-subjects variable, stimulus type (/ba/, guitar strum) as a repeated measure, and PSS, ALT, VLT and FWHM as dependent variables. Results are presented in Figures 3 and 4. As illustrated in Figure 3, simultaneity responses for the speech and music stimuli differed significantly in ALT [F(1,17)=104.66, p=.001], VLT [F(1,17)=151.01, p=.001], and PSS [F(1,17)=126.10, p=.001]. Responses for /ba/ were more centered around the point of physical synchrony than responses for the guitar strum, which, consistent with Vatakis and Spence (2006b), were shifted in favor of video-lead responses. Notably, no interactions between stimulus type and group were observed, demonstrating that the musicians and non-musicians did not differ in their responses to the two types of stimuli. A comparison of simultaneity judgments from the musicians and non-musicians, illustrated in Figure 4, shows that responses from both groups had less temporal sensitivity to stimuli with video-lead than audio-lead, which is broadly consistent with previous research (e.g., Conrey and Pisoni, 2006; Hay-McCutcheon et al., 2009). No significant between-group difference was found for VLT or FWHM. However, both for the speech and music stimuli, musicians, compared to non-musicians, had an ALT [F(1,17)=11.84, p=.003] significantly closer to the point of physical synchrony, as well as a significantly later PSS [F(1,17)=7.21, p=.016], likely related to the shifted ALT.
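For readers who want to reproduce this kind of analysis, the following is a minimal sketch of a 2 (group) x 2 (stimulus type) mixed ANOVA run separately for each measure. The column names and the use of the pingouin package are assumptions; the paper does not state which statistical software was used.

```python
import pandas as pd
import pingouin as pg

# Illustrative sketch of the mixed ANOVA described above, run per measure.
# Column names and the choice of pingouin are assumptions, not the authors'
# actual analysis pipeline.

MEASURES = ["PSS", "ALT", "VLT", "FWHM"]


def run_mixed_anovas(df: pd.DataFrame) -> dict:
    """df holds one row per participant x stimulus type, with columns
    'participant', 'group' (musician / non-musician), 'stimulus' (/ba/ or
    guitar), and one numeric column per measure in MEASURES."""
    tables = {}
    for measure in MEASURES:
        tables[measure] = pg.mixed_anova(
            data=df,
            dv=measure,
            between="group",     # musicians vs. non-musicians (between subjects)
            within="stimulus",   # /ba/ vs. guitar strum (repeated measure)
            subject="participant",
        )
    return tables
```

Each returned table reports the group main effect, the stimulus-type main effect, and their interaction, corresponding to the F(1,17) contrasts reported above.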

FIGURE 3. Normalized percent simultaneity responses for /ba/ (solid line) and the guitar strum (broken line), averaged across participants and plotted as a function of audiovisual alignment.

FIGURE 4. Normalized percent simultaneity responses averaged across non-musicians (solid line) and musicians (broken line), plotted as a function of audiovisual alignment for (a) the guitar strum and (b) /ba/.

DISCUSSION AND CONCLUSIONS

Overall, the results show a non-significant tendency toward a narrower window of synchrony for musicians than non-musicians. Although no reliable difference was observed for FWHM (i.e., the width of the synchrony window), results for the audio-lead threshold reflect, as predicted, a significantly different perception of audiovisual simultaneity between skilled musicians and non-musicians, with musicians showing greater sensitivity to audio-lead than non-musicians. Furthermore, contrary to Lee and Noppeney (2011), the musicians' greater sensitivity to audiovisual synchrony spans both guitar and speech stimuli, indicating a transfer from musical experience to speech processing. Findings are consistent with predictions that cross-modal temporal experience increases threshold acuity for audio-lead, but not for video-lead, and also support theories suggesting greater efficiency with relevant experience.

ACKNOWLEDGMENTS

The authors thank the Music Department, NTNU, for collaboration in contacting relevant music students for the study, and all the participants for their willingness to contribute their time and simultaneity responses to this study. This study was carried out as part of a course project in the spring semester of 2012, supervised by the first author with assistance from the second author, on whose previous research the current study is based. The names of the third through eighth authors are in alphabetical order and do not reflect their relative contributions.

REFERENCES

British Society of Audiology (2004). "Recommended procedure: Pure tone air and bone conduction threshold audiometry with and without masking and determination of uncomfortable loudness levels," last viewed June 3.
Boersma, P., and Weenink, D. (2009). "PRAAT: doing phonetics by computer (Version 5.1)" [Computer program], retrieved February 8, 2009.
Conrey, B., and Pisoni, D. B. (2006). "Auditory-visual speech perception and synchrony detection for speech and nonspeech signals," J. Acoust. Soc. Am. 119.
Grant, K., Greenberg, S., Poeppel, D., and van Wassenhove, V. (2004). "Effects of spectro-temporal asynchrony in auditory and auditory-visual speech processing," Semin. Hear. 3.
Grant, K., and Seitz, P. (1998). "Measures of auditory-visual integration in nonsense syllables and sentences," J. Acoust. Soc. Am. 104.
Hay-McCutcheon, M., Pisoni, D., and Hunt, K. (2009). "Audiovisual asynchrony detection and speech perception in hearing-impaired listeners with cochlear implants: A preliminary analysis," Int. J. Audiol. 48.
Ishihara, S. (1974). The Series of Plates Designed as a Test for Colour-Blindness (Kanehara Shuppan, Tokyo).
Keetels, M., and Vroomen, J. (2012). "Perception of synchrony between the senses," in Frontiers in the Neural Bases of Multisensory Processes, edited by M. M. Murray and M. T. Wallace (Taylor and Francis Group, London).
McGrath, M., and Summerfield, Q. (1985). "Intermodal timing relations and audio-visual speech recognition by normal-hearing adults," J. Acoust. Soc. Am. 77.
Oldfield, R. (1971). "The assessment and analysis of handedness: The Edinburgh inventory," Neuropsychologia 9.
Petrini, K., Dahl, S., Rocchesso, D., Waadeland, C. H., Avanzini, F., Puce, A., and Pollick, F. (2009). "Multisensory integration of drumming actions: musical expertise affects perceived audiovisual asynchrony," Exp. Brain Res. 198.
Powers, A., III, Hillock, A., and Wallace, M. (2009). "Perceptual training narrows the temporal window of multisensory binding," J. Neurosci. 29.
Smeele, P. M. T., Sittig, A., and van Heuven, V. (1994). "Temporal organization of bimodal speech information," in Proceedings of ICSLP 1994.
Stekelenburg, J. J., and Vroomen, J. (2007). "Neural correlates of multisensory integration of ecologically valid audiovisual events," J. Cogn. Neurosci. 19(12).
Sumby, W. H., and Pollack, I. (1954). "Visual contribution to speech intelligibility in noise," J. Acoust. Soc. Am. 26.
Summerfield, Q. (1992). "Lipreading and audio-visual speech perception," Philos. Trans. R. Soc. London, Ser. B.
van Wassenhove, V., Grant, K. W., and Poeppel, D. (2005). "Visual speech speeds up the neural processing of auditory speech," Proc. Natl. Acad. Sci. U.S.A. 102(4).
Vatakis, A., and Spence, C. (2006a). "Audiovisual synchrony perception for speech and music using a temporal order judgment task," Neurosci. Lett. 393, 40-44.
Vatakis, A., and Spence, C. (2006b). "Audiovisual synchrony perception for music, speech, and object actions," Brain Res. 1111.
Welch, R. B., and Warren, D. H. (1980). "Immediate perceptual response to intersensory discrepancy," Psychol. Bull. 88.
Wong, P. C., Skoe, E., Russo, N. M., Dees, T., and Kraus, N. (2007). "Musical experience shapes human brainstem encoding of linguistic pitch patterns," Nat. Neurosci. 10.
