Music BCI (006-2015)

Matthias Treder, Benjamin Blankertz
Technische Universität Berlin, Berlin, Germany

September 5, 2016

1 Introduction

We investigated the suitability of musical stimuli for use in a P300 paradigm. To this end, 11 subjects listened to polyphonic music clips featuring three instruments playing together. We devised a multi-streamed oddball paradigm, with each of the three instruments playing a repetitive standard musical pattern, interspersed with a randomly occurring deviant musical pattern. Subjects were cued to attend to one particular instrument and to ignore the other two. Using regularised linear discriminant analysis, we were able to differentiate between deviants in the attended and deviants in the unattended instruments. For detailed information on the experiment and the method, refer to [1]. In a further study [2], we analysed the neural representation of tone onsets in the same data using a spatio-temporal filtering approach.

2 Experimental paradigm

The experiment had four different experimental conditions featuring different types of music clips. Before each clip, one out of three possible instruments was cued. The subject had to attend to the cued instrument and mentally count the number of deviants for that instrument. After the end of the clip, the subject had to type in the count.

SynthPop: A minimalistic adaptation of "Just Can't Get Enough" by the synth-pop band Depeche Mode. A corresponding sample score is depicted in Figure 1. It features three instruments: drums consisting of kick drum, snare and hi-hat, a synthetic bass, and a keyboard equipped with a synthetic piano sound. The instruments play an adaptation of the chorus of the original song, with the keyboard featuring the main melody of the song. Deviants are defined as follows: for the drums, the kick drum on the first quarter note is replaced by eighth notes featuring snare and then kick drum; for the bass, the whole 4-tone standard sequence is transposed up by five semitones; for the keyboard, tones 4-6 of the 8-tone standard sequence are transposed. The relative loudness of the instruments has been set by one of the authors such that all instruments are roughly equally audible. Panning: none (all instruments panned to center). Beats-per-minute: 130.

Jazz: Stylistically, the Jazz clips are located half-way between a minimalistic piece by Philip Glass and a jazz trio comprising double-bass, piano and flute. Each of the three voices is generated through frequent repetition of a standard pattern composed of 3-5 tones, once in a while replaced by a deviant pattern that differs from the standard pattern in one note. One clip consists of three overlaid voices. The Jazz music clips differ from the SynthPop clips in several ways. The Jazz clips sound more natural, which is achieved by using samples of acoustic instruments. In addition, loudness and micro-timing are manually adjusted for each tone of the basic pattern in order to make the entire phrase sound more musical. Apart from timbre (double-bass, piano, flute) and pitch range (low, medium, high), another parameter is used in the Jazz clips to make the voices independent from each other: each voice consists of patterns of different length, namely 3, 4 and 5 beats per pattern. Through rhythmical interference, a polymetric rhythmic texture is generated. For better separation of the musical instruments, panning is also used to locate the instruments in different directions relative to the listener. This independence of the different voices is intended to help the user focus on one particular instrument. The relative loudness of the instruments has been set by one of the authors such that deviants in all instruments are roughly equally audible; in particular, the double-bass had to be amplified, while the flute was turned down. Panning: flute left, bass central, piano right. Beats-per-minute: 120.

SynthPop Solo: Solo versions of the SynthPop music clips with one of the instruments playing in isolation. Solo versions have been produced for all instruments.

Jazz Solo: Solo versions of the Jazz music clips with one of the instruments playing in isolation. Solo versions have been produced for all instruments.

For each music condition, ten different music clips were created with variable numbers and different positions of the deviants in each instrument. Additionally, we exported solo versions with each of the instruments playing in isolation. Sample stimuli are provided.

Figure 1: Extract of score for the SynthPop stimulus showing the three instruments. The deviant events are marked by red boxes. Event markers for deviant events designate the start of the deviant pattern.

Figure 2: Structuring of the experiment. Each experiment consists of 8 runs. Upper panel: each run features a particular experimental condition and consists of a number of music clips (indicated by the blue traces). Lower panel: each music clip is preceded by cue and fixation-cross markers. There are many event triggers for attended and unattended standard tones and deviants while the music clip is being played. After the music clip has finished, the subject types in the deviant count.

3 Data

For details on data pre-processing, refer to the papers in the reference list. The data has been pre-processed with the BBCI Matlab toolbox (https://github.com/bbci/bbci_public/). For an introduction to the toolbox, refer to https://github.com/bbci/bbci_public/blob/master/doc/index.markdown. Note that the data can easily be transformed into FieldTrip, EEGLAB, and similar data formats, although this needs to be done manually, as there are currently no automated scripts for this.

4 General description of Matlab structs

data contains the EEG data. mrk contains timing information about the events (i.e. markers/triggers). mnt contains details about the montage, useful for plotting, including channel positions in 2D and 3D.

data
  X: matrix of size [number of time points x number of channels]. Contains the continuous EEG data.
  fs: Sampling frequency in Hz.
  clab: Cell array with channel labels.
  trial: Time of events in samples relative to the start of the measurement. Can be transformed into milliseconds by data.trial/data.fs * 1000.
  classes: Cell array with labels for the different types of events.
  y: Vector of the same length as trial indicating, for each trial, which class it belongs to. For instance, if the first 3 entries of y are 1, 3, 2, then the first trial belongs to class 1, the second trial belongs to class 3, and the third trial belongs to class 2. On rare occasions an entry may be 0, which means there is no trial information for this particular event; such events should probably be discarded.

mrk
  time: Time of events in milliseconds relative to the start of the measurement.
  y: Logical matrix of size [number of classes x number of events]. It contains the same information as data.y, but in a different format. Each row of mrk.y corresponds to one experimental condition, where 1's indicate that the corresponding trial belongs to this condition. Indices correspond to the indices in the data struct, that is, for the k-th class, find(mrk.y(k,:)) and find(data.y == k) give equal results.
  classname: Same as data.classes.
  event: Contains additional information for each of the events (see Section 6).
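
As a quick orientation, the following minimal sketch (an illustration only, assuming one subject's data has already been loaded into the variables data, mrk and mnt described above) prints the basic dimensions and converts the event onsets from samples to milliseconds:

  % Basic inspection of the structs (field names as documented above)
  fprintf('EEG matrix: %d time points x %d channels\n', size(data.X, 1), size(data.X, 2));
  fprintf('Sampling rate: %g Hz, %d channel labels\n', data.fs, numel(data.clab));
  fprintf('%d events belonging to %d classes\n', numel(data.trial), numel(data.classes));

  % Convert event onsets from samples to milliseconds
  onsets_ms = data.trial / data.fs * 1000;

  % Consistency check between data.y and mrk.y (should hold for every class k)
  k = 1;
  isequal(find(mrk.y(k, :)), find(data.y == k))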

The mrk structure usually contains further sub-structures which have the same structure as the mrk struct itself. These sub-structures carry additional information taken from additional event triggers that were recorded.

mnt
  x: x-position of each electrode for 2D plotting.
  y: y-position of each electrode for 2D plotting.
  pos_3d: 3D position of each electrode for 3D plotting.

5 Selecting events from data

If the data contains different classes/experimental conditions, the event indices corresponding to the first class can be selected using find(data.y == 1). Likewise, find(data.y == 2) yields the event indices for the second class, and so on. The indices can be saved in a variable and then used to obtain the onset times of the trials belonging to the selected class. These onsets can then be used for epoching the data. Example code:

idx1 = find(data.y == 1);
trialonsetsclass1 = data.trial(idx1);
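
As an illustration of the epoching step mentioned above, the following sketch cuts fixed windows around the class-1 onsets by hand. This is only one possible approach under the field layout described in Section 4; the BBCI toolbox also provides its own epoching routines, which are not shown here.

  % Cut epochs of -100..800 ms around each class-1 onset (hedged sketch)
  pre_ms  = 100;                          % baseline interval before the onset
  post_ms = 800;                          % window after the onset
  pre  = round(pre_ms  / 1000 * data.fs); % convert to samples
  post = round(post_ms / 1000 * data.fs);

  n_epochs   = numel(trialonsetsclass1);
  n_channels = size(data.X, 2);
  epochs = nan(pre + post + 1, n_channels, n_epochs);

  for i = 1:n_epochs
      onset = round(trialonsetsclass1(i));  % onset in samples (see Section 5)
      if onset - pre >= 1 && onset + post <= size(data.X, 1)
          epochs(:, :, i) = data.X(onset - pre : onset + post, :);
      end
  end

  % Average over epochs to obtain an ERP-like time course per channel
  erp = mean(epochs, 3, 'omitnan');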

6 Additional information contained in the mrk struct

mrk.event: The event field contains additional information for each of the events described in the y and time fields. In the present dataset, the following additional information is available:
  condition: Contains the numbers 0, 1, 2, 3, denoting the experimental condition (0 = SynthPop, 1 = Jazz, 2 = SynthPop Solo, 3 = Jazz Solo).
  instrument: Contains the numbers 1, 2, 3, denoting which of the three instruments the onset time corresponds to. In the SynthPop condition, the mapping is 1 = Drums, 2 = Bass, 3 = Keyboard. In the Jazz condition, the mapping is 1 = Flute, 2 = Bass, 3 = Piano.
  deviant: Denotes whether the event corresponds to a standard or a deviant tone (0 = standard stimulus, 1 = deviant stimulus).

mrk.cue: Onset of the cue before the start of each music clip. The cue designates which instrument the subject has to attend to. Refer to mrk.cue.classname for the name of the cued instrument. Note that the class names correspond to the SynthPop condition; in the Jazz condition, "drums" corresponds to the flute and "keyboard" corresponds to the piano.

mrk.clip: Onset of the music clips. For each condition, 10 different music clips were created. The classes indicate the number of the clip file.

mrk.misc: Miscellaneous events.
  run start: Onset of a run. There should be a total of 10 runs, each run featuring one experimental condition (SynthPop, Jazz, SynthPop Solo, or Jazz Solo). Runs 3, 4, 7, and 8 were designated as solo runs.
  run end: End of a run. For some subjects, runs were not completed due to technical problems; in this case, the run might have been restarted. To use only complete runs, compare the run start and run end onset times (unfinished runs should not have a corresponding run end event).
  trial start: Not used.
  trial end: Not used.
  fixation start: Onset of the fixation cross that subjects were instructed to fixate throughout the duration of the music clip.
  clip start: Onset of a music clip. These events are the same as the events in mrk.clip, but the information about the number of the music clip was dropped.
  clip end: Offset of a music clip. Sometimes clips were not completed due to technical problems. To use only complete clips, compare the clip start and clip end onset times (unfinished clips should not have a corresponding clip end event).
  condition depmod: Start of a run in the SynthPop condition.
  condition hendrik: Start of a run in the Jazz condition.
  solo condition depmod: Start of a run in the SynthPop Solo condition.
  solo condition hendrik: Start of a run in the Jazz Solo condition.

mrk.resp: Response of the subject. The count vector contains the number of deviants counted by the subject during each music clip. The length of the vector should correspond to the number of completed music clips (compare to clip end in mrk.misc).

7 Selecting events using the mrk struct

7.1 Selecting an experimental condition

The experimental condition corresponding to an event is stored in mrk.event.condition (see above for the coding). Example code for selecting only events belonging to the polyphonic SynthPop condition:

idx = find(mrk.event.condition == 0);
data.trial = data.trial(idx);
data.y = data.y(idx);
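
If further selections by instrument or deviant status are planned (Sections 7.2 and 7.3), the corresponding mrk.event fields can be subset with the same index vector so that all per-event information stays aligned. A minimal sketch, assuming these fields are plain per-event vectors as described in Section 6:

  % Keep auxiliary per-event information aligned with the selected events
  mrk.event.condition  = mrk.event.condition(idx);
  mrk.event.instrument = mrk.event.instrument(idx);
  mrk.event.deviant    = mrk.event.deviant(idx);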

7.2 Selecting an instrument

The instrument corresponding to an event is stored in mrk.event.instrument (see above for the coding). Example code for selecting only events corresponding to the keyboard:

idx_keyboard = find(mrk.event.instrument == 3);
data.trial = data.trial(idx_keyboard);
data.y = data.y(idx_keyboard);

7.3 Selecting all deviants (attended and unattended)

Indices of only deviant stimuli can be selected by idx_deviant = find(data.y == 1 | data.y == 2). In [1], we selected only deviant stimuli and then trained a classifier to classify between deviants in the attended instrument and deviants in the unattended instruments.

References

[1] Matthias Sebastian Treder, Hendrik Purwins, Daniel Miklody, Irene Sturm, and Benjamin Blankertz. Decoding auditory attention to instruments in polyphonic music using single-trial EEG classification. Journal of Neural Engineering, 11:026009, 2014.

[2] Irene Sturm, Matthias Sebastian Treder, Daniel Miklody, Hendrik Purwins, Sven Dähne, Benjamin Blankertz, and Gabriel Curio. Extracting the neural representation of tone onsets for separate voices of ensemble music using multivariate EEG analysis. Psychomusicology: Music, Mind, and Brain, 25:366-379, 2015.