SIEMPRE D3.3: SIEMPRE and SIEMPRE-INCO extension. Final version of techniques for data acquisition and multimodal analysis of emap signals


D3.3 SIEMPRE AND SIEMPRE-INCO EXTENSION
FINAL VERSION OF TECHNIQUES FOR DATA ACQUISITION AND MULTIMODAL ANALYSIS OF EMAP SIGNALS
DISSEMINATION LEVEL: PUBLIC

Social Interaction and Entrainment using Music PeRformancE (SIEMPRE)

Version | Edited by | Changes
1 | UPF |
2 | VT |
3 | UNIGE |
4 | IIT |
5 | QUB |
6 | UNIGE-CH |

Table of Contents

1. Introduction
2. Contribution by UNIGE
   2.1 Pickup Audio
   2.2 Binaural Audio
   2.3 Wireless motion capture
   2.4 Video
3. Contribution by Universitat Pompeu Fabra
   3.1 Pickup Audio
   3.2 Cardioid Audio
   3.3 Binaural Audio
   3.4 Wired motion capture
   3.5 Wireless motion capture
   3.6 Video
4. Contribution by Italian Institute of Technology
   4.1 Electromyography
   4.2 Thermography (also involving work by UNIGE-CH)
5. Contribution by Queen's University Belfast
   5.1 Quality of Experience questionnaire
   5.2 Continuous self-report measure
   5.3 Physiological measures
   5.4 Motion Capture
   5.5 Audio and Video
6. Contribution by Virginia Tech (also involving work by Stanford Univ.)
   6.1 Mobile sensor system
7. References

1. INTRODUCTION

This deliverable (D3.3) describes the techniques for data acquisition and multimodal analysis of emap signals exploited and developed through the SIEMPRE and SIEMPRE-INCO Extension projects. A major line of work in SIEMPRE was achieving synchronized recording and playback of multimodal signals. The techniques are described separately for each SIEMPRE partner, following this scheme:

- SIGNAL OR TECHNIQUE: the corresponding partner provides the name or identifier of the emap signal or group of signals, or of the analysis technique.
- DESCRIPTION: a description of the signal or technique, including relevant references to previous documents or publications.
- SCENARIO(S): which of the SIEMPRE scenarios involve this signal or technique.
- METHODS: a description of the methods (a) used for the acquisition/processing of the emap signal, or (b) involved in the development of the technique.
- DEVICES USED: which devices are used.
- SYNCHRONIZATION: if any specific synchronization technique is required in order to combine the described signals with other emap signals, partners provide the relevant information here.
- EXAMPLE(S): if there are examples of usage, or data in the SIEMPRE repository, partners state this here.

2. CONTRIBUTION BY UNIGE

2.1 Pickup Audio

- SIGNAL OR TECHNIQUE: Audio (pickup)
- DESCRIPTION: Monophonic audio signal acquired individually for each musician via a pickup microphone.
- METHODS: Piezoelectric transducer fitted on the bridge of the musical instrument.
- DEVICES USED: Shadow SH SV1, Fishman C-100, Fishman V-100, Fishman V-200.
- SYNCHRONIZATION: Synchronized during acquisition by locking the recording audio card to the SMPTE signal.
- EXAMPLE(S): In SIEMPRE repository.
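Several of the streams described in this deliverable are aligned by stamping them with the same SMPTE linear timecode. Purely as an illustration of how such a stamp maps onto an audio sample clock, the sketch below converts an hh:mm:ss:ff timecode to seconds and to a sample index; the 25 fps frame rate and 48 kHz sample rate are assumptions for the example, not values taken from the SIEMPRE recordings.

```python
# Convert an SMPTE timecode stamp (hh:mm:ss:ff) to seconds and to an
# audio sample index so that streams carrying the same timecode can be
# aligned. Frame rate (25 fps) and sample rate (48 kHz) are illustrative
# assumptions, not the actual acquisition settings.

def smpte_to_seconds(timecode: str, fps: int = 25) -> float:
    """Parse 'hh:mm:ss:ff' and return elapsed seconds."""
    hh, mm, ss, ff = (int(part) for part in timecode.split(":"))
    return hh * 3600 + mm * 60 + ss + ff / fps

def smpte_to_sample(timecode: str, fps: int = 25,
                    sample_rate: int = 48000) -> int:
    """Map a timecode stamp to the corresponding audio sample index."""
    return round(smpte_to_seconds(timecode, fps) * sample_rate)

# Offset (in samples) between two stamps of the same recording session:
offset = smpte_to_sample("00:01:30:12") - smpte_to_sample("00:01:30:00")
```

The difference between two such sample indices gives the offset needed to line up streams that share the same timecode reference.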

2.2 Binaural Audio

- SIGNAL OR TECHNIQUE: Audio (binaural)
- DESCRIPTION: Stereophonic audio signal acquired for the whole quartet.
- METHODS: Spaced pair placed in front of the string quartet.
- DEVICES USED: Neumann KM 184 condenser cardioid microphones.
- SYNCHRONIZATION: Synchronized during acquisition by locking the recording audio card to the SMPTE signal.
- EXAMPLE(S): In SIEMPRE repository.

2.3 Wireless motion capture

- SIGNAL OR TECHNIQUE: Wireless MoCap
- DESCRIPTION: Wireless 3DOF markers placed on the bodies of the musicians as well as on the musical instruments.
- METHODS: IR-reflective markers attached to the musicians' bodies; IR-reflective markers attached to the instrument body and bow.
- DEVICES USED: Qualisys camera series.
- SYNCHRONIZATION: Synchronized with the Qualisys MoCap system using a word clock generator as well as SMPTE linear timecode.
- EXAMPLE(S): In SIEMPRE repository.

2.4 Video

- SIGNAL OR TECHNIQUE: Video
- DESCRIPTION: Digital video signal of the quartet during performance.
- METHODS: Two digital video cameras, one placed in front of the string quartet and one oriented towards the first violinist of the quartet.
- DEVICES USED: High-definition JVC GY-HD251 camcorders.
- SYNCHRONIZATION: Synchronized by a genlock blackburst signal coming from the word clock generator, and time-stamped with the SMPTE signal.

- EXAMPLE(S): In SIEMPRE repository.

3. CONTRIBUTION BY UNIVERSITAT POMPEU FABRA

3.1 Pickup Audio

- SIGNAL OR TECHNIQUE: Audio (pickup)

- DESCRIPTION: Monophonic audio signal acquired individually for each musician via a pickup microphone.
- METHODS: Piezoelectric transducer fitted on the bridge of the musical instrument.
- DEVICES USED: Fishman V-100 (violin, viola) and Fishman C-100 (cello).
- SYNCHRONIZATION: Synchronized during acquisition with the Polhemus motion capture data using an analog audio pulse signal, and in post-processing with SMPTE linear timecode and a word-clock-generated signal.
- EXAMPLE(S): In SIEMPRE repository.

3.2 Cardioid Audio

- SIGNAL OR TECHNIQUE: Audio (cardioid)
- DESCRIPTION: Monophonic audio signal acquired for the whole quartet via a large-diaphragm cardioid-pattern condenser microphone.
- METHODS: Large-diaphragm condenser microphone placed in front of the string quartet.
- DEVICES USED: AKG Perception 120.
- SYNCHRONIZATION: Synchronized during acquisition with the Polhemus motion capture data using an analog audio pulse signal, and in post-processing with SMPTE linear timecode and a word-clock-generated signal.
- EXAMPLE(S): In SIEMPRE repository.

3.3 Binaural Audio

- SIGNAL OR TECHNIQUE: Audio (binaural)
- DESCRIPTION: Stereophonic audio signal acquired for the whole quartet via a binaural dummy head.
- METHODS: Binaural dummy head placed in front of the string quartet.
- DEVICES USED: Neumann KU-100.
- SYNCHRONIZATION: Synchronized during acquisition with the Polhemus motion capture data using an analog audio pulse signal, and in post-processing with SMPTE linear timecode and a word-clock-generated signal.
- EXAMPLE(S): In SIEMPRE repository.
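The analog audio pulse used above to tie the audio streams to the Polhemus data can, in post-processing, be located by simple thresholding of each recording; the difference between the detected onsets gives the alignment offset. The sketch below is illustrative only: the synthetic signals, sample rate and threshold are invented for the example.

```python
import numpy as np

# Locate the analog sync pulse in two recordings by thresholding the
# rectified signal; the difference between the two onset indices is the
# offset needed to align the streams. Threshold and signals are assumed.

def pulse_onset(signal: np.ndarray, threshold: float = 0.5) -> int:
    """Return the index of the first sample exceeding the threshold."""
    above = np.flatnonzero(np.abs(signal) > threshold)
    if above.size == 0:
        raise ValueError("no sync pulse found above threshold")
    return int(above[0])

# Synthetic example: the same pulse appears at different positions in
# two streams whose recordings started at different times.
fs = 1000
a = np.zeros(fs); a[200:210] = 1.0
b = np.zeros(fs); b[350:360] = 1.0
offset_samples = pulse_onset(b) - pulse_onset(a)   # 150
```

In practice the pulse would be detected in each modality's recording of the same physical event, and one stream shifted by the resulting offset.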

3.4 Wired motion capture

- SIGNAL OR TECHNIQUE: Wired MoCap
- DESCRIPTION: Wired 6DOF (degrees-of-freedom) sensor data: position and orientation.
- METHODS: Wired 6DOF sensors attached to the instrument bow and instrument body, as detailed in [1].
- DEVICES USED: Polhemus Liberty.
- SYNCHRONIZATION: Synchronized with a second motion capture system via a word clock generator as well as SMPTE linear timecode; synchronized with the audio signal using an analog audio pulse signal.
- EXAMPLE(S): In SIEMPRE repository.

3.5 Wireless motion capture

- SIGNAL OR TECHNIQUE: Wireless MoCap
- DESCRIPTION: Wireless 3DOF markers placed on the bodies of the musicians as well as on the musical instruments.
- METHODS: IR-reflective markers attached to the musicians' bodies, using the specification in [2]; IR-reflective markers attached to the instrument body and bow, using a specification derived from [3].
- DEVICES USED: Qualisys Oqus camera series.
- SYNCHRONIZATION: Synchronized with the Polhemus MoCap system using a word clock generator as well as SMPTE linear timecode.
- EXAMPLE(S): In SIEMPRE repository.

3.6 Video

- SIGNAL OR TECHNIQUE: Video
- DESCRIPTION: Digital video signal of the quartet during performance.
- METHODS: Digital video camera placed in front of the string quartet.
- DEVICES USED: Canon VIXIA HF R200.

- SYNCHRONIZATION: Synchronized with the audio signals using SMPTE linear timecode, and with the motion capture systems via SMPTE-word clock correspondence.
- EXAMPLE(S): In SIEMPRE repository.

4. CONTRIBUTION BY ITALIAN INSTITUTE OF TECHNOLOGY

4.1 Electromyography

- SIGNAL OR TECHNIQUE: EMG
- DESCRIPTION: Bipolar electromyography activity. The signal consists of analog time series representing muscle contraction over time.
- METHODS: Wireless sensors attached to the musicians' biceps and triceps.
- DEVICES USED: ZeroWire wireless EMG system.
- SYNCHRONIZATION: Analog signal fed to the A/D acquisition board controlled by the Qualisys system; the data are acquired already synchronized with the MoCap data.
- EXAMPLE(S): Not yet in the repository.

4.2 Thermography (also involving work by UNIGE-CH)

- SIGNAL OR TECHNIQUE: Thermography
- DESCRIPTION: Thermographic videos of an audience while attending a live concert/opera or listening to/watching music excerpts/video clips.
- SCENARIO(S): Audience.
- METHODS: Thermographic camera placed in front of the audience.
- DEVICES USED: Cedip Titanium HD 560M (Pelican) camera.
- SYNCHRONIZATION: Thermographic video synchronized with the audio and/or video signal through a TTL pulse train.
- EXAMPLE(S): Not yet in the repository.
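A TTL pulse train like the one used to synchronize the thermographic video can be decoded offline by detecting its rising edges on the shared acquisition timeline, giving one timestamp per pulse. The sketch below is a generic illustration: the sample rate, pulse rate and logic levels are assumptions, not the actual acquisition settings.

```python
import numpy as np

# Detect rising edges in a recorded TTL pulse train; each edge marks one
# synchronization event on the shared A/D timeline. Sample rate (10 kHz),
# pulse rate (25 Hz) and 5 V logic are illustrative assumptions.

def ttl_rising_edges(ttl: np.ndarray, high: float = 2.5) -> np.ndarray:
    """Return sample indices where the line crosses from low to high."""
    logic = ttl > high
    return np.flatnonzero(logic[1:] & ~logic[:-1]) + 1

fs = 10000                                 # A/D sample rate (assumed)
n = np.arange(fs)                          # one second of samples
ttl = 5.0 * ((n % 400) < 200)              # 25 Hz square wave, 5 V logic
frame_times = ttl_rising_edges(ttl) / fs   # one timestamp per pulse edge
```

The resulting timestamps can then be matched against the pulse indices embedded in the thermographic stream to place each frame on the common clock.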

5. CONTRIBUTION BY QUEEN'S UNIVERSITY BELFAST

5.1 Quality of Experience questionnaire

- SIGNAL OR TECHNIQUE: QoE Questionnaire
- DESCRIPTION: The Quality of Experience questionnaire was developed to assess the unique experience of a live musical performance for audience members. In addition to the common emotion dimensions, the questionnaire also includes items on engagement, communication with the performer, aesthetics, and variables from the social and presence literature. Both a long-form (60 items) and a short-form (12 items) version were tested during experimentation.
- SCENARIO(S): Audience. Short-form: QUB December 2011, QUB March 2012, QUB November 2012. Long-form: QUB May 2011, QUB February 2012.
- METHODS: In all scenarios the questionnaire was administered to the audience at the start of the experiment. A short background section was completed before any of the performances. After each performance, audience members were asked to rate their experience during that performance using the questionnaire; this was done for each performance in all scenarios. The number of participants who filled out the questionnaires varied depending on the scenario, though all audience members completed it in every scenario. For the analysis of the long-form questionnaire, sub-factors were selected a priori, based on the different fields of the literature they were taken from; further analysis followed standard techniques. The questionnaire was completed with pen and paper and the data were manually coded into SPSS, where they were analysed.
- SYNCHRONIZATION: Although synchronization per se was not required for this measure, its compatibility with the other signals obtained was maximized in a number of ways. The seat number of each audience member was noted on the questionnaire, as was the experimental group the participant belonged to in scenarios where that was relevant. In this way, individual retrospective responses of self-reported engagement or physiological arousal could be compared to the continuous measures obtained during the performance.
- EXAMPLE(S): A copy of the data from the relevant questionnaire is included in the datapack for each QUB experiment in the repository.

5.2 Continuous self-report measure

- SIGNAL OR TECHNIQUE: Continuous SR
- DESCRIPTION: A single-variable continuous self-report measure was taken throughout most scenarios, using a device designed for that purpose. The variable chosen for continuous measurement was engagement, both because it has performed well in previous studies and because it complements the QoE questionnaire and the physiological measures also being tested.
- SCENARIO(S): Audience. Without obscuring box: QUB May 2011. With obscuring boxes: QUB December 2011, QUB November 2012, QUB February 2012.
- METHODS: For the initial pilot, a group of participants were given basic instructions on the operation of the faders and asked to manipulate them for the duration of each performance. Subsequently, however, participants were given detailed instructions and a test period was introduced in which they manipulated the fader in a number of ways as directed by the experimenter. Only fourteen fader boxes were constructed for the experiments, and therefore fourteen participants used them in each scenario where continuous self-report was taken. After the experiments, the synchronized data were extracted to MATLAB and analysed with typical continuous-data techniques.
- DEVICES USED: Physically the device looks as shown below, with a fader that could be moved up or down according to the participant's current level of engagement with the performance. For these experiments the fader was inversely rated, with a spring countermeasure. This design reduced the demand characteristic on the participants during the performance, as force was only required to register disengagement, and the rating could be judged without having to look at the fader.
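The fader analysis itself was carried out in MATLAB. Purely as an illustration of the kind of preprocessing involved, the sketch below (in Python, with invented timestamps and rates) linearly resamples two fader traces onto one uniform grid on the shared timeline so they can be compared sample-by-sample.

```python
import numpy as np

# Resample several continuous self-report (fader) traces, each logged at
# its own irregular instants on the shared SMPTE timeline, onto a common
# uniform 10 Hz grid. The 60 s duration, 10 Hz rate and trace values are
# illustrative assumptions.

def resample_fader(times: np.ndarray, values: np.ndarray,
                   grid: np.ndarray) -> np.ndarray:
    """Linearly interpolate one fader trace onto a common time grid."""
    return np.interp(grid, times, values)

grid = np.arange(0.0, 60.0, 0.1)           # 60 s performance at 10 Hz
# Two invented traces logged at irregular instants:
t1, v1 = np.array([0.0, 30.0, 60.0]), np.array([0.0, 1.0, 0.5])
t2, v2 = np.array([0.0, 20.0, 60.0]), np.array([1.0, 0.2, 0.2])
aligned = np.vstack([resample_fader(t1, v1, grid),
                     resample_fader(t2, v2, grid)])
```

Once on a common grid, standard continuous-data techniques (correlation, averaging across participants, and so on) apply directly.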

It is important to note that, based on the results obtained and participant feedback after the initial pilot, a decision was taken to obscure the fader from other members of the audience and from the performers. This was done by means of a simple box which hid the participant's hand, and it resulted in more diverse ratings (see below). SYNCHRONIZATION: Synchronization of the faders with the other continuous signals, such as physiology, motion capture and video, was required depending on the

scenario. This was achieved by recording all signal modes with a shared timecode (i.e. SMPTE) and using the synchronization tools developed for this purpose.
- EXAMPLE(S): All continuous self-report results are available individually on the repository. Below is a repovizz example of four different but synchronized faders, all rating a performance as increasingly less engaging as it progressed.

5.3 Physiological measures

- SIGNAL OR TECHNIQUE: Physiological measures
- DESCRIPTION: Measuring key physiological signals of audience members during a live performance was one of the main goals of the QUB experiments. In all of the scenarios where physiological measures were tested, the signals taken were electrodermal activity (EDA) and pulse rate, acquired primarily through three sensors placed on three fingertips of the hand. These measures were chosen as they are generally recognized to be the most effective non-intrusive physiological signals for detecting emotional states.
- SCENARIO(S): Audience. QUB December 2011, QUB February 2012, QUB March 2012, QUB November 2012.
- METHODS: In all but one scenario, a sensor each was placed on the index, middle and fourth finger of the left hand; for the QUB November 2012 scenario a sensor was placed on the upper arm. The number of participants using the sensors was fourteen in all scenarios except QUB December 2011, where only two participants were tested as a technical assessment for future iterations of the experiment. Participants wearing the sensors had their function explained and were asked not to move their hand excessively if possible. The sensors remained on until the end of the experiment.

After the experiments, the synchronized data were extracted to MATLAB, where a number of techniques were applied to obtain the most meaningful features from the raw data. Feature extraction was accomplished using an algorithm devised by QUB (available on the repository or at [4]). This splits EDA into tonic and phasic components and extracts heart rate from the ECG or POX signals, allowing for more nuanced analysis of the results; after this, standard physiological analysis techniques could be applied.
- DEVICES USED: Through previous work in the area, QUB have developed a non-intrusive physiological sensor that is placed on three fingertips of the hand and measures EDA and pulse (shown below).
- SYNCHRONIZATION: Synchronization of the physiological signals with the other continuous signals, such as continuous self-report, motion capture and video, was required depending on the scenario. This was achieved by recording all signal modes with a shared timecode (i.e. SMPTE) and using the synchronization tools developed for this purpose.
- EXAMPLE(S): All physiology results are available individually on the repository for each performance. Shown below is the physiology of two participants with (in descending order) phasic EDA, tonic EDA, heart rate and raw POX.
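The actual feature extraction used the QUB algorithm referenced at [4]. Purely as a generic illustration of the tonic/phasic idea, and not of that algorithm, the sketch below estimates the slow tonic level with a low-pass filter and treats the residual as the phasic component; the cut-off frequency, sample rate and synthetic trace are all assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# A common stand-in for tonic/phasic EDA decomposition (NOT the QUB
# algorithm cited in the text): low-pass filter the raw EDA to estimate
# the slow tonic level; the residual is treated as phasic activity.
# Cut-off (0.05 Hz) and sample rate (32 Hz) are illustrative assumptions.

def split_eda(eda: np.ndarray, fs: float, cutoff: float = 0.05):
    """Return (tonic, phasic) estimates from a raw EDA trace."""
    b, a = butter(2, cutoff / (fs / 2), btype="low")
    tonic = filtfilt(b, a, eda)          # zero-phase low-pass
    phasic = eda - tonic                 # fast residual
    return tonic, phasic

fs = 32.0                                # sample rate (assumed)
t = np.arange(0, 120, 1 / fs)
# Synthetic trace: baseline + slow drift + one skin-conductance response:
eda = 5.0 + 0.01 * t + 0.3 * np.exp(-((t - 60) ** 2) / 4)
tonic, phasic = split_eda(eda, fs)
```

In the sketch the simulated skin-conductance response around t = 60 s survives in the phasic residual while the baseline and drift are absorbed into the tonic estimate.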

5.4 Motion Capture

- SIGNAL OR TECHNIQUE: MoCap
- DESCRIPTION: Expressive movement is one of the key measures detailed in the overall SIEMPRE project, and was thus part of some of the experiments at QUB. Instead of focusing on one individual, however, this set of experiments focused on the audience as a whole, with only one marker per participant. The rationale was to investigate the synchronization of movement within the audience.
- SCENARIO(S): Audience. QUB March 2012, QUB November 2012.
- METHODS: Motion-capture analysis was implemented in two of the experiments undertaken, both of which employed similar methods and a similarly sized audience. Before the experiment the area for the audience was calibrated; when they arrived, thirty participants had a single motion-capture marker placed on the top of their head, all seated beside each other in the front-middle of the audience. The markers remained on the participants for the duration of the performance. The Qualisys software system recorded the data from the markers, from where they can be analysed in MATLAB or similar programs. Analysis techniques focus on detecting synchronous motion across any axis between participants, which would indicate a nodding or swaying motion.
- DEVICES USED: The Qualisys motion capture system was used for capturing the data, with six cameras placed in high positions around the audience so as to give an overview of all of the participants with markers. The markers themselves were affixed to the participants' heads via a hairclip.
- SYNCHRONIZATION: Synchronization of the motion capture with the other continuous signals, such as physiology, continuous self-report and video, was required depending on the scenario. This was achieved by recording all signal modes with a shared

timecode (i.e. SMPTE) and using the synchronization tools developed for this purpose.
- EXAMPLE(S): The X, Y and Z co-ordinates for all markers are in the repository; shown below is a still from the motion capture of the audience during one of the live performances.

5.5 Audio and Video

- SIGNAL OR TECHNIQUE: Audio/Video
- DESCRIPTION: Audio and video of all performances in all scenarios were recorded, and in many of the scenarios video of the audience as well as of the performer was taken. Overall, the synchronized audiovisual signals can be used as a reference point for interesting patterns in the continuous data, as well as for highlighting emotional or behavioural patterns in the audience.
- SCENARIO(S): Audience. QUB May 2011, QUB December 2011, QUB February 2012, QUB March 2012, QUB November 2012.
- METHODS: For all of the experiments, audio was recorded via a microphone in the concert area. For the November 2012 experiment, a binaural microphone was placed in the middle of the audience to better capture what the audience were hearing as well as any noise they might be making. For each scenario a camera was set up facing the stage to capture the performances, and a separate camera near the stage faced the audience to capture their reaction to the performance. Video capture of the audience was successful in all experiments; however, only the December 2011 and November 2012 experiments would be suitable for follow-up emotion analysis of the audience. This is due to the need for clear visibility of the facial expressions of roughly 20 audience members

for this type of analysis; the problem is that this requires lighting conditions which are not the most ecologically appropriate for a concert. Analysis of the video and audio data was very limited, being used primarily as a reference to explain patterns in the continuous data (e.g. EDA increased when the music became noticeably louder). At some point it is hoped to obtain judged ratings of the audience based on trace techniques.
- DEVICES USED: Rhodes stereo microphone, Neumann binaural head, Sony HDV cameras.
- SYNCHRONIZATION: For the audiovisual data to be useful as a referent, synchronization with the other data was essential. This was achieved by recording all signal modes with a shared timecode (i.e. SMPTE) and using the synchronization tools developed for this purpose.
- EXAMPLE(S): As with all other signals, the audio and video data are available on the repository. Shown below is a still from a video of the audience during QUB December 2011.

6. CONTRIBUTION BY VIRGINIA TECH (ALSO INVOLVING WORK BY STANFORD UNIV.)

6.1 Mobile sensor system

- SIGNAL OR TECHNIQUE: MobileMuse / Senstream
- DESCRIPTION: Finger-worn device carrying four sensors for electrodermal activity, pulse oximetry, skin temperature, and tri-axial accelerometry, with a 1/8" audio output suitable as input to mobile phones, laptops, etc. References are [5] and [6].
- SCENARIO(S): Music/listener, listener/listener. Waseda 2012; reference is [7].

- METHODS: See the description above and the references (EDA, pulse oximetry, skin temperature). Interactor physiological data were captured through the MobileMuse device and streamed over UDP to text files on a host computer. Data were time-tagged with millisecond-level accuracy. Video data were also captured and stored alongside the participant physiological data.
- SYNCHRONIZATION: A sequence of tones was played at the beginning of each trial, and each participant squeezed their attached sensor in time with these tones. For Waseda 2012, our analysis has shown this method of synchronization to be acceptable. Our continuing work is investigating on-device SMPTE synchronization.
- EXAMPLE(S): Upload to the repository is currently in progress. On completion, all physiological and video data for Waseda 2012 will be available on the repository.

7. REFERENCES

[1] E. Maestre, J. Bonada, M. Blaauw, A. Pérez, and E. Guaus, "Acquisition of violin instrumental gestures using a commercial EMF device," International Computer Music Conference (ICMC), 2007.
[2] http://www.idmil.org/mocap/plug-in-gait+marker+placement.pdf
[3] E. Schoonderwaldt and M. Demoucron, "Extraction of bowing parameters from violin performance combining motion capture and sensors," J. Acoust. Soc. Am., vol. 126, no. 5, pp. 2695-2708, 2009.
[4] http://www.musicsensorsemotion.com/tag/tools/
[5] R. B. Knapp and B. C. Bortz, "MobileMuse: Integral Music Control Goes Mobile," presented at the 11th International Conference on New Interfaces for Musical Expression (NIME), Oslo, Norway, 2011, pp. 203-206.
[6] B. C. Bortz, S. Salazar, J. Jaimovich, R. B. Knapp, and G. Wang, "ShEMP: A Mobile Framework for Shared Emotion, Music, and Physiology," 14th ACM International Conference on Multimodal Interaction, 2012.
[7] B. Bortz, I. Crandell, R. B. Knapp, Y. Miwa, and S. Itai, "Comparison of the Effect of Proximity on Emotional Change in Real and Virtual Shared Spaces," submitted to the International Conference on Affective Computing and Intelligent Interaction, Geneva, Switzerland, 2013.