Analysis of the Occurrence of Laughter in Meetings


Kornel Laskowski (1,2) & Susanne Burger (2)
(1) interACT, Universität Karlsruhe
(2) interACT, Carnegie Mellon University

August 29, 2007

Introduction

Primary motivation: meeting understanding.

Vocalization taxonomy (a tree diagram on the slides, built up incrementally):
- verbal
  - words: statements, questions, backchannels, disruptions, floor grabbers
  - word fragments
- non-verbal
  - laughter
  - other

These types variously carry propositional content, manage interaction, or both; laughter is emotion-relevant.

Laughter detection is therefore particularly important for understanding both interaction and emotion, if laughter occurs frequently.

To date, for meetings, it is not known:
1. how much laughter there actually is
2. when it tends to occur

Text-Independent Modeling of Multi-Participant Meetings

To find interaction, model participants jointly.

The slides show a sequence of multi-channel vocal activity plots:
- essentially monologue
- multi-logue
- multi-logue with more participant involvement
- a mathematical artifact (the Haar wavelet basis)
- multi-logue with laughter

In the multi-logue with laughter, participants tend to wait to speak, but do not wait to laugh.

Three Questions of Interest

1. What is the quantity of laughter, relative to the quantity of speech?
2. How does the durational distribution of episodes of laughter differ from that of episodes of speech?
3. How do meeting participants appear to affect each other in their use of laughter, relative to their use of speech?

Laugh Bouts vs Talk Spurts

We contrast the occurrence of laughter (L) with that of speech (S).

talk spurts: contiguous per-participant intervals of speech (Shriberg et al., 2001), containing pauses no longer than 300 ms (as in NIST RT-06s SAD)

laugh bouts: contiguous per-participant intervals of laughter (Bachorowski et al., 2001), including recovery inhalation

S/L islands: contiguous per-group intervals in which at least one participant talks/laughs

[Figure: example timelines of talk spurts, laugh bouts, and their islands; not recovered.]
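The three definitions above are simple interval operations. Below is a minimal sketch in Python, assuming per-participant (start, end) times in seconds; the function names are ours, not from the paper:

```python
from typing import List, Tuple

Interval = Tuple[float, float]  # (start, end) in seconds

def make_spurts(segments: List[Interval], max_pause: float = 0.3) -> List[Interval]:
    """Merge one participant's speech segments into talk spurts,
    bridging pauses no longer than max_pause seconds (300 ms here)."""
    spurts: List[Interval] = []
    for start, end in sorted(segments):
        if spurts and start - spurts[-1][1] <= max_pause:
            spurts[-1] = (spurts[-1][0], max(spurts[-1][1], end))
        else:
            spurts.append((start, end))
    return spurts

def make_islands(per_participant: List[List[Interval]]) -> List[Interval]:
    """Per-group islands: intervals in which at least one participant
    is active, i.e. the union of all participants' intervals."""
    merged: List[Interval] = []
    for start, end in sorted(iv for p in per_participant for iv in p):
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

# A 200 ms pause is bridged; a 600 ms pause starts a new spurt.
print(make_spurts([(0.0, 1.0), (1.2, 2.0), (2.6, 3.0)]))
# [(0.0, 2.0), (2.6, 3.0)]
```

The same `make_spurts` logic applies to laugh bouts, with the pause threshold replaced by whatever gap is tolerated between adjacent laughter annotations.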

The ICSI Meeting Corpus

Naturally occurring, project-oriented conversations with varying numbers of participants; the largest such corpus available.

[Table: meeting types Bed, Bmr, Bro, and other, with the number of meetings and the modal/min/max number of participants per type; the numeric cells were not recovered.]

Rarely, meetings contain additional, uninstrumented participants (we ignore them).

We use all 75 meetings: 66.3 hours of conversation.

Identifying Laughter in the ICSI Corpus

Laughter is already annotated with rich XML-style mark-up. Therefore, for our purposes, data preprocessing consists of:
1. identifying laughter in the orthographic transcription
2. specifying endpoints for identified laughter

Excerpt from the orthographic, time-segmented transcription of speaker contributions (.stm):

    Bmr011 me013 chan  Yeah.
    Bmr011 mn005 chan  Film-maker.
    Bmr011 fe016 chan  <Emphasis> colorful. </Emphasis> <Comment Description="while laughing"/>
    Bmr011 me011 chanb Of beeps, yeah.
    Bmr011 fe008 chan  <Pause/> of m- one hour of - <Comment Description="while laughing"/>
    Bmr011 mn014 chan  Yeah.
    Bmr011 me013 chan  <VocalSound Description="laugh"/>
    Bmr011 mn014 chan  Yeah.
    Bmr011 mn005 chan  Is -
    Bmr011 me011 chanb <VocalSound Description="laugh"/>
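Step 1, finding laughter in the mark-up, amounts to pattern matching over the transcript lines. A sketch, assuming lines in the format of the excerpt above (the sample lines and names are illustrative, not the authors' code):

```python
import re

# Illustrative lines in the style of the .stm excerpt above.
stm_lines = [
    'Bmr011 me013 chan Yeah.',
    'Bmr011 me013 chan <VocalSound Description="laugh"/>',
    'Bmr011 fe008 chan of m- one hour of - <Comment Description="while laughing"/>',
]

# Match VocalSound or Comment tags whose description mentions laughter.
LAUGH_TAG = re.compile(r'<(VocalSound|Comment) Description="([^"]*laugh[^"]*)"/>')

laughs = [(m.group(1), m.group(2))
          for line in stm_lines
          for m in LAUGH_TAG.finditer(line)]
print(laughs)
# [('VocalSound', 'laugh'), ('Comment', 'while laughing')]
```

In practice the description inventory is richer (see the VocalSound table below the excerpt in the talk), so each matched description still has to be accepted or rejected as laughter by hand.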

Sample VocalSound Instances

Rank  Count  VocalSound Description
  ..     ..  laugh
  ..     ..  breath
  ..     ..  inbreath
  ..     ..  mouth
  ..     ..  breath-laugh
  ..     ..  laugh-breath
  46      6  cough-laugh
  63      3  laugh, "hmmph"
  69      3  breath while smiling
  75      2  very long laugh

(The rank and count figures for the most frequent tokens were not recovered.)

Laughter is by far the most common non-verbal VocalSound; likewise for Comment instances.

Segmenting Identified Laughter Instances

We found non-farfield VocalSound laughs (total not recovered). Those adjacent to a time-stamped utterance boundary or lexical item had their endpoints derived automatically; 725 needed to be segmented manually.

We found 1108 non-farfield Comment laughs; all needed to be segmented manually.

Manual segmentation was performed by one annotator, and checked by at least one other annotator.

Merging immediately adjacent VocalSound and Comment instances, and removing transcribed instances for which we found counterevidence, resulted in the final set of bouts (total not recovered).
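The merge of immediately adjacent VocalSound and Comment instances can be sketched as an interval merge that keeps track of which annotation sources contributed to each bout (an illustrative implementation under our own naming, not the authors' code):

```python
from typing import List, Set, Tuple

Event = Tuple[float, float, str]          # (start, end, source)
Bout = Tuple[float, float, Set[str]]      # (start, end, sources)

def merge_adjacent(events: List[Event], gap: float = 0.0) -> List[Bout]:
    """Merge laugh events that touch or overlap (within `gap` seconds)
    into single bouts, recording their annotation sources."""
    bouts: List[Bout] = []
    for start, end, src in sorted(events):
        if bouts and start - bouts[-1][1] <= gap:
            s0, e0, srcs = bouts[-1]
            bouts[-1] = (s0, max(e0, end), srcs | {src})
        else:
            bouts.append((start, end, {src}))
    return bouts

# A VocalSound laugh immediately followed by a "while laughing" Comment
# becomes one bout; an isolated laugh stays separate.
events = [(10.0, 11.2, "VocalSound"), (11.2, 11.8, "Comment"),
          (20.0, 21.0, "VocalSound")]
print(merge_adjacent(events))
```

Removing instances with counterevidence would simply be a filter on `events` before the merge.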

Speech vs Laughter by Time

By personal time (summed over participants and meetings):
- total recorded audio (figure not recovered), in hours
- 55.2 hours spent in talk spurts (S): 12.47%
- 5.6 hours spent in laugh bouts (L): 1.27%

Speech vs Laughter by Time, by Participant

[Figure not recovered.]

Talk Spurt Duration vs Laugh Bout Duration

[Figure not recovered.]

Vocalization Overlap

[Table: for S, L, and S∪L, the vocalizing time in hours and the vocal activity per participant per meeting, broken down by the number of simultaneously vocalizing participants; the numeric cells were not recovered.]

- In S only, 84.6% of vocalization is not overlapped.
- In L only, 35.7% of vocalization is not overlapped.
- The proportion of laughed speech is negligible.
- There is 3 times as much 3-participant overlap when considering S∪L as opposed to S only.
- There is 25 times as much 4-participant overlap when considering S∪L as opposed to S only.
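The quantities in the overlap table can be computed with a sweep over interval boundaries. A sketch assuming per-participant interval lists as before (function and variable names are ours):

```python
from typing import Dict, List, Tuple

Interval = Tuple[float, float]

def overlap_time(per_participant: List[List[Interval]]) -> Dict[int, float]:
    """Return, for each k, the total time (seconds) during which exactly
    k participants vocalize simultaneously."""
    events: List[Tuple[float, int]] = []
    for intervals in per_participant:
        for start, end in intervals:
            events.append((start, +1))
            events.append((end, -1))
    events.sort()  # ends (-1) sort before starts (+1) at the same time
    time_at: Dict[int, float] = {}
    k, prev_t = 0, None
    for t, delta in events:
        if prev_t is not None and k > 0:
            time_at[k] = time_at.get(k, 0.0) + (t - prev_t)
        k += delta
        prev_t = t
    return time_at

# Three participants; overlap peaks at 3 between t=2.0 and t=2.5.
speech = [[(0.0, 4.0)], [(1.0, 3.0)], [(2.0, 2.5)]]
print(overlap_time(speech))
# {1: 2.0, 2: 1.5, 3: 0.5}
```

Running this separately on the S segmentation, the L segmentation, and their union would reproduce the three column groups of the table.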

Overlap Dynamics

Does laughter differ from speech in the way in which overlap arises and is resolved?

We look at transition probabilities under a first-order Markov assumption:
1. discretize the L and S segmentations using non-overlapping analysis frames
2. train an Extended Degree-of-Overlap (EDO) model on the discretized L and S segmentations, yielding probabilities such as P({A} → {A,B}), P({A,B} → {A}), P({A} → {B}), etc.
3. compare the inferred probabilities for L and S
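Steps 1 and 2 can be sketched as follows, with one simplification: we keep raw participant identities in each frame state rather than applying the EDO model's state abstraction. All names here are ours:

```python
from collections import Counter
from typing import Dict, FrozenSet, List, Tuple

Interval = Tuple[float, float]

def frame_states(per_participant: Dict[str, List[Interval]],
                 frame: float = 0.5,
                 duration: float = None) -> List[FrozenSet[str]]:
    """Discretize per-participant segmentations into non-overlapping
    frames; each frame's state is the set of active participants."""
    if duration is None:
        duration = max(e for ivs in per_participant.values() for _, e in ivs)
    n = int(round(duration / frame))
    states = []
    for i in range(n):
        t0, t1 = i * frame, (i + 1) * frame
        active = frozenset(
            p for p, ivs in per_participant.items()
            if any(s < t1 and e > t0 for s, e in ivs))
        states.append(active)
    return states

def transition_probs(states: List[FrozenSet[str]]) -> Dict[tuple, float]:
    """Maximum-likelihood first-order Markov transition probabilities
    P(state at t+1 | state at t), estimated from one state sequence."""
    pair_counts = Counter(zip(states, states[1:]))
    from_counts = Counter(states[:-1])
    return {(a, b): c / from_counts[a] for (a, b), c in pair_counts.items()}

# Participant B starts laughing while A is still active:
states = frame_states({"A": [(0.0, 1.0)], "B": [(0.5, 1.5)]}, frame=0.5)
print(states)  # [{A}, {A, B}, {B}] as frozensets
```

Step 3 is then a direct comparison of the two dictionaries produced for the L and S segmentations.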

Overlap Dynamics: Results

[Table: select EDO transition probabilities for S and L, using 500 ms frames, from the state at t to the state at t+1, over the states {A}, {A,B}, and {A,B,C,...}; the probability values were not recovered.]

Conclusions

Based on the ICSI meetings:
1. approximately 9% of vocalizing time is spent on laughter, but participants vary widely (0%-30%)
2. on average, laughter occurs once a minute
3. laughter accounts for the large majority of 3-participant overlap
4. in contrast to speech, once laughter overlap is incurred, it is most likely to persist; i.e. 3-participant speech overlap is 2.5 times more likely than laughter to be resolved within 500 ms

Acknowledgments

We would like to thank our annotators, Jörg Brunstein and Matthew Bell; Alan Black and Liz Shriberg for discussion; and the EU CHIL project for funding.


Exercise 1: Muscles in Face used for Smiling and Frowning Aim: To study the EMG activity in muscles of the face that work to smile or frown. Experiment HP-9: Facial Electromyograms (EMG) and Emotion Exercise 1: Muscles in Face used for Smiling and Frowning Aim: To study the EMG activity in muscles of the face that work to smile or frown. Procedure

More information

Real-time spectrum analyzer. Gianfranco Miele, Ph.D

Real-time spectrum analyzer. Gianfranco Miele, Ph.D Real-time spectrum analyzer Gianfranco Miele, Ph.D www.eng.docente.unicas.it/gianfranco_miele g.miele@unicas.it The evolution of RF signals Nowadays we can assist to the increasingly widespread success

More information

Using the new psychoacoustic tonality analyses Tonality (Hearing Model) 1

Using the new psychoacoustic tonality analyses Tonality (Hearing Model) 1 02/18 Using the new psychoacoustic tonality analyses 1 As of ArtemiS SUITE 9.2, a very important new fully psychoacoustic approach to the measurement of tonalities is now available., based on the Hearing

More information

arxiv: v1 [cs.sd] 8 Jun 2016

arxiv: v1 [cs.sd] 8 Jun 2016 Symbolic Music Data Version 1. arxiv:1.5v1 [cs.sd] 8 Jun 1 Christian Walder CSIRO Data1 7 London Circuit, Canberra,, Australia. christian.walder@data1.csiro.au June 9, 1 Abstract In this document, we introduce

More information

homework solutions for: Homework #4: Signal-to-Noise Ratio Estimation submitted to: Dr. Joseph Picone ECE 8993 Fundamentals of Speech Recognition

homework solutions for: Homework #4: Signal-to-Noise Ratio Estimation submitted to: Dr. Joseph Picone ECE 8993 Fundamentals of Speech Recognition INSTITUTE FOR SIGNAL AND INFORMATION PROCESSING homework solutions for: Homework #4: Signal-to-Noise Ratio Estimation submitted to: Dr. Joseph Picone ECE 8993 Fundamentals of Speech Recognition May 3,

More information

Acoustic synchronization: Rebuttal of Thomas reply to Linsker et al.

Acoustic synchronization: Rebuttal of Thomas reply to Linsker et al. Acoustic synchronization: Rebuttal of Thomas reply to Linsker et al. R Linsker and RL Garwin IBM T. J. Watson Research Center, P. O. Box 218, Yorktown Heights 10598, USA H Chernoff Statistics Department,

More information

VISUAL CONTENT BASED SEGMENTATION OF TALK & GAME SHOWS. O. Javed, S. Khan, Z. Rasheed, M.Shah. {ojaved, khan, zrasheed,

VISUAL CONTENT BASED SEGMENTATION OF TALK & GAME SHOWS. O. Javed, S. Khan, Z. Rasheed, M.Shah. {ojaved, khan, zrasheed, VISUAL CONTENT BASED SEGMENTATION OF TALK & GAME SHOWS O. Javed, S. Khan, Z. Rasheed, M.Shah {ojaved, khan, zrasheed, shah}@cs.ucf.edu Computer Vision Lab School of Electrical Engineering and Computer

More information

Temporal data mining for root-cause analysis of machine faults in automotive assembly lines

Temporal data mining for root-cause analysis of machine faults in automotive assembly lines 1 Temporal data mining for root-cause analysis of machine faults in automotive assembly lines Srivatsan Laxman, Basel Shadid, P. S. Sastry and K. P. Unnikrishnan Abstract arxiv:0904.4608v2 [cs.lg] 30 Apr

More information

Department of Electrical & Electronic Engineering Imperial College of Science, Technology and Medicine. Project: Real-Time Speech Enhancement

Department of Electrical & Electronic Engineering Imperial College of Science, Technology and Medicine. Project: Real-Time Speech Enhancement Department of Electrical & Electronic Engineering Imperial College of Science, Technology and Medicine Project: Real-Time Speech Enhancement Introduction Telephones are increasingly being used in noisy

More information

Fusion for Audio-Visual Laughter Detection

Fusion for Audio-Visual Laughter Detection Fusion for Audio-Visual Laughter Detection Boris Reuderink September 13, 7 2 Abstract Laughter is a highly variable signal, and can express a spectrum of emotions. This makes the automatic detection of

More information

Analysis of the effects of signal distance on spectrograms

Analysis of the effects of signal distance on spectrograms 2014 Analysis of the effects of signal distance on spectrograms SGHA 8/19/2014 Contents Introduction... 3 Scope... 3 Data Comparisons... 5 Results... 10 Recommendations... 10 References... 11 Introduction

More information

Transcription of the Singing Melody in Polyphonic Music

Transcription of the Singing Melody in Polyphonic Music Transcription of the Singing Melody in Polyphonic Music Matti Ryynänen and Anssi Klapuri Institute of Signal Processing, Tampere University Of Technology P.O.Box 553, FI-33101 Tampere, Finland {matti.ryynanen,

More information

Topic 11. Score-Informed Source Separation. (chroma slides adapted from Meinard Mueller)

Topic 11. Score-Informed Source Separation. (chroma slides adapted from Meinard Mueller) Topic 11 Score-Informed Source Separation (chroma slides adapted from Meinard Mueller) Why Score-informed Source Separation? Audio source separation is useful Music transcription, remixing, search Non-satisfying

More information

Assessment may include recording to be evaluated by students, teachers, and/or administrators in addition to live performance evaluation.

Assessment may include recording to be evaluated by students, teachers, and/or administrators in addition to live performance evaluation. Title of Unit: Choral Concert Performance Preparation Repertoire: Simple Gifts (Shaker Song). Adapted by Aaron Copland, Transcribed for Chorus by Irving Fine. Boosey & Hawkes, 1952. Level: NYSSMA Level

More information

Revolutionizing the Transfer Die Industry for Maximum Production. STM Manufacturing, Inc.

Revolutionizing the Transfer Die Industry for Maximum Production. STM Manufacturing, Inc. Revolutionizing the Transfer Die Industry for Maximum Production. STM Manufacturing, Inc. Design/Build Case Study 7 station, 2-out, Support Member (servo press, servo transfer) Problems: Previous home-line

More information

LAB 1: Plotting a GM Plateau and Introduction to Statistical Distribution. A. Plotting a GM Plateau. This lab will have two sections, A and B.

LAB 1: Plotting a GM Plateau and Introduction to Statistical Distribution. A. Plotting a GM Plateau. This lab will have two sections, A and B. LAB 1: Plotting a GM Plateau and Introduction to Statistical Distribution This lab will have two sections, A and B. Students are supposed to write separate lab reports on section A and B, and submit the

More information

Medium and High Voltage Circuit Breakers Characteristic Time Quantities of the Circuit Breaker with Applications

Medium and High Voltage Circuit Breakers Characteristic Time Quantities of the Circuit Breaker with Applications Workshop 6: Maintenance and monitoring Medium and High Voltage Circuit Breakers Characteristic Time Quantities of the Circuit Breaker with Applications Alexander Herrera OMICRON electronics GmbH 3 December

More information

Audio Feature Extraction for Corpus Analysis

Audio Feature Extraction for Corpus Analysis Audio Feature Extraction for Corpus Analysis Anja Volk Sound and Music Technology 5 Dec 2017 1 Corpus analysis What is corpus analysis study a large corpus of music for gaining insights on general trends

More information

Automatic Rhythmic Notation from Single Voice Audio Sources

Automatic Rhythmic Notation from Single Voice Audio Sources Automatic Rhythmic Notation from Single Voice Audio Sources Jack O Reilly, Shashwat Udit Introduction In this project we used machine learning technique to make estimations of rhythmic notation of a sung

More information

Experiments with Fisher Data

Experiments with Fisher Data Experiments with Fisher Data Gunnar Evermann, Bin Jia, Kai Yu, David Mrva Ricky Chan, Mark Gales, Phil Woodland May 16th 2004 EARS STT Meeting May 2004 Montreal Overview Introduction Pre-processing 2000h

More information

Finding Alternative Musical Scales

Finding Alternative Musical Scales Finding Alternative Musical Scales John Hooker Carnegie Mellon University October 2017 1 Advantages of Classical Scales Pitch frequencies have simple ratios. Rich and intelligible harmonies Multiple keys

More information

Browsing News and Talk Video on a Consumer Electronics Platform Using Face Detection

Browsing News and Talk Video on a Consumer Electronics Platform Using Face Detection Browsing News and Talk Video on a Consumer Electronics Platform Using Face Detection Kadir A. Peker, Ajay Divakaran, Tom Lanning Mitsubishi Electric Research Laboratories, Cambridge, MA, USA {peker,ajayd,}@merl.com

More information

Critical Thinking 4.2 First steps in analysis Overcoming the natural attitude Acknowledging the limitations of perception

Critical Thinking 4.2 First steps in analysis Overcoming the natural attitude Acknowledging the limitations of perception 4.2.1. Overcoming the natural attitude The term natural attitude was used by the philosopher Alfred Schütz to describe the practical, common-sense approach that we all adopt in our daily lives. We assume

More information

TRANSCRIBING GUIDELINES

TRANSCRIBING GUIDELINES TRANSCRIBING GUIDELINES Transcribing the interview is the most tedious part of the oral history process, but in many ways one of the most important. A transcript provides future researchers a useful format

More information

Expressive Singing Synthesis based on Unit Selection for the Singing Synthesis Challenge 2016

Expressive Singing Synthesis based on Unit Selection for the Singing Synthesis Challenge 2016 Expressive Singing Synthesis based on Unit Selection for the Singing Synthesis Challenge 2016 Jordi Bonada, Martí Umbert, Merlijn Blaauw Music Technology Group, Universitat Pompeu Fabra, Spain jordi.bonada@upf.edu,

More information

EXTERNAL. A digital version of this document is available to download and submit online at ADDRESS POST CODE

EXTERNAL. A digital version of this document is available to download and submit online at   ADDRESS POST CODE CHECK LIST EXTERNAL i A digital version of this document is available to download and submit online at www.thorlux.com/commissioning To secure your preferred commissioning date please complete this form

More information

USING MUSICAL STRUCTURE TO ENHANCE AUTOMATIC CHORD TRANSCRIPTION

USING MUSICAL STRUCTURE TO ENHANCE AUTOMATIC CHORD TRANSCRIPTION 10th International Society for Music Information Retrieval Conference (ISMIR 2009) USING MUSICL STRUCTURE TO ENHNCE UTOMTIC CHORD TRNSCRIPTION Matthias Mauch, Katy Noland, Simon Dixon Queen Mary University

More information

Story Tracking in Video News Broadcasts. Ph.D. Dissertation Jedrzej Miadowicz June 4, 2004

Story Tracking in Video News Broadcasts. Ph.D. Dissertation Jedrzej Miadowicz June 4, 2004 Story Tracking in Video News Broadcasts Ph.D. Dissertation Jedrzej Miadowicz June 4, 2004 Acknowledgements Motivation Modern world is awash in information Coming from multiple sources Around the clock

More information

CHAPTER I INTRODUCTION. language such as in a play or a film. Meanwhile the written dialogue is a dialogue

CHAPTER I INTRODUCTION. language such as in a play or a film. Meanwhile the written dialogue is a dialogue CHAPTER I INTRODUCTION 1.1 Background of the Study Dialogue, according to Oxford 7 th edition, is a conversation in a book, play or film. While the conversation itself is an informal talk involving a small

More information

CS229 Project Report Polyphonic Piano Transcription

CS229 Project Report Polyphonic Piano Transcription CS229 Project Report Polyphonic Piano Transcription Mohammad Sadegh Ebrahimi Stanford University Jean-Baptiste Boin Stanford University sadegh@stanford.edu jbboin@stanford.edu 1. Introduction In this project

More information

Week 14 Query-by-Humming and Music Fingerprinting. Roger B. Dannenberg Professor of Computer Science, Art and Music Carnegie Mellon University

Week 14 Query-by-Humming and Music Fingerprinting. Roger B. Dannenberg Professor of Computer Science, Art and Music Carnegie Mellon University Week 14 Query-by-Humming and Music Fingerprinting Roger B. Dannenberg Professor of Computer Science, Art and Music Overview n Melody-Based Retrieval n Audio-Score Alignment n Music Fingerprinting 2 Metadata-based

More information

Metonymy and Metaphor in Cross-media Semantic Interplay

Metonymy and Metaphor in Cross-media Semantic Interplay Metonymy and Metaphor in Cross-media Semantic Interplay The COSMOROE Framework & Annotated Corpus Katerina Pastra Institute for Language & Speech Processing ATHENA Research Center Athens, Greece kpastra@ilsp.gr

More information

Interface Practices Subcommittee SCTE STANDARD SCTE Measurement Procedure for Noise Power Ratio

Interface Practices Subcommittee SCTE STANDARD SCTE Measurement Procedure for Noise Power Ratio Interface Practices Subcommittee SCTE STANDARD SCTE 119 2018 Measurement Procedure for Noise Power Ratio NOTICE The Society of Cable Telecommunications Engineers (SCTE) / International Society of Broadband

More information

Magnetic Rower. Manual Jetstream JMR-5000

Magnetic Rower. Manual Jetstream JMR-5000 Magnetic Rower Manual Jetstream JMR-5000 EXPLODED DRAWING PARTS LIST Assembly front stabilizer with main frame Step 1. Secure the front stabilizer (A2) and main frame(a1) using carriage bolt(1) & Nut(2).

More information

Hidden Markov Model based dance recognition

Hidden Markov Model based dance recognition Hidden Markov Model based dance recognition Dragutin Hrenek, Nenad Mikša, Robert Perica, Pavle Prentašić and Boris Trubić University of Zagreb, Faculty of Electrical Engineering and Computing Unska 3,

More information

Sampling Issues in Image and Video

Sampling Issues in Image and Video Sampling Issues in Image and Video Spring 06 Instructor: K. J. Ray Liu ECE Department, Univ. of Maryland, College Park Overview and Logistics Last Time: Motion analysis Geometric relations and manipulations

More information

VISSIM Tutorial. Starting VISSIM and Opening a File CE 474 8/31/06

VISSIM Tutorial. Starting VISSIM and Opening a File CE 474 8/31/06 VISSIM Tutorial Starting VISSIM and Opening a File Click on the Windows START button, go to the All Programs menu and find the PTV_Vision directory. Start VISSIM by selecting the executable file. The following

More information

Measurement of automatic brightness control in televisions critical for effective policy-making

Measurement of automatic brightness control in televisions critical for effective policy-making Measurement of automatic brightness control in televisions critical for effective policy-making Michael Scholand CLASP Europe Flat 6 Bramford Court High Street, Southgate London, N14 6DH United Kingdom

More information

Text Type Classification for the Historical DTA Corpus

Text Type Classification for the Historical DTA Corpus Text Type Classification for the Historical DTA Corpus Susanne Haaf Deutsches Textarchiv, BBAW Berlin NeDiMAH-CLARIN-Workshop Exploring Historical Sources with Language Technology: Results and Perspectives

More information

Laughter Among Deaf Signers

Laughter Among Deaf Signers Laughter Among Deaf Signers Robert R. Provine University of Maryland, Baltimore County Karen Emmorey San Diego State University The placement of laughter in the speech of hearing individuals is not random

More information

... A Pseudo-Statistical Approach to Commercial Boundary Detection. Prasanna V Rangarajan Dept of Electrical Engineering Columbia University

... A Pseudo-Statistical Approach to Commercial Boundary Detection. Prasanna V Rangarajan Dept of Electrical Engineering Columbia University A Pseudo-Statistical Approach to Commercial Boundary Detection........ Prasanna V Rangarajan Dept of Electrical Engineering Columbia University pvr2001@columbia.edu 1. Introduction Searching and browsing

More information

Computer Coordination With Popular Music: A New Research Agenda 1

Computer Coordination With Popular Music: A New Research Agenda 1 Computer Coordination With Popular Music: A New Research Agenda 1 Roger B. Dannenberg roger.dannenberg@cs.cmu.edu http://www.cs.cmu.edu/~rbd School of Computer Science Carnegie Mellon University Pittsburgh,

More information

2014A Cappella Harmonv Academv Handout #2 Page 1. Sweet Adelines International Balance & Blend Joan Boutilier

2014A Cappella Harmonv Academv Handout #2 Page 1. Sweet Adelines International Balance & Blend Joan Boutilier 2014A Cappella Harmonv Academv Page 1 The Role of Balance within the Judging Categories Music: Part balance to enable delivery of complete, clear, balanced chords Balance in tempo choice and variation

More information

TR 038 SUBJECTIVE EVALUATION OF HYBRID LOG GAMMA (HLG) FOR HDR AND SDR DISTRIBUTION

TR 038 SUBJECTIVE EVALUATION OF HYBRID LOG GAMMA (HLG) FOR HDR AND SDR DISTRIBUTION SUBJECTIVE EVALUATION OF HYBRID LOG GAMMA (HLG) FOR HDR AND SDR DISTRIBUTION EBU TECHNICAL REPORT Geneva March 2017 Page intentionally left blank. This document is paginated for two sided printing Subjective

More information

R&S FSW-B512R Real-Time Spectrum Analyzer 512 MHz Specifications

R&S FSW-B512R Real-Time Spectrum Analyzer 512 MHz Specifications R&S FSW-B512R Real-Time Spectrum Analyzer 512 MHz Specifications Data Sheet Version 02.00 CONTENTS Definitions... 3 Specifications... 4 Level... 5 Result display... 6 Trigger... 7 Ordering information...

More information

Music Information Retrieval Using Audio Input

Music Information Retrieval Using Audio Input Music Information Retrieval Using Audio Input Lloyd A. Smith, Rodger J. McNab and Ian H. Witten Department of Computer Science University of Waikato Private Bag 35 Hamilton, New Zealand {las, rjmcnab,

More information

Pre-processing of revolution speed data in ArtemiS SUITE 1

Pre-processing of revolution speed data in ArtemiS SUITE 1 03/18 in ArtemiS SUITE 1 Introduction 1 TTL logic 2 Sources of error in pulse data acquisition 3 Processing of trigger signals 5 Revolution speed acquisition with complex pulse patterns 7 Introduction

More information

Knowledge-Based Systems

Knowledge-Based Systems Knowledge-Based Systems xxx (2014) xxx xxx Contents lists available at ScienceDirect Knowledge-Based Systems journal homepage: www.elsevier.com/locate/knosys Time for laughter Francesca Bonin a,b,, Nick

More information

Composer Style Attribution

Composer Style Attribution Composer Style Attribution Jacqueline Speiser, Vishesh Gupta Introduction Josquin des Prez (1450 1521) is one of the most famous composers of the Renaissance. Despite his fame, there exists a significant

More information

5. Analysing Casual Conversation Contents

5. Analysing Casual Conversation Contents English Discourse Analysis: Topic 5: Analysing Casual Conversation Rachel Whittaker (Grp 41) Mick O Donnell, Laura Hidalgo (Grp 46) Contents 5.1 What is Casual Conversation? 5.2 Transcribing casual conversation

More information

C8491 C8000 1/17. digital audio modular processing system. 3G/HD/SD-SDI DSP 4/8/16 audio channels. features. block diagram

C8491 C8000 1/17. digital audio modular processing system. 3G/HD/SD-SDI DSP 4/8/16 audio channels. features. block diagram features 4 / 8 / 16 channel LevelMagic2 SDI-DSP with level or loudness (ITU-BS.1770-1/ ITU-BS.1770-2, EBU R128) control 16 channel 3G/HD/SD-SDI de-embedder 16 in 16 de-embedder matrix 16 channel 3G/HD/SD-SDI

More information

Laboratory Assignment 3. Digital Music Synthesis: Beethoven s Fifth Symphony Using MATLAB

Laboratory Assignment 3. Digital Music Synthesis: Beethoven s Fifth Symphony Using MATLAB Laboratory Assignment 3 Digital Music Synthesis: Beethoven s Fifth Symphony Using MATLAB PURPOSE In this laboratory assignment, you will use MATLAB to synthesize the audio tones that make up a well-known

More information

How to give a telling talk

How to give a telling talk Communication in Computer Science How to give a telling talk Olivier Danvy version of 26 Oct 2015 at 10:30 Olivier Danvy, 2015-10-05 0 / 50 Before the talk: the form Be aware of the microphone, the beamer,

More information

jsymbolic 2: New Developments and Research Opportunities

jsymbolic 2: New Developments and Research Opportunities jsymbolic 2: New Developments and Research Opportunities Cory McKay Marianopolis College and CIRMMT Montreal, Canada 2 / 30 Topics Introduction to features (from a machine learning perspective) And how

More information

Music 209 Advanced Topics in Computer Music Lecture 4 Time Warping

Music 209 Advanced Topics in Computer Music Lecture 4 Time Warping Music 209 Advanced Topics in Computer Music Lecture 4 Time Warping 2006-2-9 Professor David Wessel (with John Lazzaro) (cnmat.berkeley.edu/~wessel, www.cs.berkeley.edu/~lazzaro) www.cs.berkeley.edu/~lazzaro/class/music209

More information

The MAHNOB Laughter Database. Stavros Petridis, Brais Martinez, Maja Pantic

The MAHNOB Laughter Database. Stavros Petridis, Brais Martinez, Maja Pantic Accepted Manuscript The MAHNOB Laughter Database Stavros Petridis, Brais Martinez, Maja Pantic PII: S0262-8856(12)00146-1 DOI: doi: 10.1016/j.imavis.2012.08.014 Reference: IMAVIS 3193 To appear in: Image

More information

ME 515 Mechatronics. Introduction to Digital Electronics

ME 515 Mechatronics. Introduction to Digital Electronics ME 55 Mechatronics /5/26 ME 55 Mechatronics Digital Electronics Asanga Ratnaweera Department of Faculty of Engineering University of Peradeniya Tel: 8239 (3627) Email: asangar@pdn.ac.lk Introduction to

More information

Reference. TDS7000 Series Digital Phosphor Oscilloscopes

Reference. TDS7000 Series Digital Phosphor Oscilloscopes Reference TDS7000 Series Digital Phosphor Oscilloscopes 07-070-00 0707000 To Use the Front Panel You can use the dedicated, front-panel knobs and buttons to do the most common operations. Turn INTENSITY

More information