
This full text version, available on TeesRep, is the post-print (final version prior to publication) of:

Charles, F. et al. (2007) 'Affective interactive narrative in the CALLAS Project', 4th International Conference on Virtual Storytelling (ICVS 2007), Saint-Malo, 5-7 December, in 4th International Conference on Virtual Storytelling Proceedings. Heidelberg: Springer Berlin, pp. 210-213.

This document was downloaded from http://tees.openrepository.com/tees/handle/10149/100226

Please do not use this version for citation purposes.

All items in TeesRep are protected by copyright, with all rights reserved, unless otherwise indicated.

TeesRep: Teesside University's Research Repository
http://tees.openrepository.com/tees/

Affective Interactive Narrative in the CALLAS Project

Fred Charles 1, Samuel Lemercier 1, Thurid Vogt 2, Nikolaus Bee 2, Maurizio Mancini 3, Jérôme Urbain 4, Marc Price 5, Elisabeth André 2, Catherine Pélachaud 3, and Marc Cavazza 1

1 School of Computing, University of Teesside, United Kingdom. {f.charles, s.lemercier, m.o.cavazza}@tees.ac.uk
2 Multimedia Concepts and Applications Group, Augsburg University, Germany. {vogt, bee, andre}@informatik.uni-augsburg.de
3 IUT of Montreuil, University Paris VIII, France. {m.mancini, c.pelachaud}@iut-univ.paris8.fr
4 Faculté Polytechnique de Mons, Department of Electrical Engineering, TCTS Lab, Belgium. jerome.urbain@fpms.ac.be
5 BBC Research, Tadworth, Surrey, United Kingdom. marc.price@rd.bbc.co.uk

1 Introduction

Interactive Narrative relies on the ability of the user (and spectator) to intervene in the course of events so as to influence the unfolding of the story. This influence obviously differs depending on the Interactive Narrative paradigm being implemented, i.e. whether the user is a spectator or takes part in the action herself as a character. If we consider the case of an active spectator influencing the narrative, most systems implemented to date [1] have been based on the direct intervention of the user, either on physical objects staged in the virtual narrative environment or on the characters themselves via natural language input [1][3]. While this certainly empowers the spectator, there may be limits to the realism of that mode of interaction if Interactive Narrative were to be transposed to a vast audience: spontaneous audience reactions are not always as structured and well-defined as previous Interactive Narrative systems have assumed. If we consider that the narrative experience can essentially be interpreted as generating various emotional states (e.g. tension) which derive from its aesthetic qualities (e.g. suspense [6]), a logical consequence is to analyse the spectator's emotional reactions and use these as input to an Interactive Narrative system. Such an approach would constitute a feedback loop between an Interactive Narrative inducing emotions and the analysis of the quality and intensity of the emotions expressed by the user. It is notoriously difficult to accurately detect and categorise the spontaneous affective states that occur when users are engaged with various media.

This is why we have revised the affective loop described above and, in an attempt to improve the elicitation of the user's emotional reactions, inserted a virtual agent acting as a co-spectator into that loop (see an illustration of the installation in Fig. 1). The system can now be described as comprising: i) an interactive narrative using traditional plan-based generative techniques, able to create situations exhibiting different levels of tension or suspense (by featuring the main character in dangerous situations); ii) an expressive virtual character (implemented using the Greta system [4]), whose role is, by accessing the internal data of the narrative planner, to exaggerate the emotional value of a given scene so as to make it more visible to the user; and iii) affective input devices, which at the current stage of development of the system are limited to an affective speech detection system (EmoVoice [5]) and a multi-keyword spotting system detecting emotionally charged words and expressions.

Fig. 1. Affective Interactive Narrative installation.

Overall, the system operates by generating narrative situations of various levels of intensity and tension, which are conveyed to the user via the additional channel of the expressive character. The system then detects, in real time, the emotional state of the user, in this first version mostly through their vocal reactions (1). Finally, the detected emotion is used as feedback on the story generation system to reinforce (positive feedback) or slow down (negative feedback) the narrative tension of the generated story.

(1) Because vocal reactions correspond to a strong level of arousal, the expressive character plays an active role in increasing the user's reactivity. Future versions of the system will include the analysis of paralinguistic input (including silence) and video analysis of the user's posture.
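To make the control flow of this feedback loop concrete, the following minimal Java sketch outlines one iteration of it. All type and method names here (NarrativeEngine, ExpressiveAgent, AffectiveInput, adjustTension, and so on) are hypothetical placeholders introduced for illustration; they are not interfaces of the actual CALLAS prototype.

    // Hypothetical sketch of the affective feedback loop; names and weights are
    // illustrative assumptions, not the CALLAS implementation.
    enum EmotionClass { NEUTRAL, POSITIVE_ACTIVE, NEGATIVE_PASSIVE }

    final class NarrativeSituation { /* placeholder for a generated scene */ }

    interface NarrativeEngine {
        NarrativeSituation nextSituation();   // plan-based generation of the next scene
        void adjustTension(double feedback);  // positive reinforces, negative slows down tension
    }

    interface ExpressiveAgent {
        void emphasise(NarrativeSituation s); // Greta-style exaggeration of the scene's emotional value
    }

    interface AffectiveInput {
        EmotionClass listen();                // e.g. EmoVoice output combined with keyword spotting
    }

    final class AffectiveLoop {
        private final NarrativeEngine engine;
        private final ExpressiveAgent coSpectator;
        private final AffectiveInput input;

        AffectiveLoop(NarrativeEngine e, ExpressiveAgent a, AffectiveInput i) {
            engine = e; coSpectator = a; input = i;
        }

        void step() {
            NarrativeSituation scene = engine.nextSituation();
            coSpectator.emphasise(scene);           // make the tension of the scene visible to the user
            EmotionClass reaction = input.listen(); // detect the user's reaction in real time
            if (reaction == EmotionClass.POSITIVE_ACTIVE) {
                engine.adjustTension(+1.0);         // positive feedback: reinforce narrative tension
            } else if (reaction == EmotionClass.NEGATIVE_PASSIVE) {
                engine.adjustTension(-1.0);         // negative feedback: slow the tension down
            }                                       // neutral: no change
        }
    }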

2 System Overview and Results

We present a brief overview of the integrated system for the Affective Interactive Narrative installation (see Fig. 2), as well as some early results. The visualisation component is drawn from the character-based interactive storytelling system developed by Cavazza et al. [1] on top of the UT 2003 computer game engine (Epic Games).

Fig. 2. System overview illustrating the sequence of processes.

The narrative engine is essentially an HTN planner determining, for the main virtual actor, which action it should take next. The selected actions are passed to the engine, in which they are associated with corresponding scripts describing the physical realisation of the action (including specific animations). Our first experimental prototype is based on a plot similar to Lugrin's "Death Kitchen" interactive narrative [2]. The overall plot consists of having the virtual character carry out everyday tasks in a kitchen, where there is great potential for dangerous tasks to take place. Unlike Lugrin's system, which is based on the emergent narrative paradigm, our prototype supports the specification of the narrative via the description of the virtual character's behaviour, using a plan-based representation of everyday activities.

The influence on the interactive storytelling engine comes from the emotional feedback expressed by the user watching the plot unfold on the screen. For instance, the virtual character may be about to carry out a dangerous task in the kitchen, such as walking over a spillage on the floor. This dangerous situation is highlighted by the expressive virtual agent, which plays the animations of the appropriate facial expression generated in real time using our Java-based software interface, which translates the information provided by the interactive storytelling engine into the appropriate APML commands using XSLT. The user's reaction can be to warn the virtual character by shouting utterances such as "Oh no!", "Ah no!" or "Oh my god!", which are interpreted by the multi-keyword spotting component as cautionary utterances. The EmoVoice component analyses the acoustic features of the utterance to recognise the emotional aspects of speech; so far, it incorporates three emotional classes: Neutral, PositiveActive, and NegativePassive. The level of arousal (PositiveActive) derived from the user's utterance exerts a strong influence on the narrative engine by means of a dynamic change in the heuristic value. The remaining planning process is then influenced by the modified heuristic, steering the subsequent selection of tasks towards a less dangerous set of situations.
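As a rough illustration of how such a heuristic change could be expressed, the sketch below maps the detected EmoVoice class and the keyword-spotting result onto a weight added to the danger cost of candidate tasks. The class names, method signatures and numeric values are assumptions made for this sketch, not the prototype's actual planner interface.

    // Illustrative only: maps the user's detected reaction onto a penalty applied
    // to dangerous candidate tasks during plan search.
    final class HeuristicFeedback {

        enum EmotionClass { NEUTRAL, POSITIVE_ACTIVE, NEGATIVE_PASSIVE } // EmoVoice classes

        /** Weight applied to the danger level of a candidate task. */
        static double dangerWeight(EmotionClass emotion, boolean cautionaryKeywordSpotted) {
            double weight = 1.0;                                         // neutral baseline
            if (cautionaryKeywordSpotted) weight += 1.0;                 // e.g. "Oh no!", "Oh my god!"
            if (emotion == EmotionClass.POSITIVE_ACTIVE) weight += 2.0;  // high arousal detected
            return weight;
        }

        /** Heuristic value of a candidate task: the stronger the user's reaction,
         *  the less attractive dangerous tasks become to the planner. */
        static double taskHeuristic(double baseCost, double dangerLevel,
                                    EmotionClass emotion, boolean keywordSpotted) {
            return baseCost + dangerLevel * dangerWeight(emotion, keywordSpotted);
        }
    }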

3 Conclusion

We have described a first proof-of-concept implementation of our system, whose purpose was mostly to validate the concept of the feedback loop and to experiment with the various constraints on the system's response times. Such a prototype would not be able to support user reactions to instantaneous narrative events (the fall of an object, the impact of a missile) unless these are somehow announced or the action is artificially slowed down. We are, however, devising mechanisms for progressive tension generation that would announce events of intense narrative significance before they are actually generated by the system. This would in turn make it possible for the user's emotional reaction to actually influence the unfolding of the story, rather than merely recording emotions and reactions a posteriori.

Acknowledgements

This work has been funded in part by the EU via the CALLAS Integrated Project (ref. 034800, http://www.callas-newmedia.eu/).

References

1. M. Cavazza, F. Charles, and S.J. Mead, Character-based Interactive Storytelling, IEEE Intelligent Systems, special issue on AI in Interactive Entertainment, pp. 17-24, 2002.
2. J.-L. Lugrin and M. Cavazza, AI-based World Behaviour for Emergent Narratives, in Proceedings of the ACM Advances in Computer Entertainment Technology, Los Angeles, USA, 2006.
3. M. Mateas and A. Stern, Natural Language Understanding in Façade: Surface-text Processing, in Proceedings of the 2nd Technologies for Interactive Digital Storytelling and Entertainment Conference (TIDSE 04), Darmstadt, Germany, 2004.
4. I. Poggi, C. Pelachaud, F. de Rosis, V. Carofiglio, and B. De Carolis, GRETA. A Believable Embodied Conversational Agent, in O. Stock and M. Zancanaro (eds), Multimodal Intelligent Information Presentation, Kluwer, 2005.
5. J. Wagner, T. Vogt, and E. André, A Systematic Comparison of Different HMM Designs for Emotion Recognition from Acted and Spontaneous Speech, ACII 2007, pp. 114-125, 2007.
6. Y.G. Cheong and R.M. Young, A Computational Model of Narrative Generation for Suspense, AAAI 2006 Computational Aesthetics Workshop, Boston, MA, USA, 2006.