Title: AVLaughterCycle
Abstract: The AVLaughterCycle project aims at developing an audiovisual laughing machine, able to detect and respond to users' laughs. Laughter is an important cue for reinforcing engagement in human-computer interactions. As a first step toward this goal, we have implemented a system capable of recording the laugh of a user and responding to it with a similar laugh. The output laugh is automatically selected from an audiovisual laughter database by analyzing acoustic similarities with the input laugh. It is displayed by an Embodied Conversational Agent, animated using the audio-synchronized facial movements of the subject who originally uttered the laugh. The application is fully implemented and works in real time, and a large audiovisual laughter database has been recorded as part of the project. This paper presents AVLaughterCycle, its underlying components, the freely available laughter database and the application architecture. The paper also includes evaluations of several core components of the application. Objective tests show that the similarity search engine, though simple, significantly outperforms chance at grouping laughs by speaker or type. This result can serve as a first baseline for computing acoustic similarities between laughs. A subjective evaluation has also been conducted to measure the influence of visual cues on users' judgments of similarity between laughs.
Year: 2010
DOI: 10.1007/s12193-010-0053-1
Venue: J. Multimodal User Interfaces
Keywords: Laughter, Embodied Conversational Agent, Acoustic similarity, Facial motion tracking
DocType: Journal
Volume: 4
Issue: 1
ISSN: 1783-7677
Citations: 4
PageRank: 0.46
References: 10
Authors: 9
Name                    Order  Citations  PageRank
Jérôme Urbain           1      146        12.20
Radoslaw Niewiadomski   2      414        35.95
Elisabetta Bevacqua     3      337        28.51
Thierry Dutoit          4      1006       123.84
Alexis Moinet           5      103        13.48
Catherine Pelachaud     6      2706       279.88
Benjamin Picart         7      67         7.86
Joëlle Tilmanne         8      107        12.24
Johannes Wagner         9      654        49.55