Title: A multiparty multimodal architecture for realtime turntaking
Abstract
Over the years, many dialogue systems have been built that address some subset of the complex factors shaping the behavior of participants in face-to-face conversation. The Ymir Turntaking Model (YTTM) is a broad computational model of conversational skills that has been under development for over a decade, continuously growing in the number of factors it addresses. In past work we have shown how it handles realtime dialogue, communicative gesture, perception of turntaking signals (e.g. prosody, gaze, manual gesture), dialogue planning, learning of multimodal turn signals, and dynamic adaptation to human speaking style. The YTTM's architectural principles prescribe a finer architectural granularity than most other models and allow non-destructive additive expansion. In this paper we show how the YTTM accommodates multi-party dialogue. The extension has been implemented in a virtual environment; we present data for up to 12 simulated participants engaging in realtime cooperative dialogue. The system includes dynamically adjustable parameters for impatience, willingness to give the turn, and eagerness to speak.
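The abstract names three dynamically adjustable per-participant parameters: impatience, willingness to give the turn, and eagerness to speak. The following is a minimal sketch of how such parameters could drive multi-party turn allocation; the class, function names, thresholds, and weighted-urge rule are all illustrative assumptions, not the paper's actual YTTM formulation.

```python
from dataclasses import dataclass

@dataclass
class TurntakingParams:
    # All fields are in [0, 1]; names follow the abstract, values are illustrative.
    impatience: float               # grows while a participant waits for the turn
    willingness_to_give_turn: float # how readily the current speaker yields
    eagerness_to_speak: float       # baseline desire to take the turn

def wants_turn(p: TurntakingParams, threshold: float = 0.5) -> bool:
    """Hypothetical urge score: eagerness plus accumulated impatience."""
    urge = 0.7 * p.eagerness_to_speak + 0.3 * p.impatience
    return urge > threshold

def next_speaker(participants: dict[str, TurntakingParams], current: str) -> str:
    """Give the turn to the most urgent bidder if the current speaker is
    willing to yield; otherwise the current speaker keeps the turn."""
    if participants[current].willingness_to_give_turn < 0.5:
        return current
    bidders = {name: p for name, p in participants.items()
               if name != current and wants_turn(p)}
    if not bidders:
        return current
    return max(bidders, key=lambda n: 0.7 * bidders[n].eagerness_to_speak
                                      + 0.3 * bidders[n].impatience)
```

In a simulation with up to 12 participants, each agent's parameters would be updated every perception cycle (e.g. impatience rising while waiting), and `next_speaker` re-evaluated as turn signals are perceived.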
Year: 2010
DOI: 10.1007/978-3-642-15892-6_37
Venue: IVA
Keywords: multi-party dialogue, multimodal turn signal, realtime cooperative dialogue, realtime dialogue, realtime turntaking, smaller architectural granularity, communicative gesture, architectural principle, manual gesture, dialogue system, dialogue planning, multiparty multimodal architecture, computer model, architecture, multimodal, virtual environment
Field: Prosody, Architecture, Virtual machine, Conversation, Gaze, Computer science, Gesture, Multimedia, Speaking style, Perception
DocType: Conference
Volume: 6356
ISSN: 0302-9743
ISBN: 3-642-15891-9
Citations: 8
PageRank: 0.57
References: 6
Authors: 4

Name                    | Order | Citations | PageRank
Kristinn Thorisson      | 1     | 658       | 94.55
Olafur Gislason         | 2     | 8         | 0.57
Gudny Ragna Jonsdottir  | 3     | 64        | 6.15
Hrafn Th. Thorisson     | 4     | 8         | 0.57