Title: Lifelike Gesture Synthesis and Timing for Conversational Agents
Abstract
Synchronizing synthetic gestures with speech output is a central goal for embodied conversational agents, which have become a new paradigm for the study of gesture and for human-computer interfaces. In this context, this contribution presents an operational model that renders lifelike gesture animations of an articulated figure in real time from representations of spatiotemporal gesture knowledge. Based on findings on human gesture production, the model provides means for motion representation, planning, and control to drive the kinematic skeleton of a figure, which comprises 43 degrees of freedom (DOF) in 29 joints for the main body and 20 DOF for each hand. The model is conceived to enable cross-modal synchrony, i.e., the coordination of gestures with the speech signal generated by a text-to-speech system.
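The abstract's figures invite a concrete illustration. Below is a minimal Python sketch, not the authors' implementation: a toy skeleton whose degree-of-freedom budget matches the stated totals (43 DOF in 29 joints for the main body, 20 DOF per hand), plus a toy scheduler that back-times a gesture's preparation phase so the stroke starts at a word onset reported by a text-to-speech system. All names (Joint, make_body, make_hand, schedule_stroke) and the per-joint DOF splits are illustrative assumptions; the paper specifies only the totals and the synchronization goal.

```python
from dataclasses import dataclass


@dataclass
class Joint:
    """A joint of the articulated figure with its rotational DOF count."""
    name: str
    dof: int  # number of independent rotation axes (1-3)


def make_body() -> list[Joint]:
    """Illustrative main-body layout: 29 joints totalling 43 DOF.

    The per-joint split below is an assumption; the paper gives only the totals.
    """
    joints = [Joint(n, 3) for n in
              ("l_shoulder", "r_shoulder", "l_hip", "r_hip", "neck")]   # 5 x 3 DOF
    joints += [Joint(n, 2) for n in
               ("l_wrist", "r_wrist", "l_ankle", "r_ankle")]            # 4 x 2 DOF
    joints += [Joint(f"spine_or_limb_{i}", 1) for i in range(20)]       # 20 x 1 DOF
    return joints


def make_hand(side: str) -> list[Joint]:
    """Illustrative hand layout: 5 fingers x 4 DOF = 20 DOF per hand."""
    joints = []
    for finger in ("thumb", "index", "middle", "ring", "pinky"):
        joints.append(Joint(f"{side}_{finger}_mcp", 2))  # flexion + abduction
        joints.append(Joint(f"{side}_{finger}_pip", 1))  # flexion only
        joints.append(Joint(f"{side}_{finger}_dip", 1))  # flexion only
    return joints


def schedule_stroke(word_onset_s: float, prep_s: float,
                    stroke_s: float) -> dict[str, float]:
    """Back-time the preparation phase so the gesture stroke begins exactly
    at the affiliated word's onset, as reported by the TTS timing signal."""
    return {
        "preparation_start": word_onset_s - prep_s,
        "stroke_start": word_onset_s,
        "retraction_start": word_onset_s + stroke_s,
    }


if __name__ == "__main__":
    body = make_body()
    hands = make_hand("left") + make_hand("right")
    print(len(body), "body joints,", sum(j.dof for j in body), "DOF")  # 29 joints, 43 DOF
    print(sum(j.dof for j in hands) // 2, "DOF per hand")              # 20 DOF per hand
    print(schedule_stroke(word_onset_s=1.32, prep_s=0.40, stroke_s=0.25))
```

Back-timing the preparation from the affiliated word's onset is one simple way to realize the cross-modal synchrony the abstract describes: it is the expressive stroke, rather than the gesture's start, that gets aligned with speech.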
Year: 2001
DOI: 10.1007/3-540-47873-6_13
Venue: Gesture Workshop
Keywords: kinematic skeleton, human-computer interface, human gesture, spatiotemporal gesture knowledge, cross-modal synchrony, conversational agent, articulated figure, lifelike gesture animation, lifelike gesture synthesis, operational model, synthetic gesture
Field: Speech synthesis, Motion control, Synchronization, Kinematics, Computer science, Gesture, Gesture recognition, Embodied cognition, Speech recognition, Gesture synthesis
DocType: Conference
ISBN: 3-540-43678-2
Citations: 9
PageRank: 1.00
References: 10
Authors: 2
Name             Order  Citations  PageRank
Ipke Wachsmuth   1      1053       121.65
Stefan Kopp      2      701        58.13