Title
Composite conversation gesture synthesis using layered planning
Abstract
Automatic generation of gestures linked to conversation is needed in CG animation and game production. Conventional methods do not consider continuity or matching at the motion level when multiple gestures are combined, so the resulting motion gives a machinelike impression, such as a return to the upright position each time a gesture is performed. To address this problem, this paper proposes a composite gesture generation method that considers motion continuity and gesture linkage and generates more natural conversational motions. As the first step of the proposed method, two networks representing the continuity and linkage of gestures are constructed. The combinations of gestures that are likely for a given conversational sentence are then expanded as a tree, and a plan for the series of conversational motions is generated by evaluating the conversational content and gesture continuity. With the proposed method, composite gestures can be generated with allowance for continuity of motion. Consequently, more humanlike conversational motions can be generated while preserving gestures that are important in conveying the semantics of the conversation, such as nodding while pointing, or scratching the head with one hand while placing the other hand on the hip. © 2007 Wiley Periodicals, Inc. Syst Comp Jpn, 38(10): 58–68, 2007; Published online in Wiley InterScience. DOI 10.1002/scj.20532
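To illustrate the planning idea summarized in the abstract, the sketch below shows one possible way to expand candidate gesture combinations as a tree and score each leaf by a weighted sum of semantic relevance and motion-level continuity. This is a minimal sketch under assumed data (the gesture candidates, continuity scores, and weights are hypothetical), not the authors' implementation.

```python
# Minimal sketch: tree expansion of gesture combinations scored by
# semantic relevance and motion continuity (hypothetical data throughout).
from itertools import product

# Assumed per-phrase candidate gestures with semantic-relevance scores.
CANDIDATES = {
    "phrase1": [("point", 0.9), ("nod", 0.6)],
    "phrase2": [("nod", 0.8), ("upright", 0.3)],
    "phrase3": [("scratch_head", 0.7), ("hands_on_hip", 0.5)],
}

# Assumed continuity scores between consecutive gestures (higher = smoother).
CONTINUITY = {
    ("point", "nod"): 0.9, ("point", "upright"): 0.4,
    ("nod", "scratch_head"): 0.7, ("nod", "hands_on_hip"): 0.6,
    ("upright", "scratch_head"): 0.5, ("upright", "hands_on_hip"): 0.5,
}

def plan(phrases, w_semantic=0.6, w_continuity=0.4):
    """Expand all gesture combinations as a tree and return the
    highest-scoring sequence (a beam search would scale to longer inputs)."""
    best_seq, best_score = None, float("-inf")
    option_lists = [CANDIDATES[p] for p in phrases]
    for path in product(*option_lists):          # every leaf of the tree
        semantic = sum(score for _, score in path)
        continuity = sum(
            CONTINUITY.get((a[0], b[0]), 0.0)    # 0.0 for unknown transitions
            for a, b in zip(path, path[1:])
        )
        score = w_semantic * semantic + w_continuity * continuity
        if score > best_score:
            best_seq, best_score = [g for g, _ in path], score
    return best_seq, best_score

if __name__ == "__main__":
    seq, score = plan(["phrase1", "phrase2", "phrase3"])
    print(seq, round(score, 2))
```

In this toy setting the planner keeps the semantically important gestures while preferring transitions with high continuity, instead of inserting a return to the upright pose between every pair of gestures.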
Year
2007
DOI
10.1002/scj.v38:10
Venue
Systems and Computers in Japan
Keywords
linkage
Field
Conversation, Impression, Computer science, Gesture, Speech recognition, Computer animation, Gesture synthesis, Sentence, Semantics
DocType
Journal
Volume
38
Issue
10
Citations
2
PageRank
0.43
References
8
Authors
2
Name             Order  Citations  PageRank
Atsushi Nakano   1      2          0.43
Junichi Hoshino  2      18147.34 (citations and PageRank run together in the source)