Abstract |
---|
Virtual conversational agents should combine speech with non-verbal modalities to produce intelligible and believable utterances. However, the automatic synthesis of co-verbal gestures still faces several problems: lack of naturalness in procedurally generated animations, lack of flexibility in pre-defined movements, and synchronization with speech. In this paper we focus on generating complex multimodal utterances, including gesture and speech, from XML-based descriptions of their overt form. We describe a coordination model that reproduces coarticulation and transition effects in both modalities. In particular, we present an efficient kinematic approach to creating gesture animations from shape specifications, which provides fine adaptation to the temporal constraints imposed by cross-modal synchrony. |
Year | DOI | Venue |
---|---|---|
2002 | 10.1109/CA.2002.1017547 | Geneva |
Keywords | DocType | ISSN |
---|---|---|
computer animation, software agents, synchronisation, user interfaces, virtual reality, XML-based descriptions, co-verbal gesture, kinematics, model-based animation, multimodal utterances, shape specifications, speech synchronization, speech-gesture coordination, virtual conversational agents, automatic control, animation, artificial intelligence, assembly, conversational agent, speech synthesis | Conference | 1087-4844 |

ISBN | Citations | PageRank |
---|---|---|
0-7695-1594-0 | 29 | 3.38 |

References | Authors |
---|---|
12 | 2 |
Name | Order | Citations | PageRank |
---|---|---|---|
Stefan Kopp | 1 | 701 | 58.13 |
Ipke Wachsmuth | 2 | 1053 | 121.65 |