Title
Moving Fast And Slow: Analysis Of Representations And Post-Processing In Speech-Driven Automatic Gesture Generation
Abstract
This paper presents a novel framework for speech-driven gesture production, applicable to virtual agents to enhance human-computer interaction. Specifically, we extend recent deep-learning-based, data-driven methods for speech-driven gesture generation by incorporating representation learning. Our model takes speech as input and produces gestures as output, in the form of a sequence of 3D coordinates. We analyze different representations for the input (speech) and the output (motion) of the network through both objective and subjective evaluations. We also analyze the importance of smoothing the produced motion. Our results indicated that the proposed method improved on our baseline in terms of objective measures. For example, it better captured the motion dynamics and better matched the motion-speed distribution. Moreover, we performed user studies on two different datasets. The studies confirmed that our proposed method is perceived as more natural than the baseline, although the difference in the studies was eliminated by appropriate post-processing: hip-centering and smoothing. We conclude that it is important to take both motion representation and post-processing into account when designing an automatic gesture-production method.
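The post-processing highlighted in the abstract (hip-centering and smoothing of the generated 3D joint sequences) can be illustrated with a minimal sketch. The array layout, the hip joint index, the filter choice (Savitzky-Golay), and the window/order settings below are assumptions made for illustration only, not the authors' exact implementation.

```python
# Illustrative post-processing sketch (assumed details, not the paper's code):
# hip-center and temporally smooth a generated motion sequence of 3D joints.
import numpy as np
from scipy.signal import savgol_filter

def postprocess_motion(motion, hip_index=0, window=9, polyorder=3):
    """Hip-center and smooth a pose sequence.

    motion: array of shape (T, J, 3) -- T frames, J joints, 3D coordinates.
    hip_index: joint treated as the hip/root (assumed to be index 0 here).
    """
    motion = np.asarray(motion, dtype=float)

    # 1) Hip-centering: express every joint relative to the hip in each frame,
    #    removing global translation of the whole body.
    centered = motion - motion[:, hip_index:hip_index + 1, :]

    # 2) Smoothing: filter along the time axis to suppress frame-to-frame
    #    jitter while preserving the overall motion dynamics.
    smoothed = savgol_filter(centered, window_length=window,
                             polyorder=polyorder, axis=0)
    return smoothed

# Usage example with a hypothetical 15-joint skeleton over 100 frames.
raw_motion = np.random.randn(100, 15, 3)
clean_motion = postprocess_motion(raw_motion)
print(clean_motion.shape)  # (100, 15, 3)
```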
Year: 2021
DOI: 10.1080/10447318.2021.1883883
Venue: INTERNATIONAL JOURNAL OF HUMAN-COMPUTER INTERACTION
DocType: Journal
Volume: 37
Issue: 14
Journal: INTERNATIONAL JOURNAL OF HUMAN-COMPUTER INTERACTION
ISSN: 1044-7318
Citations: 0
PageRank: 0.34
References: 0
Authors: 5
Name                Order  Citations  PageRank
Taras Kucherenko    1      2          1.71
Dai Hasegawa        2      26         7.62
Naoshi Kaneko       3      0          1.69
Gustav Eje Henter   4      37         11.40
Hedvig Kjellström   5      491        42.24