Title
Learning Hierarchical Cross-Modal Association for Co-Speech Gesture Generation
Abstract
Generating speech-consistent body and gesture movements is a long-standing problem in virtual avatar creation. Previous studies often synthesize pose movements in a holistic manner, where the poses of all joints are generated simultaneously. Such a straightforward pipeline fails to generate fine-grained co-speech gestures. One observation is that the hierarchical semantics in speech and the hierarchical structures of human gestures can be naturally described at multiple granularities and associated together. To fully utilize the rich connections between speech audio and human gestures, we propose a novel framework named Hierarchical Audio-to-Gesture (HA2G) for co-speech gesture generation. In HA2G, a Hierarchical Audio Learner extracts audio representations across semantic granularities. A Hierarchical Pose Inferer subsequently renders the entire human pose gradually in a hierarchical manner. To enhance the quality of synthesized gestures, we develop a contrastive learning strategy based on audio-text alignment for better audio representations. Extensive experiments and human evaluation demonstrate that the proposed method renders realistic co-speech gestures and outperforms previous methods by a clear margin. Project page: https://alvinliu0.github.io/projects/HA2G.
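The contrastive audio-text alignment mentioned in the abstract is not detailed in this record. As a rough illustration only (not the paper's actual formulation), a symmetric InfoNCE-style loss over batch-aligned audio and text embeddings could look like the sketch below; the function name audio_text_contrastive_loss and the temperature value are hypothetical.

    import torch
    import torch.nn.functional as F

    def audio_text_contrastive_loss(audio_emb, text_emb, temperature=0.07):
        # audio_emb, text_emb: (batch, dim) embeddings of paired audio clips and transcripts.
        # L2-normalize so that dot products become cosine similarities.
        audio_emb = F.normalize(audio_emb, dim=-1)
        text_emb = F.normalize(text_emb, dim=-1)
        # Similarity logits between every audio clip and every text snippet in the batch.
        logits = audio_emb @ text_emb.t() / temperature
        # Aligned audio-text pairs sit on the diagonal of the similarity matrix.
        targets = torch.arange(audio_emb.size(0), device=audio_emb.device)
        # Symmetric InfoNCE: pull matched pairs together, push mismatched pairs apart.
        loss_a2t = F.cross_entropy(logits, targets)
        loss_t2a = F.cross_entropy(logits.t(), targets)
        return 0.5 * (loss_a2t + loss_t2a)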
Year
2022
DOI
10.1109/CVPR52688.2022.01021
Venue
IEEE Conference on Computer Vision and Pattern Recognition
Keywords
Vision + X, Face and gestures
DocType
Conference
Volume
2022
Issue
1
Citations
0
PageRank
0.34
References
0
Authors
10
Name | Order | Citations | PageRank
Xian Liu | 1 | 0 | 0.68
Qianyi Wu | 2 | 0 | 1.69
Hang Zhou | 3 | 14 | 5.27
Yinghao Xu | 4 | 12 | 2.29
Rui Qian | 5 | 0 | 2.37
Xinyi Lin | 6 | 2 | 2.46
Xiaowei Zhou | 7 | 491 | 28.91
Wenyan Wu | 8 | 19 | 7.34
Bo Dai | 9 | 43 | 10.87
Bolei Zhou | 10 | 1529 | 66.96