Title
Attention-based Multimodal Feature Fusion for Dance Motion Generation
Abstract
Recent advances in deep learning have enabled the extraction of high-level skeletal features from raw images and video sequences, paving the way for new possibilities in a variety of artificial intelligence tasks, including automatically synthesized human motion sequences. In this paper we present a system that combines 2D skeletal data and musical information to generate skeletal dancing sequences. The architecture is implemented solely with convolutional operations and trained with a teacher-forcing supervised learning approach, while the synthesis of novel motion sequences follows an autoregressive process. Additionally, we employ an attention mechanism to fuse the latent representations of past music and motion information, conditioning the generation process on both modalities. To assess the system's performance, we generated 900 sequences and evaluated their perceived realism, motion diversity and multimodality using various diversity metrics.
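The abstract describes fusing latent music and motion representations with an attention mechanism to condition autoregressive generation. As an illustrative sketch only (the paper's exact fusion architecture is not given in the abstract, and all function names, dimensions, and the concatenation step below are assumptions), scaled dot-product attention where past motion frames query the music features could look like this:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(motion_feats, music_feats):
    """Fuse motion and music latents with scaled dot-product attention.

    motion_feats: (T_motion, d) past motion latents, used as queries.
    music_feats:  (T_music, d)  music latents, used as keys and values.
    Returns each motion frame concatenated with its attended music context.
    """
    d = motion_feats.shape[-1]
    scores = motion_feats @ music_feats.T / np.sqrt(d)   # (T_motion, T_music)
    weights = softmax(scores, axis=-1)                   # rows sum to 1
    context = weights @ music_feats                      # (T_motion, d)
    return np.concatenate([motion_feats, context], axis=-1)

# Toy latents: 8 past motion frames, 16 music frames, 32-dim features.
rng = np.random.default_rng(0)
fused = attention_fuse(rng.normal(size=(8, 32)),
                       rng.normal(size=(16, 32)))
print(fused.shape)  # (8, 64)
```

In an autoregressive loop, the fused features for the most recent frames would feed the convolutional decoder that predicts the next skeletal pose.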
Year: 2021
DOI: 10.1145/3462244.3479961
Venue: Multimodal Interfaces and Machine Learning for Multimodal Interaction
DocType: Conference
Citations: 0
PageRank: 0.34
References: 0
Authors: 4
Name                 Order  Citations  PageRank
Kosmas Kritsis       1      0          0.34
Aggelos Gkiokas      2      35         4.64
Aggelos Pikrakis     3      35         3.01
Vassilios Katsouros  4      73         10.63