Title
MoCaNet: Motion Retargeting In-the-Wild via Canonicalization Networks
Abstract
We present a novel framework that brings the 3D motion retargeting task from controlled environments to in-the-wild scenarios. In particular, our method retargets body motion from a character in a 2D monocular video to a 3D character without any motion capture system or 3D reconstruction procedure. It is designed to leverage massive online videos for unsupervised training, requiring neither 3D annotations nor motion-body pairing information. The proposed method is built upon two novel canonicalization operations: structure canonicalization and view canonicalization. Trained with these canonicalization operations and the derived regularizations, our method learns to factorize a skeleton sequence into three independent semantic subspaces, i.e., motion, structure, and view angle. The disentangled representation enables motion retargeting from 2D to 3D with high precision. Our method achieves superior performance on motion transfer benchmarks with large body variations and challenging actions. Notably, the canonicalized skeleton sequence can serve as a disentangled and interpretable representation of human motion that benefits action analysis and motion retrieval.
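The abstract's view canonicalization can be illustrated with a simplified geometric sketch: rotating a 3D skeleton about the vertical axis so its view angle is removed. This is not the paper's learned operation, only a minimal hand-written analogue; the joint indexing (`lhip`, `rhip`) is a hypothetical assumption.

```python
import numpy as np

def view_canonicalize(skel, lhip=0, rhip=1):
    """Rotate a 3D skeleton (J x 3 array, y-up) about the vertical axis
    so the right-to-left hip direction lies along the +x axis.

    A simplified illustration of removing the view-angle factor; the
    paper learns this canonicalization rather than computing it in
    closed form, and the hip joint indices here are hypothetical."""
    d = skel[lhip] - skel[rhip]
    theta = np.arctan2(d[2], d[0])          # yaw of the hip line in the x-z plane
    c, s = np.cos(theta), np.sin(theta)
    # Rotation about the y-axis that maps the hip direction onto +x.
    R = np.array([[  c, 0.0,   s],
                  [0.0, 1.0, 0.0],
                  [ -s, 0.0,   c]])
    return skel @ R.T                        # apply R to every joint (row vectors)
```

After this normalization, skeletons viewed from different camera angles map to a shared canonical frame, which is what lets the remaining variation be attributed to motion and body structure.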
Year: 2022
Venue: AAAI Conference on Artificial Intelligence
Keywords: Computer Vision (CV), Domain(s) Of Application (APP), Humans And AI (HAI)
DocType: Conference
Citations: 0
PageRank: 0.34
References: 0
Authors: 6
Name               Order   Citations   PageRank
Wentao Zhu         1       1           1.02
Zhuoqian Yang      2       3           2.41
Ziang Di           3       0           0.34
Wenyan Wu          4       19          7.34
Yizhou Wang        5       1162        86.04
Chen Change Loy    6       44841       78.56