Title
GestureGAN for Hand Gesture-to-Gesture Translation in the Wild
Abstract
Hand gesture-to-gesture translation in the wild is a challenging task, since hand gestures can have arbitrary poses, sizes, locations, and self-occlusions. The task therefore requires a high-level understanding of the mapping between the input source gesture and the output target gesture. To tackle this problem, we propose a novel hand Gesture Generative Adversarial Network (GestureGAN). GestureGAN consists of a single generator G and a discriminator D, which take as input a conditional hand image and a target hand skeleton image. GestureGAN exploits the hand skeleton information explicitly and learns the gesture-to-gesture mapping through two novel losses: the color loss and the cycle-consistency loss. The proposed color loss addresses the "channel pollution" issue that arises when back-propagating gradients. In addition, we present the Fréchet ResNet Distance (FRD) to evaluate the quality of generated images. Extensive experiments on two widely used benchmark datasets demonstrate that GestureGAN achieves state-of-the-art performance on the unconstrained hand gesture-to-gesture translation task. Moreover, the generated images are high-quality and photo-realistic, allowing them to be used for data augmentation to improve the performance of a hand gesture classifier. Our model and code are available at https://github.com/Ha0Tang/GestureGAN.
Year
2018
DOI
10.1145/3240508.3240704
Venue
MM '18: ACM Multimedia Conference, Seoul, Republic of Korea, October 2018
Keywords
Generative Adversarial Networks, Image Translation, Hand Gesture
DocType
Conference
Volume
abs/1808.04859
ISBN
978-1-4503-5665-7
Citations
4
PageRank
0.40
References
37
Authors
5
Name        Order  Citations  PageRank
Hao Tang    1      338        34.83
Wei Wang    2      131        14.16
Dan Xu      3      342        16.39
Yan Yan     4      691        31.13
Nicu Sebe   5      7013       403.03