Title
Cross-Modal Deep Face Normals With Deactivable Skip Connections
Abstract
We present an approach for estimating surface normals from in-the-wild color images of faces. While data-driven strategies have been proposed for single face images, limited available ground truth data makes this problem difficult. To alleviate this issue, we propose a method that can leverage all available image and normal data, whether paired or not, thanks to a novel cross-modal learning architecture. In particular, we enable additional training with single modality data, either color or normal, by using two encoder-decoder networks with a shared latent space. The proposed architecture also enables face details to be transferred between the image and normal domains, given paired data, through skip connections between the image encoder and normal decoder. Core to our approach is a novel module that we call deactivable skip connections, which allows integrating both the auto-encoded and image-to-normal branches within the same architecture that can be trained end-to-end. This allows learning of a rich latent space that can accurately capture the normal information. We compare against state-of-the-art methods and show that our approach can achieve significant improvements, both quantitative and qualitative, with natural face images.
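The key mechanism described above, a skip connection that is active for the image-to-normal branch but switched off for the auto-encoding branch, can be illustrated with a toy NumPy sketch. This is not the authors' implementation; the function name, shapes, and the zero-substitution strategy are illustrative assumptions about how one decoder can serve both branches with a single set of weights.

```python
import numpy as np

def deactivable_skip(decoder_features, encoder_features, active):
    """Hypothetical sketch of a deactivable skip connection.

    When `active` is True (image-to-normal branch, paired data), encoder
    features are concatenated into the decoder input, as in a standard
    U-Net skip connection. When False (normal auto-encoding branch),
    zeros of the same shape are substituted, so the same decoder weights
    handle both branches.
    """
    skip = encoder_features if active else np.zeros_like(encoder_features)
    return np.concatenate([decoder_features, skip], axis=-1)

# Toy feature maps: (height, width, channels).
dec = np.ones((4, 4, 8))
enc = np.full((4, 4, 8), 2.0)

on = deactivable_skip(dec, enc, active=True)    # encoder features flow through
off = deactivable_skip(dec, enc, active=False)  # skip path zeroed out
```

Keeping the concatenation (with zeros) even when the skip is off means the decoder's input dimensionality never changes, which is what lets both branches be trained end-to-end within one architecture.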
Year: 2020
DOI: 10.1109/CVPR42600.2020.00503
Venue: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DocType: Conference
ISSN: 1063-6919
Citations: 1
PageRank: 0.35
References: 0
Authors: 4
Name                         Order  Citations  PageRank
Victoria Fernández Abrevaya  1      3          1.73
Adnane Boukhayma             2      21         2.60
Philip H. S. Torr            3      9140       636.18
Edmond Boyer                 4      2758       130.84