Title
General Facial Representation Learning in a Visual-Linguistic Manner
Abstract
How can we learn a universal facial representation that boosts all face analysis tasks? This paper takes one step toward this goal. We study the transfer performance of pre-trained models on face analysis tasks and introduce FaRL, a framework for general facial representation learning in a visual-linguistic manner. On one hand, the framework uses a contrastive loss to learn high-level semantic meaning from image-text pairs. On the other hand, we propose to simultaneously exploit low-level information to further enhance the face representation by adding a masked image modeling objective. We perform pre-training on LAION-FACE, a dataset containing a large number of face image-text pairs, and evaluate the representation capability on multiple downstream tasks. We show that FaRL achieves better transfer performance than previous pre-trained models, and we verify its superiority in the low-data regime. More importantly, our model surpasses state-of-the-art methods on face analysis tasks including face parsing and face alignment.
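The two pre-training objectives named in the abstract, image-text contrastive learning and masked image modeling, can be sketched roughly as below. This is a minimal illustrative sketch and not the authors' code; the function names, tensor shapes, masking ratio, and visual-token vocabulary size are all assumptions introduced here for clarity.

import torch
import torch.nn.functional as F

def image_text_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    # Symmetric InfoNCE over a batch of paired image/text embeddings (CLIP-style).
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

def masked_image_modeling_loss(pred_logits, target_ids, mask):
    # Cross-entropy over the discrete visual tokens of masked patches only.
    return F.cross_entropy(pred_logits[mask], target_ids[mask])

# Toy usage with random tensors, just to show the shapes involved (hypothetical sizes).
B, D, P, V = 8, 512, 196, 8192   # batch, embedding dim, patches per image, visual vocab
total_loss = (image_text_contrastive_loss(torch.randn(B, D), torch.randn(B, D))
              + masked_image_modeling_loss(torch.randn(B, P, V),
                                           torch.randint(0, V, (B, P)),
                                           torch.rand(B, P) < 0.4))

In practice the two losses would be computed from shared image-encoder features and summed (possibly with a weighting term) for each pre-training batch.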
Year
2022
DOI
10.1109/CVPR52688.2022.01814
Venue
IEEE Conference on Computer Vision and Pattern Recognition
Keywords
Face and gestures, Representation learning, Self-& semi-& meta- Transfer/low-shot/long-tail learning, Vision + language
DocType
Conference
Volume
2022
Issue
1
Citations
0
PageRank
0.34
References
0
Authors
10
Name | Order | Citations | PageRank
Yinglin Zheng | 1 | 0 | 0.34
Hao Yang | 2 | 33 | 4.92
Ting Zhang | 3 | 266 | 10.10
Jianmin Bao | 4 | 22 | 5.76
Dongdong Chen | 5 | 521 | 9.10
Yangyu Huang | 6 | 0 | 0.34
Lu Yuan | 7 | 801 | 48.29
Dong Chen | 8 | 681 | 32.51
Ming Zeng | 9 | 0 | 0.68
Fang Wen | 10 | 2077 | 86.88