Abstract |
---|
How to learn a universal facial representation that boosts all face analysis tasks? This paper takes one step toward this goal. We study the transfer performance of pre-trained models on face analysis tasks and introduce a framework, called FaRL, for general facial representation learning. On one hand, the framework uses a contrastive loss to learn high-level semantic meaning from image-text pairs. On the other hand, we propose to simultaneously exploit low-level information to further enhance the face representation by adding masked image modeling. We perform pre-training on LAION-FACE, a dataset containing a large number of face image-text pairs, and evaluate the representation capability on multiple downstream tasks. We show that FaRL achieves better transfer performance compared with previous pre-trained models. We also verify its superiority in the low-data regime. More importantly, our model surpasses state-of-the-art methods on face analysis tasks including face parsing and face alignment. |
Year | DOI | Venue |
---|---|---|
2022 | 10.1109/CVPR52688.2022.01814 | IEEE Conference on Computer Vision and Pattern Recognition |

Keywords | DocType | Volume |
---|---|---|
Face and gestures, Representation learning, Self- & semi- & meta-learning, Transfer/low-shot/long-tail learning, Vision + language | Conference | 2022 |

Issue | Citations | PageRank |
---|---|---|
1 | 0 | 0.34 |

References | Authors |
---|---|
0 | 10 |
Name | Order | Citations | PageRank |
---|---|---|---|
Yinglin Zheng | 1 | 0 | 0.34 |
Hao Yang | 2 | 33 | 4.92 |
Ting Zhang | 3 | 266 | 10.10 |
Jianmin Bao | 4 | 22 | 5.76 |
Dongdong Chen | 5 | 52 | 19.10 |
Yangyu Huang | 6 | 0 | 0.34 |
Lu Yuan | 7 | 801 | 48.29 |
Dong Chen | 8 | 681 | 32.51 |
Ming Zeng | 9 | 0 | 0.68 |
Fang Wen | 10 | 2077 | 86.88 |