Title
Contrastive Regression for Domain Adaptation on Gaze Estimation
Abstract
Appearance-based gaze estimation leverages deep neural networks to regress the gaze direction from monocular images and achieves impressive performance. However, its success depends on expensive and cumbersome annotation capture. When precise annotations are lacking, the large domain gap hinders the performance of trained models on new domains. In this paper, we propose a novel gaze adaptation approach, namely Contrastive Regression Gaze Adaptation (CRGA), for generalizing gaze estimation to the target domain in an unsupervised manner. CRGA leverages a Contrastive Domain Generalization (CDG) module to learn a stable representation from the source domain and a Contrastive Self-training Adaptation (CSA) module to learn from pseudo labels on the target domain. The core of both CDG and CSA is the Contrastive Regression (CR) loss, a novel contrastive loss for regression that pulls together features with similar gaze directions while pushing apart features with dissimilar gaze directions. Experimentally, we choose ETH-XGaze and Gaze360 as the source domains and test domain generalization and adaptation performance on MPIIGaze, RT-GENE, GazeCapture, and EyeDiap, respectively. The results demonstrate that CRGA achieves remarkable performance improvement over the baseline models and also outperforms state-of-the-art domain adaptation approaches on gaze adaptation tasks.
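The abstract describes the CR loss as weighting feature pairs by how close their gaze directions are. A minimal NumPy sketch of that idea follows; this is not the authors' implementation, and the exponential label-similarity weighting, the temperature value, and the function name are all assumptions made for illustration:

```python
import numpy as np

def contrastive_regression_loss(features, gazes, temperature=0.1):
    """Sketch of a label-aware contrastive loss for regression.

    Pairs whose gaze directions are similar act as soft positives
    (large weight), dissimilar pairs as soft negatives (small weight).
    """
    # L2-normalize so dot products become cosine similarities.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    g = gazes / np.linalg.norm(gazes, axis=1, keepdims=True)
    n = len(f)
    sim = f @ f.T / temperature   # feature-similarity logits
    label_sim = g @ g.T           # gaze-direction similarity in [-1, 1]

    loss = 0.0
    for i in range(n):
        mask = np.arange(n) != i          # exclude the anchor itself
        logits = sim[i][mask]
        # Soft positive weights: closer gaze direction -> larger weight
        # (exponential weighting is an assumption, not the paper's choice).
        w = np.exp(label_sim[i][mask])
        w = w / w.sum()
        # Log-softmax over the remaining samples, weighted by label closeness.
        log_prob = logits - np.log(np.sum(np.exp(logits)))
        loss += -np.sum(w * log_prob)
    return loss / n
```

Minimizing this quantity drives features with close gaze labels toward high mutual similarity, which is the qualitative behavior the CR loss targets.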
Year
2022
DOI
10.1109/CVPR52688.2022.01877
Venue
IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Keywords
Representation learning, Face and gestures, Self-& semi-& meta-, Vision + X
DocType
Conference
Volume
2022
Issue
1
Citations
0
PageRank
0.34
References
0
Authors
8
Name            Order  Citations  PageRank
Yaoming Wang    1      0          0.34
Yangzhou Jiang  2      0          0.34
Jin Li          3      41         4.28
Bingbing Ni     4      14218      2.90
Wenrui Dai      5      642        5.01
Chenglin Li     6      1161       7.93
Hongkai Xiong   7      5128       2.84
Teng Li         8      3332       1.62