Title
Learning spatial-temporal deformable networks for unconstrained face alignment and tracking in videos.
Abstract
• In our approach, we propose a differential transformer module that learns deformable offsets to adaptively augment the receptive field. In this way, we obtain shape-informative and robust feature representations compared with conventional fixed-length filters, enabling high-performance face alignment (a minimal illustrative sketch of this idea follows the abstract).
• Furthermore, we carefully develop a temporal relational reasoning module integrated into our DHGN. It infers temporal offsets to capture the meaningful ordinal relationship among frames, which reinforces the robustness of our model to temporal variations across time steps.
• Experimentally, our DHGN consistently outperforms most existing face alignment methods on the 300-W challenging set and COFW, which contain large variations. Moreover, our proposed T-DHGN achieves smooth tracking performance on the difficult face tracking dataset, i.e., 300-VW Category Three.
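The first highlight describes learning per-location deformable offsets so that the receptive field adapts to facial shape rather than staying on a fixed sampling grid. Below is a minimal sketch of that general idea using PyTorch and torchvision's deform_conv2d; the module name DeformableBlock, its layer layout, and all hyperparameters are illustrative assumptions, not the authors' DHGN implementation.

```python
# Illustrative sketch only: a spatial block that learns per-pixel sampling
# offsets, in the spirit of the "deformable offsets" described above.
# Names and settings are hypothetical, not taken from the paper's code.
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d


class DeformableBlock(nn.Module):
    """Learns 2D offsets for each kernel sampling location, so the effective
    receptive field adapts to the input instead of staying on a fixed grid."""

    def __init__(self, in_channels, out_channels, kernel_size=3):
        super().__init__()
        self.kernel_size = kernel_size
        # Offset branch: predicts (dx, dy) for every kernel tap at every pixel.
        self.offset_conv = nn.Conv2d(
            in_channels, 2 * kernel_size * kernel_size,
            kernel_size=kernel_size, padding=kernel_size // 2)
        # Convolution weights applied at the shifted sampling points.
        self.weight = nn.Parameter(
            torch.randn(out_channels, in_channels,
                        kernel_size, kernel_size) * 0.01)
        # Zero-initialized offsets start from the regular convolution grid.
        nn.init.zeros_(self.offset_conv.weight)
        nn.init.zeros_(self.offset_conv.bias)

    def forward(self, x):
        offsets = self.offset_conv(x)  # (N, 2*K*K, H, W)
        return deform_conv2d(x, offsets, self.weight,
                             padding=self.kernel_size // 2)


if __name__ == "__main__":
    feat = torch.randn(1, 64, 32, 32)   # a feature map from a face image
    block = DeformableBlock(64, 64)
    out = block(feat)
    print(out.shape)                    # torch.Size([1, 64, 32, 32])
```

Initializing the offset branch to zero keeps the block equivalent to a standard convolution at the start of training, so the sampling locations only deviate from the regular grid once the learned offsets become useful.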
Year
2020
DOI
10.1016/j.patcog.2020.107354
Venue
Pattern Recognition
Keywords
Face alignment, Face tracking, Spatial transformer, Relational reasoning, Video analysis, Biometrics
DocType
Journal
Volume
107
Issue
1
ISSN
0031-3203
Citations
2
PageRank
0.37
References
0
Authors
5
Name          | Order | Citations | PageRank
Hongyu Zhu    | 1     | 2         | 0.37
Hao Liu       | 2     | 113       | 10.67
Congcong Zhu  | 3     | 2         | 2.73
Zongyong Deng | 4     | 2         | 1.04
Xuehong Sun   | 5     | 3         | 2.74