Title
GaitSet: Cross-View Gait Recognition Through Utilizing Gait As a Deep Set
Abstract
Gait is a unique biometric feature that can be recognized at a distance; thus, it has broad applications in crime prevention, forensic identification, and social security. To portray a gait, existing gait recognition methods utilize either a gait template, which makes it difficult to preserve temporal information, or a gait sequence, which maintains unnecessary sequential constraints and thus loses the flexibility of gait recognition. In this paper, we present a novel perspective that utilizes gait as a deep set, meaning that a set of gait frames is integrated by a global-local fused deep network, inspired by the way our left and right hemispheres process information, to learn features that can be used for identification. Based on this deep set perspective, our method is immune to frame permutations and can naturally integrate frames from different videos that have been acquired under different scenarios, such as diverse viewing angles, different clothes, or different item-carrying conditions. Experiments show that, under normal walking conditions, our single-model method achieves an average rank-1 accuracy of 96.1 percent on the CASIA-B gait dataset and an accuracy of 87.9 percent on the OU-MVLP gait dataset. Our model also exhibits a high level of robustness under various complex scenarios: it achieves accuracies of 90.8 and 70.3 percent on CASIA-B under bag-carrying and coat-wearing walking conditions, respectively, significantly outperforming the best existing methods.
Moreover, the proposed method maintains a satisfactory accuracy even when only small numbers of frames are available in the test samples; for example, it achieves 85.0 percent on CASIA-B even when using only 7 frames. The source code has been released at https://github.com/AbnerHqC/GaitSet.
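The deep-set property described above (order-independent aggregation of frame features, allowing frames from different videos to be merged) can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes per-frame features have already been extracted and shows only the permutation-invariant pooling idea, using an element-wise max over the frame axis.

```python
import numpy as np

def set_pool(frame_features):
    """Permutation-invariant set aggregation: element-wise max over frames.

    frame_features: (n_frames, feat_dim) array of per-frame features.
    Returns a (feat_dim,) set-level feature independent of frame order.
    """
    return frame_features.max(axis=0)

rng = np.random.default_rng(0)
frames = rng.standard_normal((30, 256))    # 30 hypothetical silhouette features
shuffled = frames[rng.permutation(30)]     # same frames, different order

# Frame order does not change the set-level feature.
assert np.array_equal(set_pool(frames), set_pool(shuffled))

# Frames from a second video can be merged by pooling the union of the sets.
extra = rng.standard_normal((7, 256))
merged = set_pool(np.vstack([frames, extra]))
assert merged.shape == (256,)
```

Because max is commutative and associative, the pooled feature is identical for any frame ordering and degrades gracefully when only a few frames are available, which matches the small-sample behavior reported above.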
Year
2022
DOI
10.1109/TPAMI.2021.3057879
Venue
IEEE Transactions on Pattern Analysis and Machine Intelligence
Keywords
Algorithms, Deep Learning, Gait, Software, Walking
DocType
Journal
Volume
44
Issue
7
ISSN
0162-8828
Citations
1
PageRank
0.35
References
20
Authors
5
Name, Order, Citations, PageRank
Hanqing Chao, 1, 8, 3.16
Kun Wang, 2, 1, 0.35
Yiwei He, 3, 17, 2.91
Junping Zhang, 4, 1173, 59.62
Jianfeng Feng, 5, 646, 88.67