Title
A Deep Learning Framework For Recognizing Both Static And Dynamic Gestures
Abstract
Intuitive user interfaces are indispensable for interacting with human-centric smart environments. In this paper, we propose a unified framework that recognizes both static and dynamic gestures using simple RGB vision (without depth sensing), which makes it suitable for inexpensive human-robot interaction in social or industrial settings. We employ a pose-driven spatial attention strategy that guides our proposed Static and Dynamic gestures Network (StaDNet). From an image of the human upper body, we estimate the person's depth along with the regions of interest around his/her hands. The Convolutional Neural Network (CNN) in StaDNet is fine-tuned on a background-substituted hand gestures dataset. It is used both to detect 10 static gestures for each hand and to obtain the hand image-embeddings. These are fused with the augmented pose vector and then passed to stacked Long Short-Term Memory (LSTM) blocks. Thus, human-centred frame-wise information from the augmented pose vector and from the left/right hand image-embeddings is aggregated over time to predict the dynamic gestures of the performing person. In a number of experiments, we show that the proposed approach surpasses the state-of-the-art results on the large-scale ChaLearn 2016 dataset. Moreover, we transfer the knowledge learned through the proposed methodology to the Praxis gestures dataset, and the results obtained there also surpass the state of the art.
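The abstract describes a per-frame fusion of the left/right hand image-embeddings with an augmented pose vector, aggregated over time by LSTM blocks to classify dynamic gestures. The following is a minimal NumPy sketch of that idea only; all dimensions, the random stand-in features, and the single-layer LSTM cell (the paper uses stacked blocks) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed, illustrative dimensions (not taken from the paper):
EMB = 64      # per-hand image-embedding size from the CNN
POSE = 16     # augmented pose vector size
HID = 32      # LSTM hidden size
CLASSES = 5   # number of dynamic gesture classes
T = 10        # frames in a gesture sequence

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: input/forget/output gates and candidate cell
    computed from an affine map of the input x and previous hidden h."""
    z = W @ x + U @ h + b                      # shape (4*HID,)
    i, f, o, g = np.split(z, 4)
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    i, f, o = sig(i), sig(f), sig(o)
    g = np.tanh(g)
    c = f * c + i * g                          # new cell state
    h = o * np.tanh(c)                         # new hidden state
    return h, c

D = 2 * EMB + POSE                             # fused per-frame feature size
W = rng.normal(0, 0.1, (4 * HID, D))
U = rng.normal(0, 0.1, (4 * HID, HID))
b = np.zeros(4 * HID)
W_out = rng.normal(0, 0.1, (CLASSES, HID))

h, c = np.zeros(HID), np.zeros(HID)
for t in range(T):
    left = rng.normal(size=EMB)    # stand-in for the left-hand embedding
    right = rng.normal(size=EMB)   # stand-in for the right-hand embedding
    pose = rng.normal(size=POSE)   # stand-in for the augmented pose vector
    x = np.concatenate([left, right, pose])    # frame-wise feature fusion
    h, c = lstm_step(x, h, c, W, U, b)

# Softmax over dynamic gesture classes from the final hidden state.
logits = W_out @ h
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(probs.shape)
```

The key design point mirrored here is that the temporal model never sees raw pixels: each frame is reduced to a compact fused vector (hand embeddings plus pose), and only that sequence is aggregated in time.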
Year: 2021
DOI: 10.3390/s21062227
Venue: SENSORS
Keywords: gestures recognition, operator interfaces, human activity recognition, commercial robots and applications, cyber-physical systems
DocType: Journal
Volume: 21
Issue: 6
ISSN: 1424-8220
Citations: 0
PageRank: 0.34
References: 0
Authors: 3
Name              Order  Citations  PageRank
Mazhar Osama      1      0          0.34
Sofiane Ramdani   2      10         5.10
Cherubini Andrea  3      0          0.34