Title
---
PeMapNet: Action Recognition from Depth Videos Using Pyramid Energy Maps on Neural Networks
Abstract
---
We propose an integrated approach to human action recognition from depth videos. The two major contributions of this approach are a novel feature descriptor for depth videos and the corresponding deep neural network structures. In this paper, we first present Pyramid Energy Maps (PeMaps) as the feature descriptor for a sequence of frames in a depth video. The pyramid structure captures the history of an action, and PeMaps uses energy levels to encode the spatial dynamics of actions in a depth video. We then design PeMapNet, which applies convolutional neural networks and bidirectional long short-term memory (BLSTM) recurrent neural networks to PeMaps for action recognition. We evaluate our approach on three challenging datasets: MSR-Action3D, UTKinect-Action3D, and MSR-Gesture3D. The experimental results demonstrate that our approach achieves higher accuracy than most existing methods while also improving efficiency.
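The abstract does not spell out how PeMaps are computed, but the general idea of a temporal-pyramid energy descriptor can be illustrated. The sketch below is an assumption, not the paper's exact formulation: it accumulates motion energy (absolute inter-frame depth differences) over a temporal pyramid, where level k splits the sequence into 2**k segments, so coarse levels summarize the whole action and fine levels preserve its history. The function names `energy_map` and `pemaps` are hypothetical.

```python
# Hypothetical sketch of pyramid energy maps: NOT the paper's exact method.
# Each depth frame is a 2-D list of depth values.

def energy_map(frames):
    """Per-pixel sum of absolute differences between consecutive frames."""
    h, w = len(frames[0]), len(frames[0][0])
    emap = [[0.0] * w for _ in range(h)]
    for prev, cur in zip(frames, frames[1:]):
        for i in range(h):
            for j in range(w):
                emap[i][j] += abs(cur[i][j] - prev[i][j])
    return emap

def pemaps(frames, levels=3):
    """Temporal pyramid: level k splits the sequence into 2**k segments
    and yields one energy map per segment (coarse-to-fine action history)."""
    maps = []
    n = len(frames)
    for k in range(levels):
        segs = 2 ** k
        for s in range(segs):
            lo = s * n // segs
            hi = (s + 1) * n // segs
            maps.append(energy_map(frames[lo:hi]))
    return maps

# Toy 2x2 depth sequence of 4 frames
seq = [[[0, 0], [0, 0]],
       [[1, 0], [0, 0]],
       [[1, 2], [0, 0]],
       [[1, 2], [3, 0]]]
maps = pemaps(seq, levels=2)
print(len(maps))      # 3 maps: 1 at level 0, 2 at level 1
print(maps[0][0][0])  # full-sequence energy at pixel (0, 0) -> 1.0
```

In the paper's pipeline, maps like these would then be stacked and fed to the CNN/BLSTM stages of PeMapNet; that stage is omitted here.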
Year | DOI | Venue |
---|---|---|
2017 | 10.1109/ICTAI.2017.00024 | 2017 IEEE 29th International Conference on Tools with Artificial Intelligence (ICTAI) |
Keywords | Field | DocType
---|---|---
action recognition, depth videos, deep learning, long short term memory | Feature descriptor, Pattern recognition, Computer science, Convolution, Action recognition, Recurrent neural network, Feature extraction, Pyramid, Artificial intelligence, Deep learning, Artificial neural network | Conference
ISSN | ISBN | Citations
---|---|---
1082-3409 | 978-1-5386-3877-4 | 0
PageRank | References | Authors
---|---|---
0.34 | 24 | 3
Name | Order | Citations | PageRank |
---|---|---|---|
Jiahao Li | 1 | 0 | 0.34 |
Hejun Wu | 2 | 242 | 23.03 |
Xinrui Zhou | 3 | 0 | 0.34 |