Title
3D Action Recognition Exploiting Hierarchical Deep Feature Fusion Model
Abstract
Numerous existing handcrafted-feature-based and conventional machine-learning-based approaches cannot capture the dense correlations of the skeleton structure in the spatiotemporal dimension. On the other hand, some modern methods exploit Long Short-Term Memory (LSTM) networks to learn temporal action attributes but lack an efficient scheme for revealing high-level informative features. To handle these issues, this research introduces a novel hierarchical deep feature fusion model for 3D skeleton-based human action recognition, in which the deep information for modeling human appearance and action dynamics is obtained by Convolutional Neural Networks (CNNs). The deep features of geometric joint distance and orientation are extracted via a multi-stream CNN architecture to uncover the hidden correlations in both the spatial and temporal dimensions. The experimental results on the NTU RGB+D dataset demonstrate the superiority of the proposed fusion model over several recent deep learning (DL)-based action recognition approaches.
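To make the geometric inputs mentioned in the abstract concrete, the following is a minimal Python/NumPy sketch, not the authors' implementation: the array shapes, function names, and bone list are illustrative assumptions for how per-frame joint-distance and joint-orientation descriptors could be derived from 3D skeleton data before being fed to separate CNN streams.

# Minimal sketch (assumed, not the paper's code): per-frame geometric
# descriptors computed from a 3D skeleton sequence.
import numpy as np

def joint_distance_features(skeleton):
    # skeleton: (T, J, 3) array of T frames, J joints, 3D coordinates.
    # Returns (T, J*(J-1)//2) pairwise Euclidean joint distances per frame.
    T, J, _ = skeleton.shape
    iu = np.triu_indices(J, k=1)                                # unique joint pairs
    diffs = skeleton[:, :, None, :] - skeleton[:, None, :, :]   # (T, J, J, 3)
    dists = np.linalg.norm(diffs, axis=-1)                      # (T, J, J)
    return dists[:, iu[0], iu[1]]

def joint_orientation_features(skeleton, bones):
    # bones: list of (parent, child) joint index pairs (hypothetical layout).
    # Returns (T, len(bones), 3) unit vectors describing bone orientations.
    parents = [p for p, _ in bones]
    children = [c for _, c in bones]
    vecs = skeleton[:, children, :] - skeleton[:, parents, :]
    norms = np.linalg.norm(vecs, axis=-1, keepdims=True)
    return vecs / np.clip(norms, 1e-8, None)

# Example: a 64-frame clip with 25 joints (the NTU RGB+D skeleton defines 25 joints);
# each descriptor map would feed one stream of a multi-stream CNN.
clip = np.random.randn(64, 25, 3).astype(np.float32)
dist_map = joint_distance_features(clip)                        # (64, 300)
orient_map = joint_orientation_features(clip, bones=[(0, 1), (1, 20), (20, 2)])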
Year
2020
DOI
10.1109/IMCOM48794.2020.9001766
Venue
2020 14th International Conference on Ubiquitous Information Management and Communication (IMCOM)
Keywords
Human action recognition, geometric feature, deep feature fusion, convolutional neural network
Field
Feature fusion, Pattern recognition, Computer science, Convolutional neural network, Action recognition, Long short term memory, Real-time computing, Artificial intelligence, RGB color model, Deep learning
DocType
Conference
ISSN
2644-0164
ISBN
978-1-7281-5454-1
Citations
0
PageRank
0.34
References
7
Authors
6
Name               Order  Citations  PageRank
Thien Huynh-The    1      94         21.54
Cam-Hao Hua        2      45         11.22
Nguyen Anh Tu      3      30         7.90
Jae-Woo Kim        4      0          0.34
Seung-Hwan Kim     5      0          0.34
Dong-Seong Kim     6      64         28.80