Title
STAR-Net: Action Recognition using Spatio-Temporal Activation Reprojection
Abstract
While depth cameras and inertial sensors have been frequently leveraged for human action recognition, these sensing modalities are impractical in many scenarios where cost or environmental constraints prohibit their use. As such, there has been recent interest in human action recognition using low-cost, readily available RGB cameras via deep convolutional neural networks. However, many of the deep convolutional neural networks proposed for action recognition thus far have relied heavily on learning global appearance cues directly from imaging data, resulting in highly complex network architectures that are computationally expensive and difficult to train. Motivated to circumvent the challenges associated with training complex network architectures, we introduce the concept of spatio-temporal activation reprojection (STAR). More specifically, we reproject the spatio-temporal activations generated by human pose estimation layers in space and time using a stack of 3D convolutions. Experimental results on UTD-MHAD and J-HMDB demonstrate that an end-to-end architecture based on the proposed STAR framework (which we nickname STAR-Net) is proficient in single-environment and small-scale applications. On UTD-MHAD, STAR-Net outperforms several methods using richer data modalities such as depth and inertial sensors.
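The abstract describes applying a stack of 3D convolutions to the spatio-temporal activations (pose heatmaps) produced by a pose estimation network. As a minimal illustrative sketch only, the following shows how a single 3D convolution slides over a time x height x width stack of heatmaps, aggregating evidence across both space and time. The heatmap data and averaging kernel here are placeholders; the actual STAR-Net uses learned multi-channel kernels and a deeper stack.

```python
import numpy as np

def conv3d(volume, kernel):
    """Valid-mode, single-channel 3D convolution over a
    time x height x width activation volume."""
    T, H, W = volume.shape
    t, h, w = kernel.shape
    out = np.zeros((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                # Each output value pools activations from a small
                # spatio-temporal neighbourhood of the heatmap stack.
                out[i, j, k] = np.sum(volume[i:i+t, j:j+h, k:k+w] * kernel)
    return out

# Hypothetical pose-heatmap stack: 8 frames of 16x16 activations.
heatmaps = np.random.rand(8, 16, 16)
kernel = np.ones((3, 3, 3)) / 27.0  # placeholder averaging kernel
features = conv3d(heatmaps, kernel)
print(features.shape)  # -> (6, 14, 14)
```

Because the kernel spans the time axis as well as the spatial axes, each output activation mixes pose evidence from neighbouring frames, which is the basic mechanism a 3D-convolution stack uses to capture motion cues.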
Year
2019
DOI
10.1109/CRV.2019.00015
Venue
2019 16th Conference on Computer and Robot Vision (CRV)
Keywords
action recognition, convolutional neural network, spatio-temporal, 3D convolution, human pose estimation
Field
Modalities, Network complexity, Pattern recognition, Convolutional neural network, Convolution, Computer science, Pose, RGB color model, Complex network, Inertial measurement unit, Artificial intelligence
DocType
Journal
Volume
abs/1902.10024
ISBN
978-1-7281-1839-0
Citations
1
PageRank
0.34
References
29
Authors
3
Name                Order  Citations  PageRank
William J. McNally  1      1          0.34
Alexander Wong      2      351        69.61
J.J. McPhee         3      13         5.80