Title
Learning a Mid-Level Representation for Multiview Action Recognition.
Abstract
Recognizing human actions in videos is an active research topic with broad commercial potential. Most existing action recognition methods assume that the camera view is the same during both training and testing, so the performance of these single-view approaches may be severely degraded by camera movement and viewpoint variation. In this paper, we address this problem by utilizing videos simultaneously recorded from multiple views. To this end, we propose a learning framework based on multitask random forests that exploits a discriminative mid-level representation for videos from multiple cameras. First, subvolumes of continuous human-centered figures are extracted from the original videos. Next, spatiotemporal cuboids sampled from these subvolumes are characterized by multiple low-level descriptors. Then a set of multitask random forests is built upon multiview cuboids sampled at adjacent positions, and together they construct an integrated mid-level representation for the multiview subvolumes of one action. Finally, a random forest classifier predicts the action category from the learned representation. Experiments on the multiview IXMAS action dataset show that the proposed method effectively recognizes human actions depicted in multiview videos.
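The abstract describes a three-stage pipeline: position-wise forests over cuboid descriptors, a concatenated mid-level representation, and a final random forest classifier. The paper's multitask random forests are not specified in this record, so the following is only a minimal sketch of that pipeline shape using scikit-learn's standard RandomForestClassifier on synthetic data; the toy dimensions, the use of class probabilities as the mid-level representation, and all variable names are assumptions for illustration, not the authors' method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy setup: each action sample is a subvolume with n_positions cuboids,
# each cuboid described by a low-level descriptor of length d.
n_samples, n_positions, d, n_classes = 60, 5, 16, 3
y = np.repeat(np.arange(n_classes), n_samples // n_classes)
# Class-dependent shift makes the toy problem learnable.
X = rng.normal(size=(n_samples, n_positions, d)) + y[:, None, None]

# Stage 1: one forest per cuboid position (a simplified stand-in for the
# paper's multitask random forests over multiview cuboids).
position_forests = [
    RandomForestClassifier(n_estimators=10, random_state=0).fit(X[:, p, :], y)
    for p in range(n_positions)
]

# Stage 2: mid-level representation = concatenated class-probability
# outputs of the position-wise forests.
def mid_level(X):
    return np.hstack(
        [f.predict_proba(X[:, p, :]) for p, f in enumerate(position_forests)]
    )

Z = mid_level(X)  # shape: (n_samples, n_positions * n_classes)

# Stage 3: a random forest classifier on the learned representation.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(Z, y)
train_acc = clf.score(Z, y)
```

In this sketch the mid-level representation is low-dimensional and discriminative by construction: each position-wise forest votes on the action class, and the final classifier aggregates those votes across positions (and, in the paper's setting, across views).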
Year
2018
DOI
10.1155/2018/3508350
Venue
Adv. in MM
Field
Computer vision, Pattern recognition, Viewpoints, Computer science, Action recognition, Exploit, Artificial intelligence, Random forest, Discriminative model
DocType
Journal
Volume
2018
ISSN
1687-5680
Citations
0
PageRank
0.34
References
17
Authors
4
Name           Order  Citations  PageRank
Cuiwei Liu     1      1          1.36
Zhaokui Li     2      12         6.24
Xiang-bin Shi  3      11         4.57
Chong Du       4      0          0.34