Abstract |
---|
Human activity recognition is an important yet challenging research topic in the computer vision community. In this paper, we propose context features along with a deep model to recognize individual subject activities in videos of real-world scenes. Besides the motion features of the subject, we also utilize context information from multiple sources to improve recognition performance. We introduce scene context features that describe the environment of the subject at global and local levels. We design a deep neural network structure to obtain a high-level representation of human activity that combines both motion features and context features. We demonstrate that the proposed context features and deep model improve activity recognition performance compared with baseline approaches. We also show that our approach outperforms state-of-the-art methods on the 5-activity and 6-activity versions of the Collective Activities Dataset. |
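The abstract describes fusing per-subject motion features with global and local scene context features in a deep network. The paper's actual architecture is not given in this record, so the following is only a minimal illustrative sketch of that fusion idea, with assumed feature dimensions (`MOTION_DIM`, `GLOBAL_CTX_DIM`, `LOCAL_CTX_DIM`) and randomly initialized weights standing in for learned parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Hypothetical feature dimensions -- not specified in this record.
MOTION_DIM = 64      # per-subject motion descriptor
GLOBAL_CTX_DIM = 32  # global scene context feature
LOCAL_CTX_DIM = 32   # local scene context feature
HIDDEN_DIM = 128
NUM_ACTIVITIES = 5   # e.g. the 5-activity dataset version

# Random weights stand in for parameters a real network would learn.
W1 = rng.standard_normal((MOTION_DIM + GLOBAL_CTX_DIM + LOCAL_CTX_DIM,
                          HIDDEN_DIM)) * 0.01
b1 = np.zeros(HIDDEN_DIM)
W2 = rng.standard_normal((HIDDEN_DIM, NUM_ACTIVITIES)) * 0.01
b2 = np.zeros(NUM_ACTIVITIES)

def predict_activity(motion, global_ctx, local_ctx):
    """Concatenate motion and context features, then score activities."""
    fused = np.concatenate([motion, global_ctx, local_ctx])
    hidden = relu(fused @ W1 + b1)
    logits = hidden @ W2 + b2
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()

probs = predict_activity(
    rng.standard_normal(MOTION_DIM),
    rng.standard_normal(GLOBAL_CTX_DIM),
    rng.standard_normal(LOCAL_CTX_DIM),
)
```

The output is a probability distribution over activity classes; the key design point illustrated is late fusion by concatenation before the shared hidden layers, one plausible reading of "combining both motion features and context features".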
Year | DOI | Venue |
---|---|---|
2017 | 10.5220/0006099500340043 | PROCEEDINGS OF THE 12TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS (VISIGRAPP 2017), VOL 5 |
Keywords | Field | DocType
---|---|---
Activity Recognition, Deep Learning, Context Information | Contextual information, Activity recognition, Pattern recognition, Computer science, Artificial intelligence, Deep learning, Artificial neural network, Machine learning | Conference
Citations | PageRank | References
---|---|---
0 | 0.34 | 0
Authors |
---|
2 |
Name | Order | Citations | PageRank |
---|---|---|---|
Li Wei | 1 | 18 | 1.69 |
Shishir K Shah | 2 | 501 | 40.08 |