Abstract |
---|
In this paper, we propose a mathematical framework that models activities with both motion and context information for activity recognition. This is motivated by the observation that an activity depends not only on the motion of the objects of interest; the surrounding objects also provide useful cues for understanding the activity, and can therefore serve as context for the activity of interest. Given training data, our model automatically captures and weighs motion and context patterns for each activity class, drawn from sets of predefined attributes, during the learning process. The learned model is then used to generate optimal labels for activities in the testing videos based on their motion and context features. We show how to learn the model parameters via an unconstrained convex optimization methodology and how to predict the correct label for a test instance. We report promising results on the publicly available VIRAT Ground Dataset that demonstrate the benefit of modeling the surrounding context when recognizing activities in a wide-area scene. |
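The abstract does not give the exact objective function, so the following is only an illustrative sketch: one common unconstrained convex formulation for weighing motion and context attributes per activity class is L2-regularized multinomial logistic regression over the concatenated attribute features, trained here by plain gradient descent. All function and variable names (`learn_weights`, `predict`, etc.) are assumptions for illustration, not the paper's actual model.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over class scores (rows = samples).
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def learn_weights(X, y, n_classes, lr=0.1, n_iters=500, reg=1e-3):
    """Learn per-class weights over concatenated motion+context attributes
    by minimizing an L2-regularized multinomial logistic loss (convex),
    using batch gradient descent. X: (n, d) features, y: (n,) int labels."""
    n, d = X.shape
    W = np.zeros((d, n_classes))
    Y = np.eye(n_classes)[y]            # one-hot encoding of labels
    for _ in range(n_iters):
        P = softmax(X @ W)              # predicted class probabilities
        grad = X.T @ (P - Y) / n + reg * W
        W -= lr * grad
    return W

def predict(W, X):
    # Assign each test instance the highest-scoring activity label.
    return np.argmax(X @ W, axis=1)
```

Because the objective is convex, gradient descent converges to the global optimum, which matches the abstract's claim that the parameters can be learned via unconstrained convex optimization.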
Year | DOI | Venue |
---|---|---|
2012 | 10.1145/2425333.2425364 | ICVGIP |
Keywords | Field | DocType |
---|---|---|
concerned activity, context pattern, context information, activity class, spatial context, model activity, context feature, model parameter, surrounding context, activity recognition, surrounding object, bag of words, facial expression | Bag-of-words model, Training set, Computer vision, Activity recognition, Pattern recognition, Computer science, Facial expression, Artificial intelligence, Spatial contextual awareness, Convex optimization, Machine learning | Conference |
Citations | PageRank | References |
---|---|---|
0 | 0.34 | 20 |
Authors |
---|
3 |
Name | Order | Citations | PageRank |
---|---|---|---|
Yingying Zhu | 1 | 410 | 26.41 |
Nandita M. Nayak | 2 | 78 | 4.68 |
Amit K. Roy-Chowdhury | 3 | 1153 | 73.96 |