Title
You Talkin' to Me?: Recognizing Complex Human Interactions in Unconstrained Videos
Abstract
Nowadays, due to the exponential growth of user-generated videos and the popularity of video-sharing platforms such as YouTube and Hulu, recognizing complex human activities in the wild has become increasingly important to the research community. Such videos are difficult to analyze because of frequent camera viewpoint changes, multiple people moving in the scene, fast body movements, and the varied lengths of the clips. In this paper, we propose a novel framework to analyze human interactions in TV shows. First, we exploit the motion interchange pattern (MIP) to detect camera viewpoint changes in a video and extract the salient motion points within the bounding box that covers the region of interest (ROI) in each frame. Then, we compute the large displacement optical flow for the salient pixels in the bounding box and build a histogram of oriented optical flow as the motion feature vector of each frame. Finally, the self-similarity matrix (SSM) is adopted to capture the global temporal correlation among the frames of a video. After extracting the SSM descriptors, the video feature vector can be constructed through different encoding approaches. The proposed framework works well in practice on unconstrained videos. We validate our approach on the TV Human Interaction (TVHI) dataset, and the experimental results demonstrate the efficacy of our strategy.
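To make the pipeline described in the abstract concrete, the sketch below (not the authors' code) illustrates two of its stages: computing a per-frame histogram of oriented optical flow and building a self-similarity matrix over the resulting frame descriptors. It makes several simplifying assumptions: OpenCV's Farneback flow stands in for large displacement optical flow, the whole frame stands in for the detected person bounding box, a simple magnitude threshold replaces the MIP-based salient-point selection, the bin count is an illustrative choice, and "clip.mp4" is a hypothetical input file.

# Minimal sketch of per-frame HOOF descriptors and the frame-level SSM.
import cv2
import numpy as np

def hoof(prev_gray, gray, bins=8):
    # Histogram of oriented optical flow for one frame pair.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    mask = mag > 1.0  # crude stand-in for salient-pixel selection
    hist, _ = np.histogram(ang[mask], bins=bins, range=(0, 2 * np.pi),
                           weights=mag[mask])
    return hist / (hist.sum() + 1e-8)

def self_similarity_matrix(descriptors):
    # Pairwise Euclidean distances between per-frame motion descriptors.
    d = np.asarray(descriptors)
    diff = d[:, None, :] - d[None, :, :]
    return np.linalg.norm(diff, axis=-1)

cap = cv2.VideoCapture("clip.mp4")  # hypothetical input clip
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
feats = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    feats.append(hoof(prev_gray, gray))
    prev_gray = gray
ssm = self_similarity_matrix(feats)  # global temporal structure of the clip

In the paper, descriptors extracted from the SSM are then pooled into a single video-level feature vector through an encoding step before classification.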
Year
DOI
Venue
2014
10.1145/2647868.2654996
ACM Multimedia 2014
Keywords
Field
DocType
applications, self-similarity matrix, large displacement optical flow, motion interchange pattern, activity recognition
Computer vision, Histogram, Feature vector, Activity recognition, Self-similarity matrix, Computer science, Pixel, Artificial intelligence, Region of interest, Multimedia, Optical flow, Minimum bounding box
Conference
Citations 
PageRank 
References 
1
0.39
13
Authors
4
Name            Order   Citations   PageRank
Bo Zhang        1       6           1.81
Yan Yan         2       691         31.13
Nicola Conci    3       149         31.63
Nicu Sebe       4       7013        403.03