Title
Delving into egocentric actions
Abstract
We address the challenging problem of recognizing the camera wearer's actions from videos captured by an egocentric camera. Egocentric videos encode a rich set of signals regarding the camera wearer, including head movement, hand pose and gaze information. We propose to utilize these mid-level egocentric cues for egocentric action recognition. We present a novel set of egocentric features and show how they can be combined with motion and object features. The result is a compact representation with superior performance. In addition, we provide the first systematic evaluation of motion, object and egocentric cues in egocentric action recognition. Our benchmark leads to several surprising findings, which uncover the best practices for egocentric action recognition and yield a significant performance boost over all previous state-of-the-art methods on three publicly available datasets.
Year
2015
DOI
10.1109/CVPR.2015.7298625
Venue
2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Keywords
camera wearer action recognition, egocentric camera, egocentric videos, head movement, hand pose, gaze information, mid-level egocentric cues, egocentric action recognition, motion feature, object feature
Field
Computer vision, ENCODE, Gaze, Computer science, Action recognition, Artificial intelligence
DocType
Conference
Volume
2015
Issue
1
ISSN
1063-6919
Citations
46
PageRank
1.03
References
33
Authors
3

Name | Order | Citations | PageRank
Yin Li | 1 | 797 | 35.85
Zhefan Ye | 2 | 101 | 4.79
James M. Rehg | 3 | 52594 | 74.66