Abstract
---
With the increasing availability of wearable cameras, research on first-person view videos (egocentric videos) has received much attention recently. While some effort has been devoted to collecting various egocentric video datasets, there has not been a focused effort in assembling one that could capture the diversity and complexity of activities related to life-logging, which is expected to be an important application for egocentric videos. In this work, we first conduct a comprehensive survey of existing egocentric video datasets. We observe that existing datasets do not emphasize activities relevant to the life-logging scenario. We build an egocentric video dataset dubbed LENA (Life-logging EgoceNtric Activities) (http://people.sutd.edu.sg/~1000892/dataset) which includes egocentric videos of 13 fine-grained activity categories, recorded under diverse situations and environments using the Google Glass. Activities in LENA can also be grouped into 5 top-level categories to meet various needs and multiple demands for activity analysis research. We evaluate state-of-the-art activity recognition using LENA in detail and also analyze the performance of popular descriptors in egocentric activity recognition.
Year | DOI | Venue
---|---|---
2015 | 10.1007/978-3-319-16634-6_33 | COMPUTER VISION - ACCV 2014 WORKSHOPS, PT III

Field | DocType | Volume
---|---|---
Computer vision, Activity recognition, Fisher vector, Computer science, Wearable computer, Artificial intelligence, Optical flow, Mixture model | Conference | 9010

ISSN | Citations | PageRank
---|---|---
0302-9743 | 8 | 0.51

References | Authors
---|---
10 | 6
Name | Order | Citations | PageRank
---|---|---|---
Sibo Song | 1 | 11 | 0.90 |
Vijay Chandrasekhar | 2 | 191 | 22.83 |
Ngai-Man Cheung | 3 | 750 | 67.36 |
Sanath Narayan | 4 | 8 | 0.51 |
Liyuan Li | 5 | 48 | 13.24 |
Joo-Hwee Lim | 6 | 783 | 82.45 |