Abstract
---
This paper introduces the use of annotation tags for human activity recognition in video. Recent methods in human activity recognition use more complex and realistic datasets obtained from TV shows or movies, which makes it difficult to achieve high recognition accuracy. We improve recognition accuracy by using the videos' annotation tags. Tags tend to be related to video content, and human activity videos frequently carry tags relevant to the activities they depict. We first collect a human activity dataset with tags from YouTube. Using this dataset, we automatically discover relevant tags and their correlation with human activities. Finally, we develop a framework that combines visual content and tags for activity recognition. We show that our approach improves recognition accuracy compared with approaches that use visual content alone.
Year | DOI | Venue
---|---|---
2011 | 10.1109/ICICS.2011.6173540 | ICICS
Keywords | Field | DocType
---|---|---
content management, object recognition, social networking (online), video communication, tv shows, youtube video, annotation tags, human activity recognition, human interaction recognition, visual content, youtube, human-human interaction recognition, tag, motion pictures, correlation, visualization, activity recognition, tv, human interaction | Annotation, Activity recognition, Information retrieval, Computer science, Visualization, Human interaction, Content management, Multimedia, Cognitive neuroscience of visual object recognition, Distributed computing | Conference
ISBN | Citations | PageRank
---|---|---
978-1-4577-0029-3 | 3 | 0.38
References | Authors
---|---
10 | 5
Name | Order | Citations | PageRank
---|---|---|---
Sunyoung Cho | 1 | 8 | 1.47
Seongho Lim | 2 | 3 | 0.38
Hyeran Byun | 3 | 505 | 65.97
Haejin Park | 4 | 3 | 0.38
Sooyeong Kwak | 5 | 39 | 5.65