Title
Automatic annotation and retrieval for videos
Abstract
Retrieving videos by key words requires semantic knowledge of the videos. However, manual video annotation is costly and time-consuming. Most works reported in the literature focus on annotating a video shot with either a single semantic concept or a fixed number of words. In this paper, we propose a new approach to automatically annotate a video shot with a variable number of semantic concepts and to retrieve videos based on text queries. First, a simple but efficient method is presented to automatically extract a Semantic Candidate Set (SCS) for a video shot based on visual features. Second, a semantic network with n nodes is built by an Improved Dependency Analysis Based Method (IDABM), which reduces the time complexity of orienting the edges from O(n^4) to O(n^2). Third, the Final Annotation Set (FAS) is obtained from the SCS by Bayesian inference. Finally, a new way is proposed to rank the retrieved key frames according to the probabilities obtained during Bayesian inference. Experiments show that our method is effective in automatically annotating video shots and retrieving videos by key words.
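The last two steps of the abstract both rest on the posterior probabilities produced by Bayesian inference over the semantic network: candidate concepts from the SCS are kept to form the FAS, and retrieved key frames are ranked by those same probabilities. The sketch below is a minimal illustration of that selection-and-ranking idea, assuming per-shot posteriors have already been computed; the fixed threshold, function names, and sample data are hypothetical and are not the paper's exact procedure.

```python
from typing import Dict, List, Tuple


def final_annotation_set(scs_posteriors: Dict[str, float],
                         threshold: float = 0.5) -> Dict[str, float]:
    """Keep candidate concepts whose posterior probability (obtained from
    inference over the semantic network) exceeds a threshold.
    The fixed threshold is an illustrative assumption, not the paper's rule."""
    return {c: p for c, p in scs_posteriors.items() if p >= threshold}


def rank_key_frames(query: str,
                    shots: Dict[str, Dict[str, float]]) -> List[Tuple[str, float]]:
    """Rank key frames (one per shot, keyed by shot id) for a keyword query
    by the probability attached to that concept during inference."""
    hits = [(shot_id, fas[query]) for shot_id, fas in shots.items() if query in fas]
    return sorted(hits, key=lambda x: x[1], reverse=True)


if __name__ == "__main__":
    # Hypothetical per-shot posteriors; in the paper these would come from
    # Bayesian inference over the network learned by IDABM.
    posteriors = {
        "shot_001": {"beach": 0.91, "people": 0.62, "car": 0.18},
        "shot_002": {"beach": 0.47, "sky": 0.88},
        "shot_003": {"people": 0.75, "beach": 0.81},
    }
    annotated = {sid: final_annotation_set(p) for sid, p in posteriors.items()}
    print(rank_key_frames("beach", annotated))  # shot_001 ranked before shot_003
```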
Year
2006
DOI
10.1007/11949534_103
Venue
PSIVT
Keywords
manual video annotation, bayesian inference, semantic network, key word, retrieving video, semantic knowledge, annotating video shot, automatic annotation, key frame, video shot, semantic concept, dependence analysis, time complexity
Field
Semantic memory, Computer vision, Annotation, Bayesian inference, Information retrieval, Computer science, Image retrieval, Image processing, Semantic network, Bayesian network, Artificial intelligence, Semantics
DocType
Conference
Volume
4319
ISSN
0302-9743
ISBN
3-540-68297-X
Citations
1
PageRank
0.37
References
7
Authors
4
Name
Order
Citations
PageRank
Fangshi Wang1214.74
De Xu215813.08
Wei Lu36617.41
Hongli Xu450285.92