Title
Effective Semantic Annotation by Image-to-Concept Distribution Model
Abstract
Image annotation based on visual features has been a difficult problem due to the diverse associations that exist between visual features and human concepts. In this paper, we propose a novel approach called Annotation by Image-to-Concept Distribution Model (AICDM), which discovers the associations between visual features and human concepts from the image-to-concept distribution. Through the proposed distribution model, visual features and concepts can be bridged to achieve high-quality image annotation. We further introduce "visual features", "models", and "visual genes", which play roles analogous to the biological chromosome, DNA, and gene, respectively. Based on the proposed models using entropy, tf-idf, rules, and SVM, the goal of high-quality image annotation can be achieved effectively. Our empirical evaluation reveals that AICDM effectively alleviates the problem of visual-to-concept diversity and achieves better annotation results, in terms of precision and recall, than many existing state-of-the-art approaches.
Year
2011
DOI
10.1109/TMM.2011.2129502
Venue
IEEE Transactions on Multimedia
Keywords
Visualization, Entropy, Feature extraction, Support vector machines, Predictive models, Image color analysis, Semantics
Field
Computer vision, Annotation, Automatic image annotation, Pattern recognition, tf-idf, Visualization, Computer science, Support vector machine, Precision and recall, Image retrieval, Feature extraction, Artificial intelligence
DocType
Journal
Volume
13
Issue
3
ISSN
1520-9210
Citations
18
PageRank
0.73
References
20
Authors
4
Name              Order  Citations  PageRank
Ja-Hwung Su       1      329        24.53
Chien-Li Chou     2      86         10.09
Ching-yung Lin    3      1963       175.16
Vincent S. Tseng  4      2923       161.33