Title
Effective image semantic annotation by discovering visual-concept associations from image-concept distribution model
Abstract
To date, studies on image annotation have not been very successful, owing to critical problems such as the diverse regularities between visual features and human concepts. These diverse regularities make it difficult to annotate image semantics correctly. In this paper, we propose a novel approach called AICDM (Annotation by Image-Concept Distribution Model), which annotates images by discovering the associations between visual features and human concepts from the image-concept distribution. Through the proposed image-concept distribution model, the uncertain regularities between visual features and human concepts can be clarified to achieve high-quality image annotation. Empirical evaluation results reveal that the proposed AICDM method effectively alleviates the uncertain-regularity problem and yields better annotation results than existing approaches in terms of precision and recall.
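The record does not include an implementation, but the general idea of mining visual-concept associations from an image-concept co-occurrence distribution with tf-idf-style weighting (tf-idf and entropy appear among the paper's keywords) can be illustrated with a minimal sketch. Everything below, including the function names build_association_matrix and annotate and the assumed data layout, is a hypothetical illustration rather than the authors' AICDM implementation.

```python
# Minimal sketch (not the authors' method): discover visual-concept associations
# from a co-occurrence distribution and weight them in a tf-idf style.
# All names and data layouts here are hypothetical illustrations.
import math
from collections import defaultdict


def build_association_matrix(training_images):
    """training_images: list of (visual_words, concepts) pairs, where
    visual_words is a list of quantized visual-feature IDs and concepts
    is a list of ground-truth annotation keywords for that image."""
    cooccur = defaultdict(lambda: defaultdict(int))  # visual word -> concept -> count
    concept_doc_freq = defaultdict(int)              # concept -> number of images containing it
    for visual_words, concepts in training_images:
        for c in set(concepts):
            concept_doc_freq[c] += 1
            for vw in visual_words:
                cooccur[vw][c] += 1
    n_images = len(training_images)
    # tf-idf-style weighting: concepts that co-occur often with a visual word
    # but are rare across the collection get higher association weight.
    assoc = defaultdict(dict)
    for vw, counts in cooccur.items():
        total = sum(counts.values())
        for c, cnt in counts.items():
            tf = cnt / total
            idf = math.log(1 + n_images / (1 + concept_doc_freq[c]))
            assoc[vw][c] = tf * idf
    return assoc


def annotate(visual_words, assoc, top_k=5):
    """Rank candidate concepts for an unseen image by summing the
    association weights of its visual words."""
    scores = defaultdict(float)
    for vw in visual_words:
        for c, w in assoc.get(vw, {}).items():
            scores[c] += w
    return sorted(scores, key=scores.get, reverse=True)[:top_k]


if __name__ == "__main__":
    # Toy usage: two training images described by quantized visual words and keywords.
    train = [([1, 2, 2, 7], ["sky", "beach"]), ([2, 3], ["sky", "tree"])]
    assoc = build_association_matrix(train)
    print(annotate([2, 7], assoc))
```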
Year
2010
DOI
10.1109/ICME.2010.5582564
Venue
ICME
Keywords
image semantic annotation,image semantics,visual-concept association,image annotation,tf-idf,image-concept distribution model,AICDM approach,feature extraction,image-concept distribution,image retrieval,entropy,content-based retrieval,support vector machines,visualization,semantics,predictive models
Field
Computer science,Image retrieval,Artificial intelligence,Computer vision,Automatic image annotation,Annotation,Information retrieval,tf–idf,Pattern recognition,Visualization,Precision and recall,Feature extraction,Semantics
DocType
Conference
ISSN
1945-7871
ISBN
978-1-4244-7491-2
Citations
2
PageRank
0.39
References
14
Authors
4
Name	Order	Citations	PageRank
Ja-Hwung Su	1	329	24.53
Chien-Li Chou	2	86	10.09
Ching-yung Lin	3	1963	175.16
Vincent S. Tseng	4	2923	161.33