Title
Perceptual knowledge construction from annotated image collections
Abstract
This paper presents and evaluates new methods for extracting perceptual knowledge from collections of annotated images. The proposed methods include automatic techniques for constructing perceptual concepts by clustering the images based on visual and text feature descriptors, and for discovering perceptual relationships among the concepts based on descriptor similarity and statistics between the clusters. This work makes two main contributions. The first lies in the support and evaluation of several techniques for visual and text feature descriptor extraction, for visual and text descriptor integration, and for data clustering in the extraction of perceptual concepts. The second lies in proposing novel ways of discovering perceptual relationships among concepts. Experiments show that useful knowledge can be extracted from visual and text feature descriptors, that the two kinds of descriptors are highly independent, and that integrating both kinds of descriptors can improve performance compared to using either kind alone.
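The concept-construction step described in the abstract can be illustrated with a minimal sketch: cluster images on concatenated (early-fused) visual and text descriptors, then link cluster pairs whose centroids are similar. The scikit-learn choice, the function names, and all parameters below are illustrative assumptions, not the paper's implementation.

# Minimal sketch of perceptual concept construction and relationship discovery.
# Assumes precomputed per-image visual and text descriptor arrays; all names
# and parameters are illustrative, not taken from the paper.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

def build_concepts(visual_descriptors, text_descriptors, n_concepts=10):
    """Cluster images into perceptual concepts.

    visual_descriptors: array of shape (n_images, d_v)
    text_descriptors:   array of shape (n_images, d_t)
    Descriptors are L2-normalized and concatenated (one possible
    integration strategy), then clustered with k-means.
    """
    features = np.hstack([normalize(visual_descriptors),
                          normalize(text_descriptors)])
    km = KMeans(n_clusters=n_concepts, n_init=10, random_state=0)
    labels = km.fit_predict(features)
    return labels, km.cluster_centers_

def relate_concepts(centers, threshold=0.8):
    """Link concept pairs whose centroids have high cosine similarity."""
    c = normalize(centers)
    sim = c @ c.T
    return [(i, j, float(sim[i, j]))
            for i in range(len(c))
            for j in range(i + 1, len(c))
            if sim[i, j] >= threshold]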
Year
2002
DOI
10.1109/ICME.2002.1035750
Venue
Multimedia and Expo, 2002. ICME '02. Proceedings. 2002 IEEE International Conference
Keywords
feature extraction, image retrieval, knowledge acquisition, statistical analysis, visual databases, vocabulary, annotated image collections, data clustering, image clustering, multimedia retrieval, perceptual knowledge construction, perceptual knowledge extraction, text feature descriptors, visual feature descriptors
Field
Pattern recognition, Computer science, Image retrieval, Feature extraction, Artificial intelligence, Cluster analysis, Vocabulary, Perception, Knowledge acquisition, Visual Word, Performance improvement
DocType
Conference
Volume
1
Citations
14
PageRank
1.08
References
8
Authors
2
Name             Order   Citations   PageRank
Ana B. Benitez   1       342         32.32
Shih-Fu Chang    2       13015       1101.53