Title: Dynamically Visual Disambiguation of Keyword-based Image Search
Abstract: Due to the high cost of manual annotation, learning directly from the web has attracted broad attention. One issue that limits the performance of such webly supervised models is visual polysemy: a single keyword query can correspond to multiple visual senses (e.g., "apple" the fruit versus the company). To address this issue, we present an adaptive multi-model framework that resolves polysemy by visual disambiguation. Compared to existing methods, the primary advantage of our approach is that it can adapt to dynamic changes in the search results. The proposed framework consists of two major steps: we first discover and dynamically select text queries according to the image search results, and then employ the proposed saliency-guided deep multi-instance learning network to remove outliers and learn classification models for visual disambiguation. Extensive experiments demonstrate the superiority of our approach.
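The abstract's second step pairs saliency estimation with deep multi-instance learning over image regions. As a loose, hypothetical sketch of that general idea (not the authors' architecture: the attention-style saliency scoring, the layer sizes, and every name below are assumptions introduced for illustration), a bag of region features could be saliency-weighted and pooled before classification as follows:

```python
import torch
import torch.nn as nn

class SaliencyGuidedMILPool(nn.Module):
    """Toy multi-instance pooling: each image is a bag of region (instance)
    features; a learned saliency-style score weights the instances, and the
    weighted sum is classified. Purely illustrative, not the paper's model."""

    def __init__(self, feat_dim=512, hidden_dim=128, num_classes=5):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),  # one saliency score per instance
        )
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, bag):
        # bag: (num_instances, feat_dim)
        scores = self.attention(bag)              # (num_instances, 1)
        weights = torch.softmax(scores, dim=0)    # normalize over the bag
        pooled = (weights * bag).sum(dim=0)       # weighted bag feature
        return self.classifier(pooled), weights.squeeze(-1)

# Usage on a dummy bag of 8 region features
bag = torch.randn(8, 512)
model = SaliencyGuidedMILPool()
logits, saliency = model(bag)
print(logits.shape, saliency.shape)  # torch.Size([5]) torch.Size([8])
```

Down-weighting low-saliency instances is one common way multi-instance models suppress outlier regions, which is the role the abstract assigns to the saliency-guided network.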
Year: 2019
DOI: 10.24963/ijcai.2019/140
Venue: IJCAI
Field: Information retrieval, Computer science, Manual annotation, Outlier, Polysemy, Learning network
DocType: Journal
Volume: abs/1905.10955
Citations: 0
PageRank: 0.34
References: 0
Authors: 9

Order  Name           Citations  PageRank
1      Yazhou Yao     86         16.61
2      Zeren Sun      7          2.47
3      Fumin Shen     1868       91.49
4      Li Liu         634        47.50
5      LiMin Wang     816        48.41
6      Fan Zhu        492        29.38
7      Lizhong Ding   31         8.36
8      Gang-Shan Wu   27         6.75
9      Ling Shao      429        46.73