Title
From Captions to Visual Concepts and Back
Abstract
This paper presents a novel approach for automatically generating image descriptions: visual detectors, language models, and multimodal similarity models are learnt directly from a dataset of image captions. We use multiple instance learning to train visual detectors for words that commonly occur in captions, spanning many parts of speech, including nouns, verbs, and adjectives. The word detector outputs serve as conditional inputs to a maximum-entropy language model, which is trained on a set of over 400,000 image descriptions to capture the statistics of word usage. We capture global semantics by re-ranking caption candidates using sentence-level features and a deep multimodal similarity model. Our system is state-of-the-art on the official Microsoft COCO benchmark, producing a BLEU-4 score of 29.1%. When human judges compare the system captions to ones written by other people on our held-out test set, the system captions are judged to be of equal or better quality 34% of the time.
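The abstract describes a three-stage pipeline: word detectors score visual concepts, a maximum-entropy language model generates candidate captions conditioned on those detections, and a deep multimodal similarity model (DMSM) re-ranks the candidates. The sketch below illustrates only the final re-ranking idea under stated assumptions: the names (Candidate, rerank, the stubbed embeddings and weights) are hypothetical and not the authors' released code, and the detectors, language model, and DMSM are replaced by precomputed toy numbers.

```python
# Minimal sketch of caption re-ranking: combine the language model's
# log-probability with an image-sentence similarity score and keep the
# highest-scoring candidate. All values below are illustrative stubs.

from dataclasses import dataclass
from math import sqrt


@dataclass
class Candidate:
    text: str          # caption proposed by the language model
    lm_logprob: float  # log-probability under the (stubbed) language model
    text_vec: list     # sentence embedding from the text side of the DMSM (stubbed)


def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0


def rerank(candidates, image_vec, w_lm=1.0, w_sim=5.0):
    """Sort candidates by a weighted sum of LM log-probability and
    image-sentence similarity (a stand-in for the DMSM score)."""
    def score(c):
        return w_lm * c.lm_logprob + w_sim * cosine(c.text_vec, image_vec)
    return sorted(candidates, key=score, reverse=True)


if __name__ == "__main__":
    image_vec = [0.9, 0.1, 0.4]  # image embedding (stubbed)
    candidates = [
        Candidate("a dog sitting on a couch", -6.2, [0.8, 0.2, 0.5]),
        Candidate("a cat sitting on a couch", -5.9, [0.1, 0.9, 0.3]),
    ]
    best = rerank(candidates, image_vec)[0]
    print(best.text)  # the similarity term overrides the slightly better LM score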
Year
2014
DOI
10.1109/CVPR.2015.7298754
Venue
Computer Vision and Pattern Recognition (CVPR)
DocType
Volume
abs/1411.4952
Issue
1
Journal
ISSN
1063-6919
Citations
300
PageRank
10.35
References
54
Authors
12
Name                      Order   Citations   PageRank
Hao Fang                  1       1209        63.73
Saurabh Gupta             2       1431        53.13
Forrest N. Iandola        3       352         17.25
Rupesh Kumar Srivastava   4       823         44.48
Li Deng                   5       9691        728.14
Piotr Dollár              6       7999        307.07
Jianfeng Gao              7       5729        296.43
Xiaodong He               8       3858        190.28
Margaret Mitchell         9       1450        65.37
John Platt                10      6611        1100.14
C. Lawrence Zitnick       11      7321        332.72
Geoffrey Zweig            12      3406        320.25