Title: Boost Image Captioning With Knowledge Reasoning
Abstract
Automatically generating a human-like description for a given image is a promising research direction in artificial intelligence that has attracted a great deal of attention recently. Most existing attention methods explore the mapping relationships between words in the sentence and regions in the image; such an unpredictable matching manner sometimes causes inharmonious alignments that may reduce the quality of the generated captions. In this paper, we aim to reason about more accurate and meaningful captions. We first propose word attention to improve the correctness of visual attention when generating sequential descriptions word by word. This word attention emphasizes word importance when focusing on different regions of the input image, and makes full use of internal annotation knowledge to assist the calculation of visual attention. Then, in order to convey intentions that cannot be expressed straightforwardly by machines, we introduce a new strategy that injects external knowledge extracted from a knowledge graph into the encoder-decoder framework to facilitate meaningful captioning. Finally, we validate our model on two freely available captioning benchmarks: the Microsoft COCO dataset and the Flickr30k dataset. The results demonstrate that our approach achieves state-of-the-art performance and outperforms many existing approaches.
Year: 2020
DOI: 10.1007/s10994-020-05919-y
Venue: MACHINE LEARNING
Keywords: Image captioning, Word attention, Visual attention, Knowledge graph, Reinforcement learning
DocType: Journal
Volume: 109
Issue: 12
ISSN: 0885-6125
Citations: 1
PageRank: 0.35
References: 0
Authors: 5
Name            Order  Citations  PageRank
Feicheng Huang  1      4          1.81
Zhixin Li       2      12         19.62
Haiyang Wei     3      4          1.11
Canlong Zhang   4      5          8.55
Huifang Ma      5      290        29.69