Title
Constrained LSTM and Residual Attention for Image Captioning
Abstract
Visual structure and syntactic structure are essential in images and texts, respectively. Visual structure depicts both the entities in an image and their interactions, whereas syntactic structure in text reflects the part-of-speech constraints between adjacent words. Most existing methods either use a global visual representation to guide the language model or generate captions without considering the relationships between different entities or adjacent words. Their language models therefore lack grounding in both visual and syntactic structure. To solve this problem, we propose a model that aligns the language model with a given visual structure and also constrains it with a specific part-of-speech template. In addition, most methods exploit the latent relationship between words in a sentence and pre-extracted visual regions in an image, yet ignore the effect of unextracted regions on the predicted words. We develop a residual attention mechanism that simultaneously attends to the pre-extracted visual objects and the unextracted regions of an image. Residual attention can thus locate the precise image regions corresponding to each predicted word, accounting for the effects of both visual objects and unextracted regions. The effectiveness of our entire framework and of each proposed module is verified on two classical datasets, MSCOCO and Flickr30k. Our framework is on par with or better than state-of-the-art methods and achieves superior performance on the COCO captioning leaderboard.
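The residual attention idea summarized in the abstract can be illustrated with a minimal, generic sketch. This is not the authors' exact formulation: the function names, the use of grid features to stand in for "unextracted regions", and the additive (residual) combination of the two attention contexts are all illustrative assumptions.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def residual_attention(query, object_feats, grid_feats):
    """Toy residual attention step (illustrative, not the paper's exact model).

    query        : (d,)   decoder hidden state at the current time step
    object_feats : (n, d) features of pre-extracted object regions
    grid_feats   : (m, d) grid features covering the remaining (unextracted) regions
    """
    # attend over pre-extracted object regions
    obj_ctx = softmax(object_feats @ query) @ object_feats
    # attend over grid features standing in for unextracted regions
    grid_ctx = softmax(grid_feats @ query) @ grid_feats
    # residual combination: object context plus a correction from the rest of the image
    return obj_ctx + grid_ctx
```

Under this sketch, the word predictor would consume the combined context instead of the object-only context, so regions missed by the detector can still influence the predicted word.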
Year: 2020
DOI: 10.1145/3386725
Venue: ACM Transactions on Multimedia Computing, Communications, and Applications
Keywords: Image captioning, LSTM, object detection, visual attention, visual skeleton
DocType: Journal
Volume: 16
Issue: 2
ISSN: 1551-6857
Citations: 3
PageRank: 0.38
References: 0
Authors: 4
Name          | Order | Citations | PageRank
Liang Yang    | 1     | 120       | 42.20
Haifeng Hu    | 2     | 270       | 60.38
Songlong Xing | 3     | 6         | 2.49
Xinlong Lu    | 4     | 2         | 0.38