Title
Leveraging Auxiliary Image Descriptions For Dense Video Captioning
Abstract
Collecting textual descriptions is an especially costly task for dense video captioning, since each event in the video needs to be annotated separately and a long descriptive paragraph needs to be provided. In this paper, we investigate a way to mitigate this heavy burden and propose to leverage captions of visually similar images as auxiliary context. Our model successfully fetches visually relevant images and combines noun and verb phrases from their captions to generate coherent descriptions. To this end, we use a generator and discriminator design, together with an attention-based fusion technique, to incorporate image captions as context in the video caption generation process. Experiments on the challenging ActivityNet Captions dataset demonstrate that our proposed approach produces more accurate and more diverse video descriptions than a strong baseline, as measured by METEOR, BLEU and CIDEr-D metrics and qualitative evaluations.
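The abstract describes fusing captions of retrieved, visually similar images with video features via attention before generating the description. The snippet below is a minimal, hypothetical PyTorch sketch of that fusion idea only; the module names, dimensions, and architecture choices are illustrative assumptions, not the authors' implementation, and the generator/discriminator training described in the abstract is not shown.

```python
# Hypothetical sketch: attention-based fusion of auxiliary image-caption context
# with video event features, as a conditioning signal for a caption generator.
import torch
import torch.nn as nn


class CaptionContextFusion(nn.Module):
    """Attend over encoded auxiliary image captions, conditioned on video features."""

    def __init__(self, video_dim=512, text_dim=300, hidden_dim=512):
        super().__init__()
        # Encode word embeddings of the retrieved image captions.
        self.caption_encoder = nn.GRU(text_dim, hidden_dim, batch_first=True)
        # Video features act as queries over the encoded caption tokens.
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads=4, batch_first=True)
        self.video_proj = nn.Linear(video_dim, hidden_dim)
        self.fuse = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, video_feats, caption_embs):
        # video_feats:  (B, T, video_dim)  event-level video features
        # caption_embs: (B, N, text_dim)   embeddings of retrieved caption tokens
        ctx, _ = self.caption_encoder(caption_embs)      # (B, N, H)
        query = self.video_proj(video_feats)             # (B, T, H)
        attended, _ = self.attn(query, ctx, ctx)         # (B, T, H)
        fused = torch.tanh(self.fuse(torch.cat([query, attended], dim=-1)))
        return fused  # context passed on to the caption generator (not shown)


if __name__ == "__main__":
    model = CaptionContextFusion()
    video = torch.randn(2, 8, 512)      # 2 events, 8 temporal segments each
    captions = torch.randn(2, 20, 300)  # 20 retrieved caption tokens per event
    print(model(video, captions).shape)  # torch.Size([2, 8, 512])
```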
Year
2021
DOI
10.1016/j.patrec.2021.02.009
Venue
PATTERN RECOGNITION LETTERS
Keywords
Video captioning, Adversarial training, Attention
DocType
Journal
Volume
146
ISSN
0167-8655
Citations
0
PageRank
0.34
References
0
Authors
6
Name                         Order   Citations   PageRank
Emre Boran                   1       0           0.34
Aykut Erdem                  2       0           0.34
Nazli Ikizler-Cinbis         3       476         29.07
Erkut Erdem                  4       573         33.86
Pranava Swaroop Madhyastha   5       24          10.59
Lucia Specia                 6       1217        122.84