Title
Fused GRU with semantic-temporal attention for video captioning
Abstract
The encoder-decoder framework has been widely used for video captioning and achieves promising results, and various attention mechanisms have been proposed to further improve performance. While temporal attention determines where to look, semantic attention decides the context. However, the combination of semantic and temporal attention has never been exploited for video captioning. To tackle this issue, we propose an end-to-end pipeline named Fused GRU with Semantic-Temporal Attention (STA-FG), which explicitly incorporates high-level visual concepts into the generation of semantic-temporal attention for video captioning. The encoder network extracts visual features from the videos and predicts their semantic concepts, while the decoder network focuses on efficiently generating coherent sentences from both visual features and semantic concepts. Specifically, the decoder combines the visual and semantic representations and incorporates a semantic-temporal attention mechanism in a fused GRU network to accurately generate sentences for video captioning. We experimentally evaluate our approach on two prevalent datasets, MSVD and MSR-VTT, and the results show that our STA-FG achieves the best current performance on both BLEU and METEOR.
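As a rough illustration of the idea described in the abstract (not the authors' code), the sketch below shows one decoder step that applies temporal attention over frame features and semantic attention over concept embeddings, then fuses both context vectors into a simplified GRU-style gated update. All names, dimensions, and the exact fusion rule are assumptions made for the example.

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attend(query, items):
    """Dot-product attention: weight each item vector by its score with the query."""
    weights = softmax([dot(query, v) for v in items])
    dim = len(items[0])
    return [sum(w * v[i] for w, v in zip(weights, items)) for i in range(dim)]

def decoder_step(hidden, frame_feats, concept_embs):
    """One hypothetical STA-FG-style decoder step: temporal attention over
    frames, semantic attention over concepts, contexts fused into a gated
    update (a simplification standing in for a full GRU cell)."""
    temporal_ctx = attend(hidden, frame_feats)   # "where to look"
    semantic_ctx = attend(hidden, concept_embs)  # "which concepts matter"
    fused = [t + s for t, s in zip(temporal_ctx, semantic_ctx)]
    # update gate z interpolates between old hidden state and new candidate
    z = [1.0 / (1.0 + math.exp(-(h + f))) for h, f in zip(hidden, fused)]
    return [zi * hi + (1.0 - zi) * math.tanh(fi)
            for zi, hi, fi in zip(z, hidden, fused)]

# Toy example: 3-dim hidden state, two frame features, two concept embeddings.
h = decoder_step([0.1, -0.2, 0.3],
                 [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]],
                 [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5]])
```

In a real model the attention scores would be learned bilinear or MLP scorers and the fused context would enter each GRU gate; this sketch only conveys how the two attention streams combine before the recurrent update.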
Year
2020
DOI
10.1016/j.neucom.2018.06.096
Venue
Neurocomputing
Keywords
Video captioning, GRU, Encoder-decoder, Attention mechanism
DocType
Journal
Volume
395
ISSN
0925-2312
Citations
4
PageRank
0.42
References
0
Authors
4
Name, Order, Citations, PageRank
Lianli Gao, 1, 55042.85
Xuanhan Wang, 2, 1035.79
Jingkuan Song, 3, 197077.76
Yang Li, 4, 659125.00