Title
Spatio-Temporal Dynamics And Semantic Attribute Enriched Visual Encoding For Video Captioning
Abstract
Automatic generation of video captions is a fundamental challenge in computer vision. Recent techniques typically employ a combination of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) for video captioning. These methods mainly focus on tailoring sequence learning through RNNs for better caption generation, whereas off-the-shelf visual features are borrowed from CNNs. We argue that careful design of visual features for this task is equally important, and present a visual feature encoding technique to generate semantically rich captions using Gated Recurrent Units (GRUs). Our method embeds rich temporal dynamics in visual features by hierarchically applying Short Fourier Transform to CNN features of the whole video. It additionally derives high-level semantics from an object detector to enrich the representation with spatial dynamics of the detected objects. The final representation is projected to a compact space and fed to a language model. By learning a relatively simple language model comprising two GRU layers, we establish a new state of the art on the MSVD and MSR-VTT datasets for the METEOR and ROUGE-L metrics.
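The abstract outlines the visual encoding pipeline: frame-level CNN features are summarized by hierarchically applying a Short Fourier Transform over time, enriched with object-detector semantics, and projected to a compact code that feeds a two-layer GRU language model. The snippet below is a minimal NumPy sketch of that idea, not the authors' implementation; the feature dimensions, number of hierarchy levels, number of retained Fourier coefficients, and the random projection are illustrative assumptions.

# Minimal sketch (assumed details, not the authors' code) of a hierarchical
# Fourier-based temporal encoding of CNN features, concatenated with
# object-detector semantics and projected to a compact visual code.
import numpy as np

def temporal_fourier(features, num_coeffs=4):
    """FFT along the temporal axis; keep magnitudes of the first few low-frequency coefficients."""
    spectrum = np.fft.rfft(features, axis=0)            # shape: (freq_bins, feat_dim)
    return np.abs(spectrum[:num_coeffs]).reshape(-1)    # flatten to a single vector

def hierarchical_encoding(frame_features, levels=3, num_coeffs=4):
    """Summarize the whole clip and progressively finer temporal segments, then concatenate."""
    encodings = []
    for level in range(levels):
        segments = np.array_split(frame_features, 2 ** level, axis=0)
        for seg in segments:
            encodings.append(temporal_fourier(seg, num_coeffs))
    return np.concatenate(encodings)

rng = np.random.default_rng(0)
cnn_features = rng.standard_normal((120, 2048))   # 120 frames of 2048-D CNN features (illustrative)
object_semantics = rng.random(300)                # e.g. detector class/attribute scores (illustrative)

visual_code = np.concatenate([hierarchical_encoding(cnn_features), object_semantics])
projection = rng.standard_normal((512, visual_code.size)) * 0.01   # learned jointly in practice
compact_code = projection @ visual_code            # compact representation fed to the GRU language model
print(compact_code.shape)                          # (512,)

A GRU decoder would then condition on compact_code at each step to generate the caption word by word; that part is omitted here.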
Year
2019
DOI
10.1109/CVPR.2019.01277
Venue
2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019)
Field
Closed captioning, Pattern recognition, Computer science, Convolutional neural network, Artificial intelligence, Artificial neural network, Sequence learning, Recursion, Semantics, Language model, Encoding (memory)
DocType
Journal
Volume
abs/1902.10322
ISSN
1063-6919
Citations
9
PageRank
0.45
References
0
Authors
5
Name                      Order  Citations  PageRank
Nayyer Aafaq              1      13         0.84
Naveed Akhtar             2      23         3.98
Wei Liu                   3      14         4.57
Syed Zulqarnain Gilani    4      26         3.69
A. Mian                   5      1679       84.89