Title
Leveraging Video Descriptions to Learn Video Question Answering
Abstract
We propose a scalable approach to learn video-based question answering (QA): answering free-form natural language questions about the contents of a video. Our approach automatically harvests a large number of videos and descriptions freely available online. A large number of candidate QA pairs are then generated automatically from the descriptions rather than annotated manually. Next, we use these candidate QA pairs to train several video-based QA methods extended from MN (Sukhbaatar et al. 2015), VQA (Antol et al. 2015), SA (Yao et al. 2015), and SS (Venugopalan et al. 2015). To handle imperfect candidate QA pairs, we propose a self-paced learning procedure that iteratively identifies them and mitigates their effects during training. Finally, we evaluate performance on manually generated video-based QA pairs. The results show that our self-paced learning procedure is effective, and the extended SS model outperforms various baselines.
Year
2017
Venue
THIRTY-FIRST AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE
DocType
Conference
Volume
abs/1611.04021
Citations
2
PageRank
0.36
References
0
Authors
6
Name                 Order  Citations  PageRank
Kuo-Hao Zeng         1      22         2.36
Tseng-Hung Chen      2      39         2.04
Ching-Yao Chuang     3      21         2.70
Yuan-Hong Liao       4      55         2.94
Juan Carlos Niebles  5      1485       80.45
Min Sun              6      1083       59.15