Abstract |
---|
Dense video captioning is an extremely challenging task, since accurate and coherent description of events in a video requires holistic understanding of video content as well as contextual reasoning about individual events. Most existing approaches handle this problem by first detecting event proposals from a video and then captioning a subset of the proposals. As a result, the generated sentences are prone to redundancy and inconsistency, since they fail to consider the temporal dependencies between events. To tackle this challenge, we propose a novel dense video captioning framework that explicitly models temporal dependencies across events in a video and leverages visual and linguistic context from prior events for coherent storytelling. This objective is achieved by 1) integrating an event sequence generation network that adaptively selects a sequence of event proposals, and 2) feeding the selected proposals to our sequential video captioning network, which is trained by reinforcement learning with two-level rewards, at both the event and episode levels, for better context modeling. The proposed technique achieves outstanding performance on the ActivityNet Captions dataset in most metrics. |
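To make the two-level reward concrete: each event caption can be scored against its own ground-truth segment (event level), while the full caption sequence is scored as one paragraph (episode level), and the two signals are mixed into a single scalar reward per caption for policy-gradient training. The sketch below illustrates only this combination; the function names, the word-overlap scorer standing in for metrics such as METEOR or CIDEr, and the mixing weight `alpha` are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the authors' code) of combining event- and
# episode-level rewards for a sequence of event captions.
from typing import Callable, List

def two_level_rewards(
    captions: List[str],
    references: List[str],
    episode_score: Callable[[List[str], List[str]], float],
    event_score: Callable[[str, str], float],
    alpha: float = 0.5,  # assumed mixing weight between the two levels
) -> List[float]:
    """Return one scalar reward per event caption."""
    # Episode-level reward: score the concatenated story against the
    # concatenated ground-truth paragraph; shared by every event.
    r_episode = episode_score(captions, references)
    rewards = []
    for cap, ref in zip(captions, references):
        # Event-level reward: how well this caption matches its own segment.
        r_event = event_score(cap, ref)
        rewards.append(alpha * r_event + (1.0 - alpha) * r_episode)
    return rewards

# Toy usage with a trivial word-overlap scorer standing in for a real
# captioning metric.
def overlap(hyp: str, ref: str) -> float:
    h, r = set(hyp.split()), set(ref.split())
    return len(h & r) / max(len(r), 1)

caps = ["a man chops onions", "he fries them in a pan"]
refs = ["a man chops an onion", "he fries the onion in a pan"]
print(two_level_rewards(
    caps, refs,
    episode_score=lambda c, r: overlap(" ".join(c), " ".join(r)),
    event_score=overlap,
))
```

Sharing the episode-level term across all captions is what couples the events: a caption that hurts paragraph-level coherence lowers the reward of the entire sequence, not just its own.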
Year | DOI | Venue |
---|---|---|
2019 | 10.1109/CVPR.2019.00675 | 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019) |
Field | DocType | Volume |
---|---|---|
Closed captioning, Storytelling, Computer science, Contextual reasoning, Context model, Artificial intelligence, Event sequence, Machine learning, Reinforcement learning | Journal | abs/1904.03870 |
ISSN | Citations | PageRank |
---|---|---|
1063-6919 | 4 | 0.39 |
References | Authors |
---|---|
0 | 5 |
Name | Order | Citations | PageRank |
---|---|---|---|
Jonghwan Mun | 1 | 30 | 3.24 |
Linjie Yang | 2 | 34 | 6.31 |
Zhou Ren | 3 | 605 | 28.92 |
Ning Xu | 4 | 184 | 20.03 |
Bohyung Han | 5 | 2203 | 94.45 |