Abstract
---
Video captioning, the task of describing the content of a video, has seen promising improvements in recent years with sequence-to-sequence models, but accurately learning the temporal and logical dynamics involved in the task remains a challenge, especially given the lack of sufficient annotated data. We improve video captioning by sharing knowledge with two related directed-generation tasks: a temporally-directed unsupervised video prediction task to learn richer context-aware video encoder representations, and a logically-directed language entailment generation task to learn better video-entailing caption decoder representations. For this, we present a many-to-many multi-task learning model that shares parameters across the encoders and decoders of the three tasks. We achieve significant improvements and the new state-of-the-art on several standard video captioning datasets using diverse automatic and human evaluations. We also show mutual multi-task improvements on the entailment generation task.
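The abstract's many-to-many parameter-sharing scheme can be pictured concretely. The PyTorch sketch below is an illustrative assumption, not the authors' code: the module names (`video_encoder`, `language_decoder`, `frame_decoder`, `premise_encoder`), sizes, and single-layer LSTMs are invented for clarity. It only shows which components the three tasks share: the video encoder between captioning and unsupervised video prediction, and the language decoder between captioning and entailment generation.

```python
# Hypothetical sketch of the many-to-many sharing scheme; all names and
# dimensions are assumptions, not the paper's implementation.
import torch
import torch.nn as nn

FRAME_DIM, VOCAB, HIDDEN = 2048, 10000, 512

# Shared video encoder: used by captioning AND video prediction,
# so it learns context-aware video representations.
video_encoder = nn.LSTM(FRAME_DIM, HIDDEN, batch_first=True)

# Shared language decoder: used by captioning AND entailment
# generation, so it learns to emit text entailed by its input.
embed = nn.Embedding(VOCAB, HIDDEN)
language_decoder = nn.LSTM(HIDDEN, HIDDEN, batch_first=True)
vocab_proj = nn.Linear(HIDDEN, VOCAB)

# Task-specific modules (not shared).
frame_decoder = nn.LSTM(HIDDEN, FRAME_DIM, batch_first=True)  # video prediction only
premise_encoder = nn.LSTM(HIDDEN, HIDDEN, batch_first=True)   # entailment generation only

def encode_video(frames):                 # frames: (B, T, FRAME_DIM)
    _, state = video_encoder(frames)
    return state                          # (h, c), each (1, B, HIDDEN)

def decode_text(state, tokens):           # tokens: (B, L) teacher-forced inputs
    out, _ = language_decoder(embed(tokens), state)
    return vocab_proj(out)                # (B, L, VOCAB) logits

def caption_logits(frames, tokens):        # task 1: video -> caption
    return decode_text(encode_video(frames), tokens)

def predicted_frames(frames, horizon):     # task 2: video -> future frames
    h, _ = encode_video(frames)
    step = h[-1].unsqueeze(1).repeat(1, horizon, 1)
    out, _ = frame_decoder(step, None)
    return out                             # (B, horizon, FRAME_DIM)

def entailment_logits(premise, tokens):    # task 3: premise -> entailed hypothesis
    _, state = premise_encoder(embed(premise))
    return decode_text(state, tokens)

# Quick shape check.
frames = torch.randn(2, 16, FRAME_DIM)
caps = torch.randint(0, VOCAB, (2, 12))
print(caption_logits(frames, caps).shape)  # torch.Size([2, 12, 10000])
```

The sketch omits the loss functions and the multi-task training schedule, in which the three tasks are trained in alternation rather than jointly in a single batch.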
Year | DOI | Venue
---|---|---
2017 | 10.18653/v1/P17-1117 | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017), Vol. 1

DocType | Volume | Citations
---|---|---
Conference | abs/1704.07489 | 21

PageRank | References | Authors
---|---|---
0.75 | 28 | 2
Name | Order | Citations | PageRank
---|---|---|---
Ramakanth Pasunuru | 1 | 25 | 3.69
Mohit Bansal | 2 | 871 | 63.19