Title
Multilevel Language and Vision Integration for Text-to-Clip Retrieval
Abstract
We address the problem of text-based activity retrieval in video. Given a sentence describing an activity, our task is to retrieve matching clips from an untrimmed video. To capture the inherent structures present in both text and video, we introduce a multilevel model that integrates vision and language features earlier and more tightly than prior work. First, we inject text features early on when generating clip proposals, to help eliminate unlikely clips and thus speed up processing and boost performance. Second, to learn a fine-grained similarity metric for retrieval, we use visual features to modulate the processing of query sentences at the word level in a recurrent neural network. A multi-task loss is also employed by adding query regeneration as an auxiliary task. Our approach significantly outperforms prior work on two challenging benchmarks: Charades-STA and ActivityNet Captions.
Year
2019
Venue
THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE
Field
Computer science, Multilevel model, Recurrent neural network, Natural language processing, Artificial intelligence, Sentence, Machine learning, Speedup, CLIPS
DocType
Conference
Citations
5
PageRank
0.41
References
34
Authors
6
Name                Order  Citations  PageRank
Huijuan Xu          1      239        12.33
Kun He              2      51         2.36
Bryan A. Plummer    3      76         8.15
Leonid Sigal        4      2163       124.33
Stan Sclaroff       5      5631       705.89
Kate Saenko         6      4478       202.48