Title: Language-Driven Temporal Activity Localization: A Semantic Matching Reinforcement Learning Model
Abstract: Current studies on action detection in untrimmed videos are mostly designed for action classes, where an action is described at the word level, such as jumping, tumbling, or swinging. This paper focuses on the rarely investigated problem of localizing an activity via a sentence query, which is more challenging and practical. Since current methods are generally time-consuming due to their dense frame-processing manner, we propose a recurrent neural network based reinforcement learning model that selectively observes a sequence of frames and associates the given sentence with video content in a matching-based manner. However, directly matching sentences with video content performs poorly due to the large visual-semantic discrepancy. We therefore extend the method to a semantic matching reinforcement learning (SM-RL) model that extracts semantic concepts of videos and fuses them with global context features. Extensive experiments on three benchmark datasets, TACoS, Charades-STA and DiDeMo, show that our method achieves state-of-the-art performance with a high detection speed, demonstrating both the effectiveness and the efficiency of our approach.
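The abstract describes an RNN-based agent that, at each step, fuses the sentence embedding with global video context, the currently observed frame feature, and detected semantic concepts, then decides where to observe next and how well the query matches. The following is a minimal sketch of one such observation step; it is an illustration only, not the authors' released code, and all module names, feature dimensions, and the GRU cell / three-way action space (move backward, move forward, stop) are assumptions.

import torch
import torch.nn as nn

class SMRLStep(nn.Module):
    # Hypothetical sketch (not the authors' implementation): one observation
    # step of an SM-RL-style agent. It fuses the sentence embedding, global
    # video context, the currently observed frame feature, and semantic
    # concept features, updates a recurrent state, and outputs (i) a policy
    # over observation actions and (ii) a sentence-video matching score.
    def __init__(self, frame_dim=2048, sent_dim=512, concept_dim=300,
                 hidden_dim=512, num_actions=3):
        super().__init__()
        fused_dim = 2 * frame_dim + sent_dim + concept_dim
        self.fuse = nn.Sequential(nn.Linear(fused_dim, hidden_dim), nn.ReLU())
        self.gru = nn.GRUCell(hidden_dim, hidden_dim)
        self.policy_head = nn.Linear(hidden_dim, num_actions)  # e.g. move backward / forward / stop
        self.match_head = nn.Linear(hidden_dim, 1)             # how well the observed content matches the query

    def forward(self, frame_feat, global_ctx, sent_emb, concept_emb, h):
        # Concatenate all modalities, project, and update the recurrent state.
        x = torch.cat([frame_feat, global_ctx, sent_emb, concept_emb], dim=-1)
        h = self.gru(self.fuse(x), h)
        action_logits = self.policy_head(h)       # would be sampled (e.g. REINFORCE) during training
        match_score = torch.sigmoid(self.match_head(h))
        return action_logits, match_score, h

# Toy usage with random features (batch of 2 query-video pairs).
agent = SMRLStep()
h = torch.zeros(2, 512)
action_logits, match_score, h = agent(
    torch.randn(2, 2048), torch.randn(2, 2048),
    torch.randn(2, 512), torch.randn(2, 300), h)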
Year: 2019
DOI: 10.1109/CVPR.2019.00042
Venue: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Keywords: Video Analytics, Vision + Language
Field: Pattern recognition, Computer science, Natural language processing, Artificial intelligence, Reinforcement learning, Semantic matching
DocType: Conference
ISSN: 1063-6919
ISBN: 978-1-7281-3294-5
Citations: 11
PageRank: 0.49
References: 3
Authors: 3
Name            Order   Citations   PageRank
Weining Wang    1       1111        1.74
Yan Huang       2       2262        7.65
Liang Wang      3       43172       43.28