Title
End-to-End Modeling via Information Tree for One-Shot Natural Language Spatial Video Grounding
Abstract
Natural language spatial video grounding aims to detect the relevant objects in video frames, given a descriptive sentence as the query. Despite great advances, most existing methods rely on dense video frame annotations, which require a tremendous amount of human effort. To achieve effective grounding under a limited annotation budget, we investigate one-shot video grounding and learn to ground natural language in all video frames, with only a single frame labeled, in an end-to-end manner. One major challenge of end-to-end one-shot video grounding is the existence of video frames that are irrelevant to either the language query or the labeled frame. Another challenge relates to the limited supervision, which might result in ineffective representation learning. To address these challenges, we design an end-to-end model via Information Tree for One-Shot video grounding (IT-OS). Its key module, the information tree, can eliminate the interference of irrelevant frames through branch search and branch cropping techniques. In addition, several self-supervised tasks are proposed based on the information tree to improve representation learning under insufficient labeling. Experiments on the benchmark dataset demonstrate the effectiveness of our model.
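The branch search and branch cropping mentioned in the abstract can be pictured with a short sketch. Below is a minimal, hypothetical PyTorch illustration, not the authors' implementation: it forms an implicit binary tree over per-frame features by recursive splitting, scores each branch against the query embedding, and crops branches whose score falls below a threshold. The function names, the cosine-similarity scoring, and the fixed threshold are all assumptions made for illustration.

```python
# Illustrative sketch only: an assumed reading of "branch search" and
# "branch cropping" over a binary tree of frame features. Not IT-OS code.
import torch
import torch.nn.functional as F

def branch_score(branch_feats: torch.Tensor, query_feat: torch.Tensor) -> torch.Tensor:
    """Relevance of a branch (mean of its frame features) to the query."""
    branch_repr = branch_feats.mean(dim=0)
    return F.cosine_similarity(branch_repr, query_feat, dim=0)

def search_and_crop(frame_feats: torch.Tensor,
                    query_feat: torch.Tensor,
                    min_frames: int = 4,
                    threshold: float = 0.3) -> torch.Tensor:
    """Recursively split frames into two branches and drop low-scoring ones.

    Returns the concatenated features of the frames that survive cropping.
    """
    if frame_feats.size(0) <= min_frames:  # leaf: too small to split further
        return frame_feats
    mid = frame_feats.size(0) // 2
    kept = []
    for branch in (frame_feats[:mid], frame_feats[mid:]):
        if branch_score(branch, query_feat) >= threshold:  # branch search
            kept.append(search_and_crop(branch, query_feat, min_frames, threshold))
        # else: the branch is cropped, i.e. its frames are treated as irrelevant
    # Fallback (a design choice of this sketch): if everything was cropped,
    # keep a small prefix so downstream grounding still has some input.
    return torch.cat(kept, dim=0) if kept else frame_feats[:min_frames]

# Toy usage: 32 frames and one query, both embedded in a 256-d space.
frames = torch.randn(32, 256)
query = torch.randn(256)
relevant = search_and_crop(frames, query)
print(relevant.shape)  # (k, 256), with k <= 32 surviving frames
```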
Year
2022
DOI
10.18653/v1/2022.acl-long.596
Venue
PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1: (LONG PAPERS)
DocType
Conference
Volume
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Citations
0
PageRank
0.34
References
0
Authors
12
Name            Order  Citations  PageRank
Mengze Li       1      0          0.34
Tianbao Wang    2      0          0.68
Haoyu Zhang     3      0          0.68
Shengyu Zhang   4      329        42.48
Zhou Zhao       5      773        90.87
Jiaxu Miao      6      0          1.69
Wenqiao Zhang   7      0          0.68
Wenming Tan     8      1          3.74
Jin Wang        9      2          3.74
Peng Wang       10     194        29.38
Shiliang Pu     11     187        42.65
Fei Wu          12     2209       153.88