Title
HiSA: Hierarchically Semantic Associating for Video Temporal Grounding
Abstract
Video Temporal Grounding (VTG) aims to locate the time interval in a video that is semantically relevant to a language query. Existing VTG methods let the query interact with entangled video features and treat the instances in a dataset independently; intra-video entanglement and inter-video connections are rarely considered, which leads to mismatches between video and language. To this end, we propose a novel method, dubbed Hierarchically Semantic Associating (HiSA), which precisely aligns the video with the language and obtains discriminative representations for subsequent location regression. Specifically, action factors and background factors are disentangled from adjacent video segments, enforcing precise multimodal interaction and alleviating intra-video entanglement. In addition, a cross-guided contrast is carefully designed to capture inter-video connections, which benefits multimodal understanding for locating the time interval. Extensive experiments on three benchmark datasets demonstrate that our approach significantly outperforms state-of-the-art methods. The project page is available at: https://github.com/zhexu1997/HiSA.
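The cross-guided contrast mentioned in the abstract is a form of cross-modal contrastive learning between video and query representations. As an illustrative sketch only (the exact loss, temperature, and pairing strategy are assumptions here, not details taken from the paper), a symmetric InfoNCE-style objective over a batch of paired video and query embeddings can be written as:

```python
import numpy as np

def info_nce(video_emb: np.ndarray, query_emb: np.ndarray,
             temperature: float = 0.07) -> float:
    """Symmetric InfoNCE loss: matched (video, query) pairs are positives,
    all other pairs in the batch serve as negatives. Illustrative only."""
    # L2-normalize embeddings so dot products are cosine similarities.
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    logits = v @ q.T / temperature   # (batch, batch) similarity matrix
    idx = np.arange(len(v))          # positives lie on the diagonal

    def ce(l: np.ndarray) -> float:
        # Row-wise cross-entropy with the diagonal as the target class.
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_prob = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_prob[idx, idx].mean()

    # Average both directions: video-to-query and query-to-video.
    return 0.5 * (ce(logits) + ce(logits.T))
```

Minimizing this loss pulls each video embedding toward its paired query and pushes it away from the other queries in the batch, which is the general mechanism a contrastive objective uses to capture inter-video connections.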
Year
2022
DOI
10.1109/TIP.2022.3191841
Venue
IEEE TRANSACTIONS ON IMAGE PROCESSING
Keywords
Grounding, Feature extraction, Proposals, Task analysis, Semantics, Representation learning, Image segmentation, Video temporal grounding, feature disentanglement, cross-guided contrast
DocType
Journal
Volume
31
Issue
1
ISSN
1057-7149
Citations
0
PageRank
0.34
References
26
Authors
5
Name        Order  Citations  PageRank
Zhe Xu      1      0          0.34
Da Chen     2      0          0.34
Kun Wei     3      12         4.55
Cheng Deng  4      1283       85.48
Hui Xue     5      0          0.34