HiSA: Hierarchically semantic associating for video temporal grounding

Zhe Xu, Da Chen, Kun Wei, Cheng Deng, Hui Xue

Research output: Contribution to journal › Article › peer-review

27 Citations (SciVal)

Abstract

Video Temporal Grounding (VTG) aims to locate the time interval in a video that is semantically relevant to a language query. Existing VTG methods let the query interact with entangled video features and treat the instances in a dataset independently. Because intra-video entanglement and inter-video connections are rarely considered, these methods suffer from mismatches between the video and the language. To this end, we propose a novel method, dubbed Hierarchically Semantic Associating (HiSA), which aims to precisely align the video with the language and obtain discriminative representations for subsequent location regression. Specifically, action factors and background factors are disentangled from adjacent video segments, enforcing precise multimodal interaction and alleviating intra-video entanglement. In addition, a cross-guided contrast is carefully designed to capture inter-video connections, which benefits the multimodal understanding needed to locate the time interval. Extensive experiments on three benchmark datasets demonstrate that our approach significantly outperforms the state-of-the-art methods. The project page is available at: https://github.com/zhexu1997/HiSA .
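The cross-guided contrast described above is a contrastive objective over matched video/query pairs across the dataset. As a rough illustration only (not the authors' implementation; the function name, feature shapes, and temperature value are assumptions), an InfoNCE-style cross-modal loss treats each video's own query as the positive and all other queries in the batch as negatives:

```python
import numpy as np

def cross_modal_contrastive_loss(video_feats, query_feats, temperature=0.1):
    """InfoNCE-style sketch: row i of video_feats and row i of query_feats
    are a matched pair (positive); all other pairings are negatives.
    Shapes: (batch, dim) for both inputs. Hypothetical helper, not HiSA's code."""
    # L2-normalize so the dot product is cosine similarity
    v = video_feats / np.linalg.norm(video_feats, axis=1, keepdims=True)
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    logits = (v @ q.T) / temperature            # (batch, batch) similarity matrix
    # numerically stable log-softmax over each row
    logits = logits - logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # negative log-likelihood of the diagonal (the matched pairs)
    return -np.mean(np.diag(log_prob))
```

Minimizing such a loss pulls each video's representation toward its own query and pushes it away from the other queries in the batch, which is one common way to encode the inter-video connections the abstract refers to.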
Original language: English
Pages (from-to): 5178-5188
Number of pages: 11
Journal: IEEE Transactions on Image Processing
Volume: 31
DOIs
Publication status: Published - 1 Aug 2022
