Semantic Role Aware Correlation Transformer for Text to Video Retrieval
Burak Satar Hongyuan Zhu Xavier Bresson Joo Hwee Lim

Abstract
With the emergence of social media, voluminous video clips are uploaded every day, and retrieving the most relevant visual content for a language query becomes critical. Most approaches learn a joint embedding space for plain textual and visual content without adequately exploiting their intra-modality structures and inter-modality correlations. This paper proposes a novel transformer that explicitly disentangles text and video into the semantic roles of objects, spatial contexts, and temporal contexts, with an attention scheme that learns the intra- and inter-role correlations among the three roles to discover discriminative features for matching at different levels. Preliminary results on the popular YouCook2 benchmark indicate that our approach surpasses a current state-of-the-art method by a large margin on all metrics. It also outperforms two other SOTA methods on two of the metrics.
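The abstract's core idea (disentangled role embeddings with intra-role self-attention followed by inter-role cross-attention) can be sketched in a toy form. This is a minimal illustration under assumed shapes and a plain scaled dot-product attention, not the authors' actual architecture; the function names and the single-pass intra-then-inter ordering are assumptions for clarity.

```python
import numpy as np

def attention(q, k, v):
    """Plain scaled dot-product attention (no heads, no masking)."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    # Row-wise softmax over the keys.
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)
    return w @ v

def role_aware_features(obj, spa, tmp):
    """Toy intra- then inter-role attention over three role embeddings.

    obj, spa, tmp: (n_tokens, d) arrays for objects, spatial contexts,
    and temporal contexts (hypothetical shapes, one modality shown).
    """
    # Intra-role: self-attention within each semantic role.
    intra = [attention(r, r, r) for r in (obj, spa, tmp)]
    # Inter-role: each role attends over the concatenation of all roles,
    # modelling cross-role correlations.
    all_roles = np.concatenate(intra, axis=0)
    return [attention(r, all_roles, all_roles) for r in intra]

rng = np.random.default_rng(0)
obj, spa, tmp = (rng.standard_normal((4, 8)) for _ in range(3))
out = role_aware_features(obj, spa, tmp)
```

Each output retains its role's token count and dimension, so the three refined role features can then be matched against their textual counterparts at the corresponding level.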
Benchmarks
| Benchmark | Methodology | Median Rank (t2v) | R@1 (t2v) | R@5 (t2v) | R@10 (t2v) |
|---|---|---|---|---|---|
| video-retrieval-on-youcook2 | Satar et al. | 77 | 5.3 | 14.5 | 20.8 |