Deeply Interleaved Two-Stream Encoder for Referring Video Segmentation
Guang Feng, Lihe Zhang, Zhiwei Hu, Huchuan Lu

Abstract
Referring video segmentation aims to segment the video object described by a language expression. To address this task, we first design a two-stream encoder that hierarchically extracts CNN-based visual features and transformer-based linguistic features; a vision-language mutual guidance (VLMG) module is inserted into the encoder at multiple stages to promote hierarchical and progressive fusion of the multi-modal features. Compared with existing multi-modal fusion methods, this two-stream encoder accounts for multi-granularity linguistic context and, with the help of VLMG, realizes deep interleaving between the modalities. To promote temporal alignment between frames, we further propose a language-guided multi-scale dynamic filtering (LMDF) module that strengthens temporal coherence: it uses language-guided spatio-temporal features to generate a set of position-specific dynamic filters, which update the features of the current frame more flexibly and effectively. Extensive experiments on four datasets verify the effectiveness of the proposed model.
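The position-specific dynamic filtering idea can be sketched as follows. This is a minimal single-scale NumPy illustration, not the paper's implementation: the filter-generation projection `W_gen`, the softmax normalization of each filter, and the fixed 3×3 neighborhood are all assumptions made for clarity, whereas the actual LMDF module is multi-scale and learned end-to-end.

```python
import numpy as np

def position_specific_dynamic_filtering(feat, guide, W_gen, k=3):
    """Sketch of language-guided dynamic filtering (assumed form).

    feat  : (C, H, W) features of the current frame to be updated.
    guide : (C, H, W) language-guided spatio-temporal features.
    W_gen : (C, k*k) hypothetical projection that generates one k x k
            filter per spatial position from the guide features.
    """
    C, H, W = feat.shape
    pad = k // 2
    # Generate a k*k filter at every spatial position from the guide.
    filters = np.einsum('chw,cf->hwf', guide, W_gen).reshape(H, W, k, k)
    # Softmax-normalize each filter so it acts as local aggregation
    # weights (an assumption; keeps the update a convex combination).
    flat = filters.reshape(H, W, -1)
    flat = np.exp(flat - flat.max(-1, keepdims=True))
    filters = (flat / flat.sum(-1, keepdims=True)).reshape(H, W, k, k)
    # Apply each position's own filter to its local neighborhood.
    padded = np.pad(feat, ((0, 0), (pad, pad), (pad, pad)), mode='edge')
    out = np.empty_like(feat)
    for i in range(H):
        for j in range(W):
            patch = padded[:, i:i + k, j:j + k]            # (C, k, k)
            out[:, i, j] = (patch * filters[i, j]).sum(axis=(1, 2))
    return out
```

Because every spatial position receives its own filter, the update can adapt locally to what the expression refers to, unlike a single convolution kernel shared across the frame.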
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| referring-expression-segmentation-on-a2d | VLIDE | AP: 0.469 IoU mean: 0.598 IoU overall: 0.714 Precision@0.5: 0.702 Precision@0.6: 0.663 Precision@0.7: 0.585 Precision@0.8: 0.428 Precision@0.9: 0.151 |
| referring-expression-segmentation-on-j-hmdb | VLIDE | AP: 0.441 IoU mean: 0.666 IoU overall: 0.68 Precision@0.5: 0.874 Precision@0.6: 0.791 Precision@0.7: 0.586 Precision@0.8: 0.182 Precision@0.9: 0.30 |
| referring-expression-segmentation-on-refer-1 | VLIDE | F: 50.67 J: 48.44 J&F: 49.56 |