Comprehensive Multi-Modal Interactions for Referring Image Segmentation
Kanishk Jain Vineet Gandhi

Abstract
We investigate Referring Image Segmentation (RIS), which outputs a segmentation map corresponding to a given natural language description. Addressing RIS efficiently requires considering the interactions happening across visual and linguistic modalities as well as the interactions within each modality. Existing methods are limited because they either compute different forms of interactions sequentially (leading to error propagation) or ignore intra-modal interactions. We address this limitation by performing all three interactions simultaneously through a Synchronous Multi-Modal Fusion Module (SFM). Moreover, to produce refined segmentation masks, we propose a novel Hierarchical Cross-Modal Aggregation Module (HCAM), where linguistic features facilitate the exchange of contextual information across the visual hierarchy. We present thorough ablation studies and validate our approach's performance on four benchmark datasets, showing considerable performance gains over existing state-of-the-art (SOTA) methods.
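The page does not include code, so the following is only a minimal PyTorch sketch of the kind of synchronous fusion the abstract describes: visual and word-level linguistic tokens are concatenated and a single self-attention pass computes intra-visual, intra-linguistic, and cross-modal interactions jointly rather than sequentially. The class name `SynchronousFusion`, the argument names, and the use of `nn.MultiheadAttention` are illustrative assumptions, not the authors' SFM implementation.

```python
# Illustrative sketch only -- NOT the authors' SFM implementation.
# Assumption: one self-attention pass over the concatenated visual and
# linguistic tokens captures all three interaction types simultaneously.
import torch
import torch.nn as nn


class SynchronousFusion(nn.Module):  # hypothetical name
    def __init__(self, vis_dim, lang_dim, d_model=512, num_heads=8):
        super().__init__()
        self.vis_proj = nn.Linear(vis_dim, d_model)    # project visual features
        self.lang_proj = nn.Linear(lang_dim, d_model)  # project word features
        self.attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, vis_feats, lang_feats):
        # vis_feats:  (B, H*W, vis_dim)  -- flattened spatial grid
        # lang_feats: (B, L, lang_dim)   -- word-level embeddings
        v = self.vis_proj(vis_feats)
        l = self.lang_proj(lang_feats)
        tokens = torch.cat([v, l], dim=1)              # (B, H*W + L, d_model)
        fused, _ = self.attn(tokens, tokens, tokens)   # joint attention in one pass
        fused = self.norm(fused + tokens)              # residual connection
        n_vis = v.shape[1]
        return fused[:, :n_vis], fused[:, n_vis:]      # split back per modality
```

A hierarchical aggregation step in the spirit of HCAM would then combine fused visual features from several backbone stages, with the linguistic features guiding the exchange of context between levels before the final mask prediction.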
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| referring-expression-segmentation-on-refcoco | SHNet | Overall IoU: 65.32, Precision@0.5: 75.18, Precision@0.6: 69.36, Precision@0.7: 61.21, Precision@0.8: 46.16, Precision@0.9: 16.23 |
| referring-expression-segmentation-on-refcoco-3 | SHNet | Overall IoU: 52.75 |
| referring-expression-segmentation-on-refcoco-4 | SHNet | Overall IoU: 58.46 |
| referring-expression-segmentation-on-refcoco-5 | SHNet | Overall IoU: 44.12 |
| referring-expression-segmentation-on-refcocog | SHNet | Overall IoU: 49.90 |
| referring-expression-segmentation-on-referit | SHNet | Overall IoU: 69.19 |
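For reference, Overall IoU is the cumulative intersection divided by the cumulative union over all samples in a split, while Precision@X is the fraction of samples whose per-sample IoU exceeds the threshold X. The sketch below assumes binary (0/1) NumPy masks; the function names are illustrative, not taken from the authors' evaluation code.

```python
# Minimal sketch of the reported metrics, assuming binary (0/1) masks.
import numpy as np


def overall_iou(preds, gts):
    """Cumulative intersection over cumulative union across the whole split."""
    inter = sum(np.logical_and(p, g).sum() for p, g in zip(preds, gts))
    union = sum(np.logical_or(p, g).sum() for p, g in zip(preds, gts))
    return inter / union


def precision_at(preds, gts, threshold=0.5):
    """Fraction of samples whose individual IoU exceeds the threshold."""
    hits = 0
    for p, g in zip(preds, gts):
        iou = np.logical_and(p, g).sum() / max(np.logical_or(p, g).sum(), 1)
        hits += iou > threshold
    return hits / len(preds)
```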