
End-to-End Referring Video Object Segmentation with Multimodal Transformers

Adam Botach, Evgenii Zheltonozhskii, Chaim Baskin

Abstract

The referring video object segmentation task (RVOS) involves segmentation of a text-referred object instance in the frames of a given video. Due to the complex nature of this multimodal task, which combines text reasoning, video understanding, instance segmentation and tracking, existing approaches typically rely on sophisticated pipelines in order to tackle it. In this paper, we propose a simple Transformer-based approach to RVOS. Our framework, termed Multimodal Tracking Transformer (MTTR), models the RVOS task as a sequence prediction problem. Following recent advancements in computer vision and natural language processing, MTTR is based on the realization that video and text can be processed together effectively and elegantly by a single multimodal Transformer model. MTTR is end-to-end trainable, free of text-related inductive bias components and requires no additional mask-refinement post-processing steps. As such, it simplifies the RVOS pipeline considerably compared to existing methods. Evaluation on standard benchmarks reveals that MTTR significantly outperforms previous art across multiple metrics. In particular, MTTR shows impressive +5.7 and +5.0 mAP gains on the A2D-Sentences and JHMDB-Sentences datasets respectively, while processing 76 frames per second. In addition, we report strong results on the public validation set of Refer-YouTube-VOS, a more challenging RVOS dataset that has yet to receive the attention of researchers. The code to reproduce our experiments is available at https://github.com/mttr2021/MTTR
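The core idea described above, processing flattened video features and text token embeddings together in a single Transformer so self-attention can relate words to video locations directly, can be illustrated in a few lines of PyTorch. The sketch below is a simplified, hypothetical illustration (the shapes, dimensions, and use of a plain nn.TransformerEncoder are assumptions), not the authors' MTTR implementation:

```python
# Minimal sketch of joint video-text encoding with one Transformer.
# NOT the authors' MTTR code; module choices and shapes are assumptions.
import torch
import torch.nn as nn

class MultimodalEncoder(nn.Module):
    def __init__(self, d_model=256, nhead=8, num_layers=3):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)

    def forward(self, video_feats, text_feats):
        # video_feats: (B, T*H*W, d_model) flattened spatio-temporal features
        # text_feats:  (B, L, d_model) text token embeddings
        # Concatenating both modalities into one sequence lets self-attention
        # relate text tokens and video locations in a single model.
        tokens = torch.cat([video_feats, text_feats], dim=1)
        return self.encoder(tokens)

# Toy usage with random tensors standing in for backbone outputs.
enc = MultimodalEncoder()
video = torch.randn(2, 8 * 10 * 10, 256)  # 8 frames of 10x10 feature maps
text = torch.randn(2, 12, 256)            # 12 text tokens
fused = enc(video, text)                  # (2, 812, 256)
print(fused.shape)
```

In MTTR itself, the fused features then drive per-frame object queries whose outputs are decoded into segmentation masks as a sequence prediction problem; the sketch covers only the joint-encoding step.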

Code Repositories

mttr2021/MTTR (official, PyTorch)

Benchmarks

Benchmark | Methodology | Metrics

referring-expression-segmentation-on-a2d | MTTR (w=8)
AP: 0.447
IoU mean: 0.618
IoU overall: 0.702
Precision@0.5: 0.721
Precision@0.6: 0.684
Precision@0.7: 0.607
Precision@0.8: 0.456
Precision@0.9: 0.164

referring-expression-segmentation-on-a2d | MTTR (w=10)
AP: 0.461
IoU mean: 0.64
IoU overall: 0.72
Precision@0.5: 0.754
Precision@0.6: 0.712
Precision@0.7: 0.638
Precision@0.8: 0.485
Precision@0.9: 0.169

referring-expression-segmentation-on-j-hmdb | MTTR (w=10)
AP: 0.392
IoU mean: 0.698
IoU overall: 0.701
Precision@0.5: 0.939
Precision@0.6: 0.852
Precision@0.7: 0.616
Precision@0.8: 0.166
Precision@0.9: 0.001

referring-expression-segmentation-on-j-hmdb | MTTR (w=8)
AP: 0.366
IoU mean: 0.679
IoU overall: 0.674
Precision@0.5: 0.91
Precision@0.6: 0.815
Precision@0.7: 0.57
Precision@0.8: 0.144
Precision@0.9: 0.001

referring-expression-segmentation-on-refer-1 | MTTR (w=12)
F: 56.64
J: 54.00
J&F: 55.32

referring-video-object-segmentation-on-mevis | MTTR
F: 31.2
J: 28.8
J&F: 30.0

referring-video-object-segmentation-on-revos | MTTR (Video-Swin-T)
F: 25.9
J: 25.1
J&F: 25.5
R: 5.6
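
For reference, the IoU and Precision@K numbers above are computed from predicted and ground-truth segmentation masks. The sketch below shows the usual way such metrics are derived; the helper names and toy values are illustrative assumptions, not the benchmarks' official evaluation code:

```python
# Illustrative computation of mask IoU and Precision@K, as commonly
# defined for referring segmentation benchmarks. Not official eval code.
import numpy as np

def mask_iou(pred, gt):
    """Intersection-over-union between two boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union > 0 else 0.0

def precision_at(ious, thresh):
    """Fraction of samples whose mask IoU exceeds the threshold."""
    return float((np.asarray(ious) > thresh).mean())

# Toy example: two 4x4 masks overlapping in a single pixel.
pred = np.zeros((4, 4), dtype=bool); pred[:2, :2] = True
gt = np.zeros((4, 4), dtype=bool); gt[1:3, 1:3] = True
print(mask_iou(pred, gt))  # 1 intersecting pixel / 7 union pixels ~ 0.14

# Precision@K over a batch of per-sample IoUs (toy values).
ious = [0.83, 0.55, 0.91, 0.40]
for t in (0.5, 0.6, 0.7, 0.8, 0.9):
    print(f"Precision@{t}: {precision_at(ious, t):.2f}")
```

The J and F columns on the Refer-YouTube-VOS-style benchmarks follow the same spirit: J is region similarity (mean mask IoU) and F is contour accuracy, with J&F their average.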
