MaMMUT: A Simple Architecture for Joint Learning for MultiModal Tasks

Abstract

The development of language models has moved from encoder-decoder to decoder-only designs. In addition, we observe that the two most popular multimodal tasks, the generative and contrastive tasks, are nontrivial to accommodate in one architecture, and further need adaptations for downstream tasks. We propose a novel paradigm of training with a decoder-only model for multimodal tasks, which is surprisingly effective for jointly learning these disparate vision-language tasks. This is done with a simple model, called MaMMUT. It consists of a single vision encoder and a text decoder, and is able to accommodate contrastive and generative learning via a novel two-pass approach on the text decoder. We demonstrate that joint learning of these diverse objectives is simple, effective, and maximizes the weight-sharing of the model across these tasks. Furthermore, the same architecture enables straightforward extensions to open-vocabulary object detection and video-language tasks. The model tackles a diverse range of tasks while being modest in capacity. Our model achieves the state of the art on image-text and text-image retrieval, video question answering, and open-vocabulary detection tasks, outperforming much larger and more extensively trained foundational models. It shows very competitive results on VQA and video captioning, especially considering its capacity. Ablations confirm the flexibility and advantages of our approach.

Code Repositories

- lucidrains/mammut-pytorch (PyTorch) — mentioned in GitHub

Benchmarks

| Benchmark | Methodology | Metrics |
| --- | --- | --- |
| cross-modal-retrieval-on-coco-2014 | MaMMUT (ours) | Image-to-text R@1: 70.7; R@5: 89.1; R@10: 93.7 |
| image-retrieval-on-flickr30k | MaMMUT (ours) | Image-to-text R@1: 94.9; R@5: 99.5; R@10: 99.9; Recall@1: 82.5; Recall@5: 96; Recall@10: 98 |
| question-answering-on-coco-visual-question | MaMMUT (2B) | Test: 80.8 |
| video-captioning-on-msr-vtt-1 | MaMMUT (ours) | CIDEr: 73.6 |
| video-captioning-on-msvd-1 | MaMMUT | CIDEr: 195.6 |
| visual-question-answering-on-coco-visual-5 | MaMMUT (2B) | Percentage correct: 80.7 |
| visual-question-answering-on-msrvtt-qa-1 | MaMMUT | Accuracy: 0.495 |
| visual-question-answering-on-msvd-qa-1 | MaMMUT (ours) | Accuracy: 0.602 |
