MultiMAE: Multi-modal Multi-task Masked Autoencoders

Roman Bachmann, David Mizrahi, Andrei Atanov, Amir Zamir


Abstract

We propose a pre-training strategy called Multi-modal Multi-task Masked Autoencoders (MultiMAE). It differs from standard Masked Autoencoding in two key aspects: (i) it can optionally accept additional modalities of information in the input besides the RGB image (hence "multi-modal"), and (ii) its training objective accordingly includes predicting multiple outputs besides the RGB image (hence "multi-task"). We make use of masking (across image patches and input modalities) to make training MultiMAE tractable as well as to ensure that cross-modality predictive coding is indeed learned by the network. We show that this pre-training strategy leads to a flexible, simple, and efficient framework with improved transfer results to downstream tasks. In particular, the same exact pre-trained network can be flexibly used when additional information besides RGB images is available or when no information other than RGB is available, in all configurations yielding results competitive with or significantly better than the baselines. To avoid needing training datasets with multiple modalities and tasks, we train MultiMAE entirely using pseudo labeling, which makes the framework widely applicable to any RGB dataset. The experiments are performed on multiple transfer tasks (image classification, semantic segmentation, depth estimation) and datasets (ImageNet, ADE20K, Taskonomy, Hypersim, NYUv2). The results show an intriguingly impressive capability by the model in cross-modal/task predictive coding and transfer.
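The masking described in the abstract is the core of the method: only a small subset of patch tokens, drawn jointly across all input modalities, is passed to a shared encoder, and the training objective is to reconstruct the masked patches of every modality. The sketch below illustrates how such multi-modal token sampling could be set up; the function name, tensor shapes, and the Dirichlet-based split of the visible-token budget are illustrative assumptions for this page, not the official EPFL-VILAB/MultiMAE API.

```python
import torch

def sample_visible_tokens(tokens_per_modality, num_visible, alpha=1.0, generator=None):
    """Split a global budget of visible (unmasked) tokens across modalities.

    tokens_per_modality: dict like {"rgb": 196, "depth": 196, "semseg": 196}
    num_visible:         total number of tokens the encoder will see
    alpha:               Dirichlet concentration; larger values spread the
                         budget more evenly across modalities
    Returns a dict mapping modality -> boolean mask (True = visible token).
    """
    modalities = list(tokens_per_modality.keys())
    # Sample per-modality shares of the visible-token budget.
    shares = torch.distributions.Dirichlet(
        torch.full((len(modalities),), float(alpha))
    ).sample()
    budgets = (shares * num_visible).round().long()
    # Fix rounding so the budgets sum (approximately) to num_visible.
    budgets[0] += num_visible - budgets.sum()

    masks = {}
    for mod, budget in zip(modalities, budgets):
        n = tokens_per_modality[mod]
        k = int(budget.clamp(0, n))
        perm = torch.randperm(n, generator=generator)
        mask = torch.zeros(n, dtype=torch.bool)
        mask[perm[:k]] = True  # randomly chosen patches stay visible
        masks[mod] = mask
    return masks

# Example: three pseudo-labeled modalities of 14 x 14 = 196 patches each,
# with only 98 visible tokens in total (1/6 of all patches).
masks = sample_visible_tokens({"rgb": 196, "depth": 196, "semseg": 196}, num_visible=98)
print({m: int(v.sum()) for m, v in masks.items()})
```

In a full pre-training loop, the visible tokens selected this way would be concatenated, encoded by a shared ViT, and decoded by per-modality heads that predict the masked patches; the point of the sketch is only the cross-modality masking step.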

Code Repositories

EPFL-VILAB/MultiMAE (official, PyTorch)

Benchmarks

Benchmark                              Methodology       Metrics
semantic-segmentation-on-ade20k        MultiMAE (ViT-B)  Validation mIoU: 46.2
semantic-segmentation-on-ade20k-val    MultiMAE (ViT-B)  mIoU: 46.2
semantic-segmentation-on-hypersim      DINO (ViT-B)      mIoU: 32.5
semantic-segmentation-on-hypersim      MoCo-v3 (ViT-B)   mIoU: 31.7
semantic-segmentation-on-hypersim      MultiMAE (ViT-B)  mIoU: 37.0
semantic-segmentation-on-hypersim      MAE (ViT-B)       mIoU: 36.5
semantic-segmentation-on-nyu-depth-v2  MultiMAE (ViT-B)  Mean IoU: 56.0%
