Architecture-Agnostic Masked Image Modeling -- From ViT back to CNN

Siyuan Li, Di Wu, Fang Wu, Zelin Zang, Stan Z. Li

Abstract

Masked image modeling (MIM), an emerging self-supervised pre-training method, has shown impressive success across numerous downstream vision tasks with Vision Transformers. Its underlying idea is simple: a portion of the input image is masked out and then reconstructed via a pretext task. However, the working principle behind MIM is not well explained, and previous studies insist that MIM primarily works for the Transformer family but is incompatible with CNNs. In this work, we observe that MIM essentially teaches the model to learn better middle-order interactions among patches for more generalized feature extraction. We then propose an Architecture-Agnostic Masked Image Modeling framework (A$^2$MIM), which is compatible with both Transformers and CNNs in a unified way. Extensive experiments on popular benchmarks show that A$^2$MIM learns better representations without explicit design and endows the backbone model with a stronger capability to transfer to various downstream tasks.
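
To make the masking-and-reconstruction idea concrete, below is a minimal PyTorch sketch: split an image into patches, replace a random subset with a learnable mask token, and train a network to reconstruct the masked patches. The toy encoder, the `patchify` and `random_mask` helpers, and the 60% mask ratio are illustrative assumptions for this sketch, not the authors' A$^2$MIM implementation (see the official repositories below for that).

```python
# Minimal sketch of generic masked image modeling: mask random patches and
# reconstruct them. This is NOT the authors' A^2MIM implementation; the toy
# encoder, helper names, and mask ratio are illustrative assumptions.
import torch
import torch.nn as nn


def patchify(imgs: torch.Tensor, patch: int = 16) -> torch.Tensor:
    """(B, C, H, W) -> (B, N, patch*patch*C) non-overlapping patches."""
    B, C, H, W = imgs.shape
    h, w = H // patch, W // patch
    x = imgs.reshape(B, C, h, patch, w, patch)
    return x.permute(0, 2, 4, 3, 5, 1).reshape(B, h * w, patch * patch * C)


def random_mask(num_patches: int, mask_ratio: float = 0.6) -> torch.Tensor:
    """Boolean mask of shape (num_patches,); True marks a masked patch."""
    num_masked = int(num_patches * mask_ratio)
    perm = torch.randperm(num_patches)
    mask = torch.zeros(num_patches, dtype=torch.bool)
    mask[perm[:num_masked]] = True
    return mask


# Toy per-patch encoder standing in for a ViT or CNN backbone.
dim = 16 * 16 * 3
encoder = nn.Sequential(nn.Linear(dim, 256), nn.GELU(), nn.Linear(256, dim))
mask_token = nn.Parameter(torch.zeros(dim))   # learnable mask token

imgs = torch.randn(2, 3, 224, 224)            # dummy batch
patches = patchify(imgs)                      # (2, 196, 768)
mask = random_mask(patches.shape[1])          # (196,)

# Corrupt the input: masked positions are filled with the mask token.
corrupted = torch.where(mask[None, :, None], mask_token.expand_as(patches), patches)
pred = encoder(corrupted)                     # reconstruct all patches

# Reconstruction loss is computed only on the masked positions.
loss = nn.functional.l1_loss(pred[:, mask], patches[:, mask])
loss.backward()
```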

Code Repositories

Westlake-AI/openmixup (official, PyTorch)
Westlake-AI/A2MIM (official, PyTorch)

Benchmarks

Benchmark | Methodology | Metrics
instance-segmentation-on-coco | A2MIM (ViT-B) | mask AP: 43.5
instance-segmentation-on-coco | A2MIM (ResNet-50 2x) | mask AP: 34.9
object-detection-on-coco | A2MIM (ViT-B) | box mAP: 49.4
object-detection-on-coco | A2MIM (ResNet-50 2x) | box mAP: 39.8
self-supervised-image-classification-on-1 | A2MIM (ResNet-50 RSB-A2) | Top 1 Accuracy: 80.4%
self-supervised-image-classification-on-1 | A2MIM+ (ViT-B) | Top 1 Accuracy: 84.5%
self-supervised-image-classification-on-1 | A2MIM+ (ViT-S) | Top 1 Accuracy: 82.4%
self-supervised-image-classification-on-1 | A2MIM (ViT-B) | Top 1 Accuracy: 84.2%
self-supervised-image-classification-on-1 | A2MIM+ (ResNet-50 RSB-A3) | Top 1 Accuracy: 78.9%
self-supervised-image-classification-on-1 | A2MIM (ResNet-50 RSB-A3) | Top 1 Accuracy: 78.8%
self-supervised-image-classification-on-1 | A2MIM+ (ResNet-50 RSB-A2) | Top 1 Accuracy: 80.5%
self-supervised-image-classification-on-1 | A2MIM (ViT-S) | Top 1 Accuracy: 82.2%
semantic-segmentation-on-ade20k | A2MIM (ResNet-50) | Validation mIoU: 38.3
semantic-segmentation-on-ade20k | A2MIM (ViT-B) | Validation mIoU: 49
