MixMAE: Mixed and Masked Autoencoder for Efficient Pretraining of Hierarchical Vision Transformers

Jihao Liu Xin Huang Jinliang Zheng Yu Liu Hongsheng Li

Abstract

In this paper, we propose Mixed and Masked AutoEncoder (MixMAE), a simple but efficient pretraining method that is applicable to various hierarchical Vision Transformers. Existing masked image modeling (MIM) methods for hierarchical Vision Transformers replace a random subset of input tokens with a special [MASK] symbol and aim to reconstruct the original image tokens from the corrupted image. However, we find that using the [MASK] symbol greatly slows down training and causes pretraining-finetuning inconsistency, due to the large masking ratio (e.g., 60% in SimMIM). On the other hand, MAE does not introduce [MASK] tokens at its encoder at all, but it is not applicable to hierarchical Vision Transformers. To solve this issue and accelerate the pretraining of hierarchical models, we replace the masked tokens of one image with the visible tokens of another image, i.e., we create a mixed image. We then conduct dual reconstruction to recover both original images from the mixed input, which significantly improves efficiency. While MixMAE can be applied to various hierarchical Transformers, this paper explores using Swin Transformer with a large window size and scales it up to a huge model size (600M parameters). Empirical results demonstrate that MixMAE learns high-quality visual representations efficiently. Notably, MixMAE with Swin-B/W14 achieves 85.1% top-1 accuracy on ImageNet-1K after pretraining for 600 epochs. Moreover, its transfer performance on 6 other datasets shows that MixMAE has a better FLOPs/performance tradeoff than previous popular MIM methods. Code is available at https://github.com/Sense-X/MixMIM.
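To make the mixing and dual-reconstruction idea concrete, below is a minimal PyTorch sketch of the two steps described in the abstract. It assumes patch-embedded inputs of shape (B, N, D) and a 50% mixing ratio (so each image contributes half of the tokens); the function and variable names (mix_tokens, dual_reconstruction_loss) are illustrative and do not reflect the official Sense-X/MixMIM implementation.

```python
# Minimal sketch of MixMAE-style mixing and dual reconstruction.
# Assumption: tokens_a / tokens_b are (B, N, D) patch embeddings of two
# images; targets are the quantities the decoder is trained to recover.
import torch
import torch.nn.functional as F


def mix_tokens(tokens_a, tokens_b, mask_ratio=0.5):
    """Replace the masked tokens of image A with the visible tokens of
    image B, producing a single mixed token sequence per image pair.

    Returns the mixed tokens and a boolean mask that is True at the
    positions taken from image B.
    """
    B, N, _ = tokens_a.shape
    num_masked = int(N * mask_ratio)
    # Random per-sample ranking of positions; the `num_masked` positions
    # with the lowest rank are taken from image B, the rest from image A.
    noise = torch.rand(B, N, device=tokens_a.device)
    ranks = noise.argsort(dim=1).argsort(dim=1)
    mask = ranks < num_masked                       # (B, N), bool
    mixed = torch.where(mask.unsqueeze(-1), tokens_b, tokens_a)
    return mixed, mask


def dual_reconstruction_loss(pred_a, pred_b, target_a, target_b, mask):
    """Reconstruct both originals from the single mixed input.

    As in MAE, each image's loss is computed only on its masked
    positions: for image A these are the positions filled from image B
    (mask), and for image B the complementary positions (~mask).
    """
    per_token_a = F.mse_loss(pred_a, target_a, reduction="none").mean(-1)
    per_token_b = F.mse_loss(pred_b, target_b, reduction="none").mean(-1)
    loss_a = (per_token_a * mask).sum() / mask.sum()
    loss_b = (per_token_b * ~mask).sum() / (~mask).sum()
    return loss_a + loss_b
```

Because the encoder sees one mixed sequence with no [MASK] tokens, a single forward pass supervises two reconstruction targets, which is where the efficiency gain over [MASK]-padded inputs comes from.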

Code Repositories

sense-x/mixmim (official, PyTorch)

Benchmarks

Benchmark | Methodology | Metrics
image-classification-on-imagenet | MixMIM-B | Top-1 Accuracy: 85.1%; Params: 88M; GFLOPs: 16.3
image-classification-on-inaturalist-2018 | MixMIM-L | Top-1 Accuracy: 80.3%
image-classification-on-inaturalist-2018 | MixMIM-B | Top-1 Accuracy: 77.5%
image-classification-on-inaturalist-2019 | MixMIM-L | Top-1 Accuracy: 83.9%
image-classification-on-places205 | MixMIM-L | Top-1 Accuracy: 69.3%
image-classification-on-places205 | MixMIM-B | Top-1 Accuracy: 68.3%
image-classification-on-places365 | MixMIM-L (ViT-L) | Top-1 Accuracy: 60.3%
image-classification-on-places365 | MixMIM-B (ViT) | Top-1 Accuracy: 58.9%
object-detection-on-coco-2017 | MixMIM-B | mAP: 52.2
object-detection-on-coco-2017 | MixMIM-L | mAP: 54.1
semantic-segmentation-on-ade20k-val | MixMIM-B | mIoU: 50.3
semantic-segmentation-on-ade20k-val | MixMIM-L | mIoU: 53.8
