Three things everyone should know about Vision Transformers

Hugo Touvron Matthieu Cord Alaaeldin El-Nouby Jakob Verbeek Hervé Jégou

Abstract

After their initial success in natural language processing, transformer architectures have rapidly gained traction in computer vision, providing state-of-the-art results for tasks such as image classification, detection, segmentation, and video analysis. We offer three insights based on simple and easy-to-implement variants of vision transformers. (1) The residual layers of vision transformers, which are usually processed sequentially, can to some extent be processed efficiently in parallel without noticeably affecting accuracy. (2) Fine-tuning only the weights of the attention layers is sufficient to adapt vision transformers to a higher resolution and to other classification tasks. This saves compute, reduces peak memory consumption at fine-tuning time, and allows the majority of weights to be shared across tasks. (3) Adding MLP-based patch pre-processing layers improves BERT-like self-supervised training based on patch masking. We evaluate the impact of these design choices on the ImageNet-1k dataset and confirm our findings on the ImageNet-v2 test set. Transfer performance is measured across six smaller datasets.
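
To make insight (1) concrete, the sketch below evaluates a pair of transformer residual blocks in parallel rather than sequentially: both self-attention branches read the same input and their outputs are summed, and likewise for the two MLP branches. This is a minimal PyTorch sketch; the pre-norm placement, the layer widths, and the use of nn.MultiheadAttention are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class ParallelBlockPair(nn.Module):
    """Two attention branches and two MLP branches applied to the same input
    and summed, instead of being stacked as two sequential blocks.
    Shapes and layer choices are assumptions for illustration."""

    def __init__(self, dim: int, num_heads: int, mlp_ratio: int = 4):
        super().__init__()
        self.norm_attn = nn.ModuleList([nn.LayerNorm(dim) for _ in range(2)])
        self.attn = nn.ModuleList(
            [nn.MultiheadAttention(dim, num_heads, batch_first=True) for _ in range(2)]
        )
        self.norm_mlp = nn.ModuleList([nn.LayerNorm(dim) for _ in range(2)])
        self.mlp = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim * mlp_ratio), nn.GELU(),
                           nn.Linear(dim * mlp_ratio, dim)) for _ in range(2)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Both attention branches see the same input; their outputs are summed.
        x = x + sum(attn(norm(x), norm(x), norm(x), need_weights=False)[0]
                    for attn, norm in zip(self.attn, self.norm_attn))
        # Likewise for the two parallel MLP branches.
        x = x + sum(mlp(norm(x)) for mlp, norm in zip(self.mlp, self.norm_mlp))
        return x
```

In this parameterization, a "ViT-B-18x2"-style model stacks 18 such pairs, whereas "ViT-B-36x1" keeps the usual 36 sequential blocks; the benchmark table below compares both.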
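
Insight (2) amounts to freezing everything except the self-attention weights (plus the task-specific head) when adapting a pretrained model. A minimal sketch, assuming a timm ViT whose attention parameters appear under ".attn." in the parameter names; the model name and class count are illustrative.

```python
import timm  # assumption: the timm library's ViT implementation is used
import torch

# Load a pretrained ViT with a fresh classification head (name is illustrative).
model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=100)

# Freeze all parameters, then train only the self-attention layers and the head.
for name, param in model.named_parameters():
    param.requires_grad = (".attn." in name) or name.startswith("head.")

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable parameters: {trainable / 1e6:.1f}M of {total / 1e6:.1f}M")
```

Because only the attention weights change per task, the patch embedding, MLP, and normalization weights can be shared across all fine-tuned variants.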
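
For insight (3), a patch pre-processing stem can be built from stages whose kernel size equals their stride, so each final 16x16 patch is transformed independently of its neighbours and masking a patch before or after the stem is equivalent, which is what makes it compatible with BERT-like masked-patch pretraining. The widths, normalization, and number of stages below are assumptions for illustration rather than the paper's exact hMLP design.

```python
import torch
import torch.nn as nn

class HMLPStem(nn.Module):
    """Hierarchical MLP-style stem: non-overlapping stages (kernel == stride)
    of 4x4, 2x2, 2x2 that together cover one 16x16 patch, so no information
    leaks between patches. Channel widths and BatchNorm are assumptions."""

    def __init__(self, in_chans: int = 3, embed_dim: int = 768):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(in_chans, embed_dim // 4, kernel_size=4, stride=4),
            nn.BatchNorm2d(embed_dim // 4), nn.GELU(),
            nn.Conv2d(embed_dim // 4, embed_dim // 4, kernel_size=2, stride=2),
            nn.BatchNorm2d(embed_dim // 4), nn.GELU(),
            nn.Conv2d(embed_dim // 4, embed_dim, kernel_size=2, stride=2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.stem(x)                      # (B, embed_dim, H/16, W/16)
        return x.flatten(2).transpose(1, 2)   # (B, num_patches, embed_dim)
```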

Benchmarks

| Benchmark | Methodology | Metric |
| --- | --- | --- |
| fine-grained-image-classification-on-stanford | ViT-L (attn finetune) | Accuracy: 93.8% |
| image-classification-on-cifar-10 | ViT-B (attn fine-tune) | Percentage correct: 99.3 |
| image-classification-on-cifar-100 | ViT-L (attn fine-tune) | Percentage correct: 93.0 |
| image-classification-on-flowers-102 | ViT-B (attn finetune) | Accuracy: 98.5 |
| image-classification-on-imagenet | ViT-B (hMLP + BeiT) | Top 1 Accuracy: 83.4% |
| image-classification-on-imagenet | ViT-L@384 (attn finetune) | Top 1 Accuracy: 85.5% |
| image-classification-on-imagenet | ViT-B-18x2 | Top 1 Accuracy: 84.1% |
| image-classification-on-imagenet | ViT-B-36x1 | Top 1 Accuracy: 84.1% |
| image-classification-on-imagenet | ViT-S-24x2 | Top 1 Accuracy: 82.6% |
| image-classification-on-imagenet | ViT-B@384 (attn finetune) | Top 1 Accuracy: 84.3% |
| image-classification-on-imagenet | ViT-S-48x1 | Top 1 Accuracy: 82.3% |
| image-classification-on-imagenet-v2 | ViT-B-36x1 | Top 1 Accuracy: 73.9 |
| image-classification-on-inaturalist-2018 | ViT-L (attn finetune) | Top-1 Accuracy: 75.3% |
