VOLO: Vision Outlooker for Visual Recognition
Li Yuan, Qibin Hou, Zihang Jiang, Jiashi Feng, Shuicheng Yan

Abstract
Visual recognition has been dominated by convolutional neural networks (CNNs) for years. Although the recently prevailing vision transformers (ViTs) have shown the great potential of self-attention-based models in ImageNet classification, their performance is still inferior to that of the latest SOTA CNNs if no extra data are provided. In this work, we try to close the performance gap and demonstrate that attention-based models are indeed able to outperform CNNs. We find that a major factor limiting the performance of ViTs for ImageNet classification is their low efficacy in encoding fine-level features into the token representations. To resolve this, we introduce a novel outlook attention and present a simple and general architecture, termed Vision Outlooker (VOLO). Unlike self-attention, which focuses on global dependency modeling at a coarse level, outlook attention efficiently encodes finer-level features and contexts into tokens, which is shown to be critically beneficial to recognition performance but largely ignored by self-attention. Experiments show that our VOLO achieves 87.1% top-1 accuracy on ImageNet-1K classification, the first model to exceed 87% accuracy on this competitive benchmark without using any extra training data. In addition, the pre-trained VOLO transfers well to downstream tasks such as semantic segmentation. We achieve an 84.3% mIoU score on the Cityscapes validation set and 54.3% on the ADE20K validation set. Code is available at https://github.com/sail-sg/volo.
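To make the outlook attention idea above concrete, the sketch below shows one way such a module could be written in PyTorch: a linear layer predicts, from each token alone, the attention weights for its K x K neighborhood, and those weights aggregate the unfolded local values before folding back to the feature map. This is a simplified, stride-1, odd-kernel sketch based only on the abstract's description; the module name `OutlookAttention`, the head splitting, and the hyper-parameters are illustrative assumptions and do not reproduce the official implementation in the repository.

```python
# Hedged sketch of outlook attention (stride 1, odd kernel), per the abstract's
# description: attention over a local K x K window is generated directly from
# the center token by a linear layer, then used to aggregate unfolded values.
import torch
import torch.nn as nn
import torch.nn.functional as F

class OutlookAttention(nn.Module):
    def __init__(self, dim, kernel_size=3, num_heads=1):
        super().__init__()
        assert dim % num_heads == 0 and kernel_size % 2 == 1
        self.k = kernel_size
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        self.v = nn.Linear(dim, dim)                                # value projection
        self.attn = nn.Linear(dim, num_heads * kernel_size ** 4)    # per-location window weights
        self.proj = nn.Linear(dim, dim)
        self.unfold = nn.Unfold(kernel_size, padding=kernel_size // 2)

    def forward(self, x):                                 # x: (B, H, W, C)
        B, H, W, C = x.shape
        k, h = self.k, self.num_heads

        # Values, unfolded into K*K local windows around every position.
        v = self.v(x).permute(0, 3, 1, 2)                 # (B, C, H, W)
        v = self.unfold(v)                                # (B, C*k*k, H*W)
        v = v.reshape(B, h, self.head_dim, k * k, H * W)  # (B, h, d, k*k, N)

        # Attention weights predicted directly from each center token.
        a = self.attn(x).reshape(B, H * W, h, k * k, k * k)
        a = a.permute(0, 2, 1, 3, 4) * self.scale         # (B, h, N, k*k, k*k)
        a = a.softmax(dim=-1)

        # Weighted aggregation of the window values, then fold back to (H, W).
        out = torch.einsum('bhnij,bhdjn->bhdin', a, v)    # (B, h, d, k*k, N)
        out = out.reshape(B, C * k * k, H * W)
        out = F.fold(out, output_size=(H, W), kernel_size=k, padding=k // 2)
        return self.proj(out.permute(0, 2, 3, 1))         # back to (B, H, W, C)
```

As a shape check, `OutlookAttention(192, kernel_size=3, num_heads=6)` maps a `torch.randn(2, 28, 28, 192)` tensor to the same shape; in VOLO, such Outlooker blocks operate on fine-grained tokens before the usual coarse-level self-attention stages.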
Code Repositories
https://github.com/sail-sg/volo
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| domain-generalization-on-vizwiz | VOLO-D5 | Accuracy - All Images: 57.2; Accuracy - Clean Images: 59.7; Accuracy - Corrupted Images: 51.8 |
| image-classification-on-imagenet | VOLO-D5 | GFLOPs: 412; Number of params: 296M; Top 1 Accuracy: 87.1% |
| image-classification-on-imagenet | VOLO-D2 | Number of params: 59M; Top 1 Accuracy: 86.0% |
| image-classification-on-imagenet | VOLO-D1 | Number of params: 27M; Top 1 Accuracy: 85.2% |
| image-classification-on-imagenet | VOLO-D3 | GFLOPs: 67.9; Number of params: 86M; Top 1 Accuracy: 86.3% |
| image-classification-on-imagenet | VOLO-D4 | GFLOPs: 197; Number of params: 193M; Top 1 Accuracy: 86.8% |
| image-classification-on-imagenet-real | VOLO-D5 | Accuracy: 90.6% |
| image-classification-on-imagenet-real | VOLO-D4 | Accuracy: 90.5% |
| image-classification-on-imagenet-v2 | VOLO-D4 | Top 1 Accuracy: 77.8% |
| image-classification-on-imagenet-v2 | VOLO-D5 | Top 1 Accuracy: 78% |
| image-classification-on-vizwiz-classification | VOLO-D5 | Accuracy: 57.2 |
| semantic-segmentation-on-ade20k | VOLO-D5 | Validation mIoU: 54.3 |
| semantic-segmentation-on-cityscapes-val | VOLO-D4 (MS, ImageNet1k pretrain) | mIoU: 84.3 |
| semantic-segmentation-on-graz-02 | VOLO-D5 | Pixel Accuracy: 85 |