Abstract
Large-scale NLP models have been shown to significantly improve performance on language tasks with no signs of saturation. They also demonstrate amazing few-shot capabilities similar to those of human beings. This paper aims to explore large-scale models in computer vision. We tackle three major issues in training and applying large vision models: training instability, resolution gaps between pre-training and fine-tuning, and hunger for labelled data. Three main techniques are proposed: 1) a residual-post-norm method combined with cosine attention to improve training stability; 2) a log-spaced continuous position bias method to effectively transfer models pre-trained on low-resolution images to downstream tasks with high-resolution inputs; 3) a self-supervised pre-training method, SimMIM, to reduce the need for vast amounts of labelled images. Through these techniques, this paper successfully trained a 3 billion-parameter Swin Transformer V2 model, the largest dense vision model to date, making it capable of training with images of up to 1,536$\times$1,536 resolution. It set new performance records on 4 representative vision tasks: ImageNet-V2 image classification, COCO object detection, ADE20K semantic segmentation, and Kinetics-400 video action classification. Note also that our training is much more efficient than that of Google's billion-level visual models, consuming 40 times less labelled data and 40 times less training time. Code is available at \url{https://github.com/microsoft/Swin-Transformer}.
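To make the first technique more concrete, below is a minimal PyTorch sketch of scaled cosine attention: attention logits are cosine similarities between queries and keys, divided by a learnable per-head temperature, before the relative position bias is added. The module, parameter names, and the clamping constant are illustrative simplifications, not the official repository implementation.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaledCosineAttention(nn.Module):
    """Minimal sketch of scaled cosine attention (Swin Transformer V2 style).
    Names and details are illustrative, not the repository API."""

    def __init__(self, dim, num_heads):
        super().__init__()
        self.num_heads = num_heads
        self.qkv = nn.Linear(dim, dim * 3, bias=True)
        self.proj = nn.Linear(dim, dim)
        # learnable per-head temperature tau, stored in log space and clamped
        self.logit_scale = nn.Parameter(torch.log(10 * torch.ones(num_heads, 1, 1)))

    def forward(self, x, rel_pos_bias=None):
        # x: (batch, tokens, dim)
        B, N, C = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)  # each: (B, heads, N, head_dim)

        # cosine similarity between queries and keys replaces the dot product
        attn = F.normalize(q, dim=-1) @ F.normalize(k, dim=-1).transpose(-2, -1)
        # scale by the learnable temperature (upper-bounded here at 100, an assumption)
        scale = torch.clamp(self.logit_scale, max=math.log(100.0)).exp()
        attn = attn * scale

        if rel_pos_bias is not None:
            # e.g. a bias produced by the log-spaced continuous position bias MLP
            attn = attn + rel_pos_bias

        attn = attn.softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)
```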
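The second technique can be sketched similarly: relative window coordinates are mapped into log space and fed through a small meta-network that predicts a per-head bias, so a model pre-trained with a small window can be transferred to a larger window at higher resolution. The class name, hidden size, and layer layout below are assumptions for illustration.

```python
import math
import torch
import torch.nn as nn

class LogSpacedCPB(nn.Module):
    """Minimal sketch of a log-spaced continuous position bias table.
    A 2-layer MLP maps log-scaled relative (dy, dx) offsets to per-head biases."""

    def __init__(self, window_size, num_heads, hidden_dim=512):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2, hidden_dim, bias=True),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, num_heads, bias=False),
        )
        # all relative offsets (dy, dx) within the window
        h, w = window_size
        dy = torch.arange(-(h - 1), h, dtype=torch.float32)
        dx = torch.arange(-(w - 1), w, dtype=torch.float32)
        coords = torch.stack(torch.meshgrid(dy, dx, indexing="ij"), dim=-1)  # (2h-1, 2w-1, 2)
        # normalize to [-1, 1], then apply the log-spaced transform
        # sign(x) * log2(1 + 8|x|) / log2(8)
        coords[..., 0] /= (h - 1)
        coords[..., 1] /= (w - 1)
        coords = torch.sign(coords) * torch.log2(1.0 + coords.abs() * 8) / math.log2(8)
        self.register_buffer("log_coords", coords)

    def forward(self):
        # bias table over all relative offsets: (2h-1, 2w-1, num_heads);
        # in a full attention layer this table would be indexed per query-key pair
        return self.mlp(self.log_coords)
```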
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| action-classification-on-kinetics-400 | Video-SwinV2-G (ImageNet-22k and external 70M pretrain) | Acc@1: 86.8 |
| image-classification-on-imagenet | SwinV2-G | Number of params: 3000M; Top-1 Accuracy: 90.17% |
| image-classification-on-imagenet | SwinV2-B | Number of params: 88M; Top-1 Accuracy: 87.1% |
| image-classification-on-imagenet-v2 | SwinV2-B | Top-1 Accuracy: 78.08% |
| image-classification-on-imagenet-v2 | SwinV2-G | Top-1 Accuracy: 84.00% |
| instance-segmentation-on-coco | SwinV2-G (HTC++) | mask AP: 54.4 |
| instance-segmentation-on-coco-minival | SwinV2-G (HTC++) | mask AP: 53.7 |
| object-detection-on-coco | SwinV2-G (HTC++) | Number of params: 3000M; box mAP: 63.1 |
| object-detection-on-coco-minival | SwinV2-G (HTC++) | box AP: 62.5 |
| semantic-segmentation-on-ade20k | SwinV2-G (HTC++), Liu et al. (2021a) | Validation mIoU: 53.7 |
| semantic-segmentation-on-ade20k | SwinV2-G (UperNet) | Validation mIoU: 59.9 |