
ViTAE: Vision Transformer Advanced by Exploring Intrinsic Inductive Bias

Yufei Xu, Qiming Zhang, Jing Zhang, Dacheng Tao

Abstract

Transformers have shown great potential in various computer vision tasks owing to their strong capability to model long-range dependencies with the self-attention mechanism. Nevertheless, vision transformers treat an image as a 1D sequence of visual tokens and lack an intrinsic inductive bias (IB) for modeling local visual structures and handling scale variance; instead, they require large-scale training data and long training schedules to learn the IB implicitly. In this paper, we propose a novel Vision Transformer Advanced by Exploring intrinsic IB from convolutions, i.e., ViTAE. Technically, ViTAE has several spatial pyramid reduction modules that downsample and embed the input image into tokens with rich multi-scale context, using multiple convolutions with different dilation rates. In this way, it acquires an intrinsic scale-invariance IB and learns robust feature representations for objects at various scales. Moreover, in each transformer layer, ViTAE has a convolution block in parallel to the multi-head self-attention module, whose features are fused and fed into the feed-forward network. Consequently, it has an intrinsic locality IB and can learn local features and global dependencies collaboratively. Experiments on ImageNet as well as downstream tasks demonstrate the superiority of ViTAE over the baseline transformer and concurrent works. Source code and pretrained models will be available on GitHub.
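To make the two inductive-bias mechanisms described in the abstract concrete, here is a minimal PyTorch sketch (the listed repositories are PyTorch-based). The module names PyramidReductionModule and ParallelConvAttentionBlock, the dilation rates, the stride, the depthwise 3x3 convolution used as the local branch, and the additive fusion are illustrative assumptions, not the authors' implementation; consult the official repository below for the actual ViTAE cells.

```python
import torch
import torch.nn as nn

class PyramidReductionModule(nn.Module):
    """Sketch of the spatial pyramid reduction idea: parallel convolutions
    with different dilation rates downsample the image into tokens that
    carry multi-scale context (the scale-invariance IB)."""
    def __init__(self, in_ch=3, embed_dim=64, dilations=(1, 2, 3, 4), stride=4):
        super().__init__()
        # padding == dilation keeps every branch at the same output size
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, embed_dim, kernel_size=3, stride=stride,
                      padding=d, dilation=d)
            for d in dilations
        )
        self.proj = nn.Conv2d(embed_dim * len(dilations), embed_dim, 1)

    def forward(self, x):  # x: (B, C, H, W)
        feats = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.proj(feats)  # (B, embed_dim, H/stride, W/stride)

class ParallelConvAttentionBlock(nn.Module):
    """Sketch of a transformer layer with a convolution branch in parallel
    to multi-head self-attention (the locality IB); both outputs are fused
    before the feed-forward network."""
    def __init__(self, dim=64, heads=4, mlp_ratio=4.0):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # assumed local branch: depthwise 3x3 conv over the token grid
        self.conv = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, int(dim * mlp_ratio)),
            nn.GELU(),
            nn.Linear(int(dim * mlp_ratio), dim),
        )

    def forward(self, x, hw):  # x: (B, N, dim) with N == H * W
        h, w = hw
        b, n, d = x.shape
        y = self.norm1(x)
        attn_out, _ = self.attn(y, y, y)  # global dependencies
        conv_out = self.conv(y.transpose(1, 2).reshape(b, d, h, w))
        conv_out = conv_out.flatten(2).transpose(1, 2)  # local features
        x = x + attn_out + conv_out  # fuse local and global (assumed additive)
        return x + self.mlp(self.norm2(x))

# Usage: embed an image into multi-scale tokens, then run one block.
img = torch.randn(1, 3, 224, 224)
prm = PyramidReductionModule()
tokens = prm(img)                                # (1, 64, 56, 56)
b, d, h, w = tokens.shape
blk = ParallelConvAttentionBlock(dim=d)
out = blk(tokens.flatten(2).transpose(1, 2), (h, w))  # (1, 3136, 64)
```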

Code Repositories

Annbless/ViTAE (official, PyTorch)
ViTAE-Transformer/ViTAE-Transformer (PyTorch)

Benchmarks

Image classification on ImageNet:

Method         | GFLOPs | Params | Top-1 Accuracy
ViTAE-T        | 3.0    | –      | 75.3%
ViTAE-13M      | 6.8    | 13.2M  | 81%
ViTAE-T-Stage  | 4.6    | 4.8M   | 76.8%
ViTAE-6M       | 4.0    | 6.5M   | 77.9%
ViTAE-S-Stage  | 12.0   | 19.2M  | 82.2%
ViTAE-B-Stage  | 27.6   | 48.5M  | 83.6%

Video object segmentation on DAVIS 2016 (ViTAE-T-Stage): J&F: 89.8, Jaccard (Mean): 89.2, F-Score: 90.4

Video object segmentation on DAVIS 2017 (ViTAE-T-Stage): J&F: 82.5, Jaccard (Mean): 79.4, F-Score: 85.5
