Jiawang Bai, Li Yuan, Shu-Tao Xia, Shuicheng Yan, Zhifeng Li, Wei Liu

Abstract
Transformer models have shown promising effectiveness on various vision tasks. However, compared with training Convolutional Neural Network (CNN) models, training Vision Transformer (ViT) models is more difficult and relies on large-scale training sets. To explain this observation, we hypothesize that *ViT models are less effective than CNN models in capturing the high-frequency components of images*, and verify this hypothesis through a frequency analysis. Inspired by this finding, we first examine existing techniques for improving ViT models from the new frequency perspective, and find that the success of some techniques (e.g., RandAugment) can be attributed to better usage of high-frequency components. Then, to compensate for this weakness of ViT models, we propose HAT, which directly augments the high-frequency components of images via adversarial training. We show that HAT can consistently boost the performance of various ViT models (e.g., +1.2% for ViT-B, +0.5% for Swin-B), and in particular lifts the advanced VOLO-D5 model to 87.3% top-1 accuracy using only ImageNet-1K data; the improvement also holds on out-of-distribution data and transfers to downstream tasks. The code is available at: https://github.com/jiawangbai/HAT.
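The abstract only summarizes the approach, so the following is a minimal PyTorch sketch of the two ingredients it names: decomposing an image into low- and high-frequency components with a centered FFT mask (the frequency analysis), and an adversarial step applied only to the high-frequency component (the core idea behind HAT). The cutoff `radius`, the step size `epsilon`, the single-step inner loop, and the assumed `[0, 1]` pixel range are illustrative placeholders rather than the paper's exact recipe; see the linked repository for the authors' implementation.

```python
import torch
import torch.fft


def split_frequencies(images, radius=16):
    """Split images of shape (B, C, H, W) into low- and high-frequency
    components by masking the centered 2-D FFT spectrum at `radius`."""
    _, _, H, W = images.shape
    spectrum = torch.fft.fftshift(torch.fft.fft2(images), dim=(-2, -1))

    # Distance of every frequency bin from the spectrum center.
    ys = torch.arange(H, device=images.device).float() - H / 2
    xs = torch.arange(W, device=images.device).float() - W / 2
    dist = torch.sqrt(ys[:, None] ** 2 + xs[None, :] ** 2)
    low_mask = (dist <= radius).float()

    low = torch.fft.ifft2(
        torch.fft.ifftshift(spectrum * low_mask, dim=(-2, -1))).real
    high = torch.fft.ifft2(
        torch.fft.ifftshift(spectrum * (1.0 - low_mask), dim=(-2, -1))).real
    return low, high


def hat_loss(model, images, labels, criterion, radius=16, epsilon=8 / 255):
    """One illustrative HAT-style training loss: take an FGSM-like step on
    the high-frequency component only, then compute the task loss on the
    recombined image. Hyperparameters here are placeholder assumptions."""
    low, high = split_frequencies(images, radius)

    # Inner maximization: perturb the high-frequency part to increase
    # the loss (a single gradient-sign step, for brevity).
    delta = torch.zeros_like(high, requires_grad=True)
    adv_loss = criterion(model(low + high + delta), labels)
    (grad,) = torch.autograd.grad(adv_loss, delta)
    adv_high = high + epsilon * grad.sign()

    # Outer minimization: a standard training loss on the augmented image
    # (clamping assumes inputs normalized to [0, 1]).
    adv_images = (low + adv_high).clamp(0.0, 1.0).detach()
    return criterion(model(adv_images), labels)
```

In a training loop, one would call `loss = hat_loss(model, x, y, torch.nn.CrossEntropyLoss())` followed by `loss.backward()` and an optimizer step, so that only the model update uses the adversarially augmented high-frequency components.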
Code Repositories
https://github.com/jiawangbai/HAT
Benchmarks
| Benchmark | Model | Metric | Result |
|---|---|---|---|
| Domain Generalization on ImageNet-C | VOLO-D5+HAT (296M params) | mean Corruption Error (mCE) | 38.4 |
| Domain Generalization on ImageNet-R | VOLO-D5+HAT | Top-1 Error Rate | 40.3 |
| Domain Generalization on Stylized-ImageNet | VOLO-D5+HAT | Top-1 Accuracy | 25.9 |
| Image Classification on ImageNet | VOLO-D5+HAT (295.5M params, 412 GFLOPs) | Top-1 Accuracy | 87.3% |