Jiang-Xin Shi; Tong Wei; Zhi Zhou; Jie-Jing Shao; Xin-Yan Han; Yu-Feng Li

Abstract
The fine-tuning paradigm for addressing long-tail learning tasks has attracted significant interest since the emergence of foundation models. Nonetheless, how fine-tuning impacts performance in long-tail learning has not been explicitly quantified. In this paper, we show that heavy fine-tuning can even lead to non-negligible performance deterioration on tail classes, whereas lightweight fine-tuning is more effective. We attribute this to the inconsistent class conditional distributions induced by heavy fine-tuning. Building on this observation, we develop LIFT, a low-complexity and accurate long-tail learning algorithm that enables fast prediction and compact models through adaptive lightweight fine-tuning. Experiments verify that LIFT substantially reduces both training time and the number of learned parameters while achieving more accurate predictions than state-of-the-art approaches. The implementation code is available at https://github.com/shijxcs/LIFT.
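
As a rough illustration of the lightweight-fine-tuning idea described above, the sketch below freezes a pre-trained backbone and trains only a small classification head, paired with a logit-adjustment loss that is commonly used in long-tail learning. The `backbone`, `feat_dim`, `class_counts`, and the loss choice are illustrative assumptions, not the exact components of LIFT.

```python
# Minimal sketch of lightweight fine-tuning for long-tail classification.
# Assumptions (not from the paper): a generic pre-trained feature extractor
# `backbone` that maps images to D-dimensional features; LIFT's actual
# adapter design, initialization, loss, and schedule may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LightweightClassifier(nn.Module):
    """Frozen pre-trained backbone + small trainable classification head."""
    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():   # freeze all backbone weights
            p.requires_grad = False
        self.head = nn.Linear(feat_dim, num_classes)  # only this part is trained

    def forward(self, x):
        with torch.no_grad():                  # no gradients through the backbone
            feats = self.backbone(x)
        return self.head(feats)

def logit_adjusted_loss(logits, targets, class_counts, tau: float = 1.0):
    """Cross-entropy with logit adjustment based on class priors, a standard
    long-tail correction (illustrative; not necessarily LIFT's exact objective)."""
    prior = class_counts.float() / class_counts.sum()
    adjusted = logits + tau * torch.log(prior + 1e-12)
    return F.cross_entropy(adjusted, targets)
```

Because only the head is updated, the number of learned parameters and the training time stay small relative to fine-tuning the entire backbone.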
Code Repositories
- https://github.com/shijxcs/LIFT
Benchmarks
| Benchmark | Method | Metric | Value (%) |
|---|---|---|---|
| CIFAR-100-LT (imbalance ratio 10) | LIFT (ViT-B/16, ImageNet-21K pre-training) | Error Rate | 8.7 |
| CIFAR-100-LT (imbalance ratio 10) | LIFT (ViT-B/16, CLIP) | Error Rate | 15.1 |
| CIFAR-100-LT (imbalance ratio 50) | LIFT (ViT-B/16, ImageNet-21K pre-training) | Error Rate | 9.8 |
| CIFAR-100-LT (imbalance ratio 50) | LIFT (ViT-B/16, CLIP) | Error Rate | 16.9 |
| CIFAR-100-LT (imbalance ratio 100) | LIFT (ViT-B/16, ImageNet-21K pre-training) | Error Rate | 10.9 |
| CIFAR-100-LT (imbalance ratio 100) | LIFT (ViT-B/16, CLIP) | Error Rate | 18.3 |
| ImageNet-LT | LIFT (ViT-B/16) | Top-1 Accuracy | 78.3 |
| ImageNet-LT | LIFT (ViT-L/14) | Top-1 Accuracy | 82.9 |
| iNaturalist 2018 | LIFT (ViT-B/16) | Top-1 Accuracy | 80.4 |
| iNaturalist 2018 | LIFT (ViT-L/14) | Top-1 Accuracy | 85.2 |
| iNaturalist 2018 | LIFT (ViT-L/14@336px) | Top-1 Accuracy | 87.4 |
| Places-LT | LIFT (ViT-B/16) | Top-1 Accuracy | 52.2 |
| Places-LT | LIFT (ViT-L/14) | Top-1 Accuracy | 53.7 |