Guneet S. Dhillon, Pratik Chaudhari, Avinash Ravichandran, Stefano Soatto

Abstract
Fine-tuning a deep network trained with the standard cross-entropy loss is a strong baseline for few-shot learning. When the fine-tuning is transductive, this baseline outperforms the current state of the art on standard datasets such as Mini-ImageNet, Tiered-ImageNet, CIFAR-FS, and FC-100, using the same hyper-parameters throughout. The simplicity of this approach enables us to demonstrate the first few-shot learning results on the ImageNet-21k dataset. We find that using a large number of meta-training classes results in high few-shot accuracies even for a large number of few-shot classes. We do not advocate our approach as the solution for few-shot learning, but simply use the results to highlight limitations of current benchmarks and few-shot protocols. We perform extensive studies on benchmark datasets to propose a metric that quantifies the "hardness" of a few-shot episode. This metric can be used to report the performance of few-shot algorithms in a more systematic way.
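The transductive variant referred to in the abstract, and listed as "Entropy Minimization" in the benchmark table below, fine-tunes on a single episode by combining cross-entropy on the labeled support set with Shannon-entropy minimization on the unlabeled query set. The following is a minimal sketch of that idea in PyTorch; the function name, tensor names, and hyper-parameter values (`steps`, `lr`) are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn.functional as F

def transductive_finetune(model, support_x, support_y, query_x,
                          steps=25, lr=5e-5):
    """Fine-tune a pretrained classifier on one few-shot episode.

    Sketch of transductive fine-tuning: cross-entropy on the labeled
    support set plus entropy minimization on the unlabeled query set.
    All names and hyper-parameters here are illustrative assumptions.
    """
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        # Supervised term: standard cross-entropy on support examples.
        ce = F.cross_entropy(model(support_x), support_y)
        # Transductive term: Shannon entropy of the model's predictions
        # on the query examples, pushing them toward confident labels.
        probs = F.softmax(model(query_x), dim=1)
        entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=1).mean()
        (ce + entropy).backward()
        optimizer.step()
    return model
```

Minimizing prediction entropy on the queries encourages the decision boundary to avoid dense regions of the unlabeled test data, which is what makes the procedure transductive rather than purely inductive.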
Benchmarks
| Benchmark | Methodology | Metric | Value |
|---|---|---|---|
| few-shot-image-classification-on-dirichlet | Entropy Minimization | 1:1 Accuracy | 58.5 |
| few-shot-image-classification-on-dirichlet-1 | Entropy Minimization | 1:1 Accuracy | 74.8 |
| few-shot-image-classification-on-dirichlet-2 | Entropy Minimization | 1:1 Accuracy | 61.2 |
| few-shot-image-classification-on-dirichlet-3 | Entropy Minimization | 1:1 Accuracy | 75.5 |
| few-shot-image-classification-on-dirichlet-4 | Entropy Minimization | 1:1 Accuracy | 67.5 |
| few-shot-image-classification-on-dirichlet-5 | Entropy Minimization | 1:1 Accuracy | 82.9 |