Xinzhe Li; Qianru Sun; Yaoyao Liu; Shibao Zheng; Qin Zhou; Tat-Seng Chua; Bernt Schiele

Abstract
Few-shot classification (FSC) is challenging due to the scarcity of labeled training data (e.g., only one labeled data point per class). Meta-learning has been shown to achieve promising results by learning to initialize a classification model for FSC. In this paper, we propose a novel semi-supervised meta-learning method called learning to self-train (LST) that leverages unlabeled data and specifically meta-learns how to cherry-pick and label such unlabeled data to further improve performance. To this end, we train the LST model through a large number of semi-supervised few-shot tasks. On each task, we train a few-shot model to predict pseudo labels for unlabeled data, and then iterate the self-training steps on labeled and pseudo-labeled data, with each step followed by fine-tuning. We additionally learn a soft weighting network (SWN) to optimize the self-training weights of pseudo labels so that better ones can contribute more to gradient descent optimization. We evaluate our LST method on two ImageNet benchmarks for semi-supervised few-shot classification and achieve large improvements over the state-of-the-art method. Code is at https://github.com/xinzheli1217/learning-to-self-train.
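To make the inner loop concrete, below is a minimal PyTorch sketch of one self-training task: pseudo-labeling the unlabeled pool, cherry-picking the most confident samples, weighting them with an SWN, and fine-tuning on the mix. The toy linear head, the SWN architecture, the top-Z selection rule, and all data and hyperparameters here are illustrative assumptions, not the paper's exact setup; the authors' implementation is in the linked repository.

```python
# Sketch of one LST inner loop (assumed shapes and hyperparameters).
import torch
import torch.nn as nn
import torch.nn.functional as F

N_WAY, FEAT_DIM, TOP_Z = 5, 64, 10  # assumed task size, feature dim, picks per step

classifier = nn.Linear(FEAT_DIM, N_WAY)       # toy stand-in for the few-shot model
swn = nn.Sequential(nn.Linear(FEAT_DIM, 32),  # soft weighting network (assumed arch;
                    nn.ReLU(),                # in the paper it is meta-learned across
                    nn.Linear(32, 1))         # tasks in the outer loop, not shown here)
opt = torch.optim.SGD(classifier.parameters(), lr=1e-2)

# Toy task data: a labeled support set and an unlabeled pool (random stand-ins).
x_sup, y_sup = torch.randn(25, FEAT_DIM), torch.arange(N_WAY).repeat(5)
x_unl = torch.randn(100, FEAT_DIM)

for step in range(3):  # a few self-training iterations
    # 1) Predict pseudo labels for the unlabeled pool with the current model.
    with torch.no_grad():
        probs = F.softmax(classifier(x_unl), dim=1)
        conf, pseudo = probs.max(dim=1)

    # 2) Cherry-pick: keep only the most confident pseudo-labeled samples.
    keep = conf.topk(TOP_Z).indices
    x_pse, y_pse = x_unl[keep], pseudo[keep]

    # 3) Soft weighting: score each picked sample so that better pseudo
    #    labels contribute more to the gradient update.
    w = torch.sigmoid(swn(x_pse)).squeeze(1)

    # 4) Self-train on labeled plus weighted pseudo-labeled data; the
    #    labeled-only term plays the role of the fine-tuning step.
    loss_pse = (w * F.cross_entropy(classifier(x_pse), y_pse, reduction="none")).mean()
    loss_sup = F.cross_entropy(classifier(x_sup), y_sup)
    opt.zero_grad()
    (loss_sup + loss_pse).backward()
    opt.step()
```

In the full method this whole loop is itself the inner task of a meta-learning procedure: the model initialization and the SWN parameters are optimized over many such semi-supervised few-shot tasks.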
Benchmarks
| Benchmark | Method | Accuracy (%) |
|---|---|---|
| miniImageNet, 5-way 1-shot | LST | 70.1 |
| miniImageNet, 5-way 5-shot | LST | 78.7 |
| tieredImageNet, 5-way 1-shot | LST | 77.7 |
| tieredImageNet, 5-way 5-shot | LST | 85.2 |