Kuilin Chen; Chi-Guhn Lee

Abstract
Learning a new task from a handful of examples remains an open challenge in machine learning. Despite recent progress in few-shot learning, most methods rely on supervised pretraining or meta-learning on labeled meta-training data and cannot be applied when the pretraining data is unlabeled. In this study, we present an unsupervised few-shot learning method via deep Laplacian eigenmaps. Our method learns representations from unlabeled data by grouping similar samples together and can be intuitively interpreted as random walks on augmented training data. We analytically show how deep Laplacian eigenmaps avoid collapsed representations in unsupervised learning without explicit comparison between positive and negative samples. The proposed method significantly closes the performance gap between supervised and unsupervised few-shot learning. Our method also achieves performance comparable to current state-of-the-art self-supervised learning methods under the linear evaluation protocol.
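The paper's deep variant is not detailed in this abstract, but the classical Laplacian-eigenmaps idea it builds on can be sketched: construct a neighborhood affinity graph over the samples and embed each sample using the low-frequency eigenvectors of the graph Laplacian, so that nearby samples in the graph stay nearby in the embedding. Below is a minimal NumPy sketch of that classical algorithm (a Gaussian-kernel kNN graph and a generalized eigenproblem); the function name, kernel choice, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def laplacian_eigenmaps(X, n_components=2, n_neighbors=5, sigma=1.0):
    """Classical (shallow) Laplacian eigenmaps.

    X: (n, d) array of samples. Returns an (n, n_components) embedding
    in which graph neighbors are mapped close together.
    NOTE: illustrative sketch, not the paper's deep method.
    """
    n = X.shape[0]
    # Pairwise squared Euclidean distances.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    # Gaussian-kernel kNN affinity matrix W (skip self at index 0).
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(d2[i])[1:n_neighbors + 1]
        W[i, idx] = np.exp(-d2[i, idx] / (2.0 * sigma ** 2))
    W = np.maximum(W, W.T)  # symmetrize the graph
    deg = W.sum(axis=1)
    L = np.diag(deg) - W    # unnormalized graph Laplacian
    # Solve L v = lambda D v via the symmetrically normalized Laplacian.
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    evals, evecs = np.linalg.eigh(D_inv_sqrt @ L @ D_inv_sqrt)
    # Drop the trivial constant eigenvector (eigenvalue ~ 0),
    # then map back to generalized eigenvectors.
    return D_inv_sqrt @ evecs[:, 1:n_components + 1]
```

The trivial all-constant eigenvector corresponds to the collapsed solution that maps every sample to the same point; discarding it (and keeping the next-smallest eigenvectors) is what prevents collapse here, mirroring the abstract's claim that the Laplacian-eigenmaps objective avoids collapsed representations without explicit negative samples.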
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| unsupervised-few-shot-image-classification-on | Deep Laplacian Eigenmaps | Accuracy: 59.47 |
| unsupervised-few-shot-image-classification-on-1 | Deep Laplacian Eigenmaps | Accuracy: 78.79 |