LiDAM: Semi-Supervised Learning with Localized Domain Adaptation and Iterative Matching
Qun Liu Matthew Shreve Raja Bala

Abstract
Although data is abundant, data labeling is expensive. Semi-supervised learning methods combine a few labeled samples with a large corpus of unlabeled data to train models effectively. This paper introduces LiDAM, a semi-supervised learning approach rooted in both domain adaptation and self-paced learning. LiDAM first performs localized domain shifts to extract domain-invariant features, yielding more accurate clusters and pseudo-labels. These pseudo-labels are then aligned with real class labels in a self-paced fashion using a novel iterative matching technique based on majority consistency over high-confidence predictions. Simultaneously, a final classifier is trained to predict ground-truth labels until convergence. LiDAM achieves state-of-the-art performance on the CIFAR-100 dataset, outperforming FixMatch (73.50% vs. 71.82%) when using 2500 labels.
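The abstract's iterative matching step, aligning cluster-based pseudo-labels with real class labels by majority consistency over high-confidence predictions, can be sketched as below. This is a minimal illustration, not the paper's implementation; the function name, the confidence threshold, and all variable names are assumptions.

```python
import numpy as np

def match_clusters_to_labels(cluster_ids, pred_labels, pred_conf, threshold=0.95):
    """Map each cluster ID to the class label most frequently predicted
    with high confidence for samples in that cluster (majority consistency).

    Hypothetical sketch: `threshold` and the majority-vote rule are
    illustrative assumptions, not taken from the LiDAM paper.
    """
    mapping = {}
    for c in np.unique(cluster_ids):
        # Restrict to high-confidence predictions within this cluster.
        mask = (cluster_ids == c) & (pred_conf >= threshold)
        if mask.any():
            labels, counts = np.unique(pred_labels[mask], return_counts=True)
            mapping[int(c)] = int(labels[np.argmax(counts)])  # majority vote
    return mapping

# Toy example: the classifier confidently predicts class 1 for cluster 0
# and class 2 for cluster 1; one low-confidence prediction is ignored.
cluster_ids = np.array([0, 0, 0, 1, 1, 1])
pred_labels = np.array([1, 1, 0, 2, 2, 2])
pred_conf   = np.array([0.99, 0.98, 0.50, 0.97, 0.99, 0.96])
print(match_clusters_to_labels(cluster_ids, pred_labels, pred_conf))  # {0: 1, 1: 2}
```

In a self-paced loop, this mapping would be recomputed each round as the classifier improves, so clusters whose predictions are not yet confident simply contribute no pseudo-labels until later iterations.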
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| semi-supervised-image-classification-on-cifar | LiDAM | Percentage error: 7.48 |
| semi-supervised-image-classification-on-cifar-11 | LiDAM | Accuracy: 89.04 |
| semi-supervised-image-classification-on-cifar-2 | LiDAM | Percentage error: 23.22 |
| semi-supervised-image-classification-on-cifar-24 | LiDAM | Accuracy (%): 75.14 |
| semi-supervised-image-classification-on-cifar-4 | LiDAM | Percentage correct: 75.14 |
| semi-supervised-image-classification-on-cifar-6 | LiDAM | Percentage error: 19.17 |
| semi-supervised-image-classification-on-cifar-9 | LiDAM | Percentage error: 26.50 |