| Semi-ViT (ViT-Huge) | 84.3% | 96.6% | Semi-supervised Vision Transformers at Scale | |
| SimCLRv2 self-distilled (ResNet-152 3×, SK) | 80.9% | 95.5% | Big Self-Supervised Models are Strong Semi-Supervised Learners | |
| SimCLRv2 (ResNet-152 3×, SK) | 80.1% | 95.0% | Big Self-Supervised Models are Strong Semi-Supervised Learners | |
| SimCLRv2 distilled (ResNet-50 2×, SK) | 80.2% | 95.0% | Big Self-Supervised Models are Strong Semi-Supervised Learners | |
| SimCLRv2 distilled (ResNet-50) | 77.5% | 93.4% | Big Self-Supervised Models are Strong Semi-Supervised Learners | |
| Meta Pseudo Labels (ResNet-50) | 73.89% | 91.38% | Meta Pseudo Labels | |
| S4L-MOAM (ResNet-50 4×) | 73.21% | 91.23% | S4L: Self-Supervised Semi-Supervised Learning | |
| Rotation + VAT + Entropy Minimization | - | 91.23% | S4L: Self-Supervised Semi-Supervised Learning | |
| WCL (ResNet-50) | 72.0% | 91.2% | Weakly Supervised Contrastive Learning | |