IPCL: Iterative Pseudo-Supervised Contrastive Learning to Improve Self-Supervised Feature Representation
Sonal Kumar, Anirudh Phukan, Arijit Sur
Abstract
Self-supervised learning with a batch contrastive approach has become a powerful tool for representation learning in computer vision. The performance of downstream tasks is proportional to the quality of the visual features learned during self-supervised pre-training. Existing batch contrastive approaches depend heavily on data augmentation to learn latent information from unlabelled datasets. We argue that introducing the dataset's intra-class variation into a batch contrastive approach further improves the quality of the visual representation. In this paper, we propose a novel self-supervised learning approach named Iterative Pseudo-supervised Contrastive Learning (IPCL), which uses a balanced combination of image augmentations and pseudo-class information to improve the visual representation iteratively. Experimental results show that our proposed method surpasses the baseline self-supervised method with a batch contrastive approach, improving visual representation quality across multiple datasets and leading to better performance on the downstream unsupervised image classification task.
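To make the idea concrete, below is a minimal PyTorch sketch of how pseudo-class information can enter a batch contrastive loss: a SupCon-style objective in which an anchor's positives are all samples sharing its pseudo-label (e.g., obtained by clustering the current encoder features), so augmentation-based and pseudo-class positives are handled by one mask. This is an illustrative assumption about the general technique, not the authors' released implementation; the names `pseudo_contrastive_loss`, `pseudo_labels`, and `temperature` are hypothetical.

```python
# Illustrative sketch only: the paper does not publish this exact code.
import torch
import torch.nn.functional as F

def pseudo_contrastive_loss(z1, z2, pseudo_labels, temperature=0.5):
    """SupCon-style loss over two augmented views of a batch, where the
    positives for an anchor are all samples sharing its pseudo-label
    (e.g., assigned by clustering the current encoder features)."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D) embeddings
    labels = torch.cat([pseudo_labels, pseudo_labels])   # (2N,) pseudo-classes

    sim = z @ z.T / temperature                          # scaled cosine sims
    n = sim.size(0)
    diag = torch.eye(n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(diag, float("-inf"))                # exclude self-pairs

    # Positives: same pseudo-label. The other augmented view of each image
    # shares its pseudo-label by construction, so augmentation positives and
    # pseudo-class positives are folded into a single mask.
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~diag

    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos.sum(dim=1).clamp(min=1)
    per_anchor = -log_prob.masked_fill(~pos, 0.0).sum(dim=1) / pos_counts
    # Average over anchors that have at least one positive.
    return per_anchor[pos.sum(dim=1) > 0].mean()
```

In an IPCL-style training loop, one would plausibly alternate between (i) assigning pseudo-labels from the current encoder features and (ii) minimizing this loss over augmented batches, so that the representation and the pseudo-classes improve together across iterations.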
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| contrastive-learning-on-cifar-10 | IPCL (ResNet18) | Accuracy (Top-1): 84.77 |
| contrastive-learning-on-stl-10 | IPCL (ResNet18) | Accuracy (Top-1): 85.55 |
| unsupervised-image-classification-on-cifar-10 | IPCL (ResNet18) | Accuracy: 88.81 |
| unsupervised-image-classification-on-stl-10 | IPCL (ResNet18) | Accuracy: 80.91 |