Self-Supervised Learning of Pretext-Invariant Representations

Ishan Misra, Laurens van der Maaten

Abstract

The goal of self-supervised learning from images is to construct image representations that are semantically meaningful via pretext tasks that do not require semantic annotations for a large training set of images. Many pretext tasks lead to representations that are covariant with image transformations. We argue that, instead, semantic representations ought to be invariant under such transformations. Specifically, we develop Pretext-Invariant Representation Learning (PIRL, pronounced as "pearl") that learns invariant representations based on pretext tasks. We use PIRL with a commonly used pretext task that involves solving jigsaw puzzles. We find that PIRL substantially improves the semantic quality of the learned image representations. Our approach sets a new state-of-the-art in self-supervised learning from images on several popular benchmarks for self-supervised learning. Despite being unsupervised, PIRL outperforms supervised pre-training in learning image representations for object detection. Altogether, our results demonstrate the potential of self-supervised learning of image representations with good invariance properties.
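The invariance argument above lends itself to a short sketch. Below is a minimal, hypothetical PyTorch rendition of a PIRL-style contrastive objective: the embedding of a jigsaw-transformed image is pulled toward the embedding of its untransformed source and pushed away from embeddings of other images. The paper draws those negatives from a memory bank; here they are simply passed in, and all names (`pirl_contrastive_loss`, the tensor arguments) are illustrative rather than taken from the authors' code.

```python
import torch
import torch.nn.functional as F

def pirl_contrastive_loss(v_original, v_transformed, negatives, temperature=0.07):
    """Simplified PIRL-style invariance loss (illustrative, not the paper's exact NCE).

    v_original:    (B, D) embeddings of the untransformed images
    v_transformed: (B, D) embeddings of their jigsaw-transformed versions
    negatives:     (N, D) embeddings of other images (the paper samples
                   these from a memory bank of past embeddings)
    """
    v_original = F.normalize(v_original, dim=1)
    v_transformed = F.normalize(v_transformed, dim=1)
    negatives = F.normalize(negatives, dim=1)

    # Positive logit: similarity between each image and its own transform.
    pos = torch.sum(v_original * v_transformed, dim=1, keepdim=True)  # (B, 1)
    # Negative logits: similarity between each transform and other images.
    neg = v_transformed @ negatives.t()                               # (B, N)

    logits = torch.cat([pos, neg], dim=1) / temperature
    # The correct "class" for every row is the positive at index 0.
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)
```

Minimizing this loss encourages f(I) ≈ f(t(I)) for a pretext transform t, i.e., a representation that is invariant to the transformation rather than covariant with it, which is exactly the property the abstract argues for.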

Code Repositories

aniket03/pirl_pytorch (PyTorch)
danielgordon10/vince (PyTorch)
facebookresearch/vissl (PyTorch)
kawshik8/DL-project (PyTorch)

Benchmarks

Benchmark                                  | Methodology      | Metrics
contrastive-learning-on-imagenet-1k        | ResNet50         | ImageNet Top-1 Accuracy: 63.6
self-supervised-image-classification-on    | PIRL             | Number of Params: 24M; Top-1 Accuracy: 63.6%
semi-supervised-image-classification-on-2  | PIRL (ResNet-50) | Top-5 Accuracy: 83.8%
