Contrastive Learning Inverts the Data Generating Process

Roland S. Zimmermann, Yash Sharma, Steffen Schneider, Matthias Bethge, Wieland Brendel

Abstract

Contrastive learning has recently seen tremendous success in self-supervised learning. So far, however, it is largely unclear why the learned representations generalize so effectively to a large variety of downstream tasks. We here prove that feedforward models trained with objectives belonging to the commonly used InfoNCE family learn to implicitly invert the underlying generative model of the observed data. While the proofs make certain statistical assumptions about the generative model, we observe empirically that our findings hold even if these assumptions are severely violated. Our theory highlights a fundamental connection between contrastive learning, generative modeling, and nonlinear independent component analysis, thereby furthering our understanding of the learned representations as well as providing a theoretical foundation to derive more effective contrastive losses.
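
The InfoNCE family referenced in the abstract scores an anchor against one positive and many negatives via a softmax over similarities. Below is a minimal PyTorch sketch of such a loss; the unit-sphere normalization, the temperature `tau`, and the use of in-batch negatives are common choices assumed here, not necessarily the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def info_nce(z_anchor: torch.Tensor, z_pos: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Minimal InfoNCE sketch: row i of z_pos is the positive for row i of
    z_anchor; all other rows in the batch serve as negatives.

    z_anchor, z_pos: (N, d) encoder outputs, e.g. z_anchor = f(x), z_pos = f(x_aug).
    """
    z_anchor = F.normalize(z_anchor, dim=1)            # map features onto the unit hypersphere
    z_pos = F.normalize(z_pos, dim=1)
    logits = (z_anchor @ z_pos.T) / tau                # (N, N) cosine-similarity logits
    labels = torch.arange(z_anchor.size(0), device=z_anchor.device)
    return F.cross_entropy(logits, labels)             # -log softmax probability of each positive
```

Intuitively, minimizing this loss pulls positive pairs (observations generated from nearby latents) together while pushing negatives apart, which is the mechanism the paper's theory connects to inverting the generative model.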

Code Repositories

brendel-group/cl-ica (official, PyTorch)

Benchmarks

| Benchmark | Methodology | Metrics |
|---|---|---|
| disentanglement-on-3dident | InfoNCE (Normal, Box) | MCC: 98.31 |
| disentanglement-on-kitti-masks | InfoNCE (Laplace, Box) | MCC: 80.9 |
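
MCC is the mean correlation coefficient between recovered and ground-truth latent factors (reported above in percent), the standard identifiability metric in nonlinear ICA. Below is a minimal sketch of one common way to compute it, assuming ground-truth latents are available; the function name and the assignment-based matching step are illustrative, not taken from the official code.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def mcc(z_true: np.ndarray, z_est: np.ndarray) -> float:
    """Mean absolute Pearson correlation after optimally matching each
    estimated latent dimension to one ground-truth dimension.

    z_true, z_est: (N, d) arrays of ground-truth and recovered latents.
    """
    d = z_true.shape[1]
    corr = np.corrcoef(z_true.T, z_est.T)[:d, d:]      # (d, d) cross-correlation block
    rows, cols = linear_sum_assignment(-np.abs(corr))  # one-to-one matching maximizing |corr|
    return float(np.abs(corr[rows, cols]).mean())      # multiply by 100 for the percentages above
```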
