Regularizing with Pseudo-Negatives for Continual Self-Supervised Learning

Sungmin Cha, Kyunghyun Cho, Taesup Moon

Abstract

We introduce a novel Pseudo-Negative Regularization (PNR) framework for effective continual self-supervised learning (CSSL). PNR leverages pseudo-negatives, obtained through model-based augmentation, so that newly learned representations do not contradict what was learned in the past. Specifically, for InfoNCE-based contrastive learning methods, we define symmetric pseudo-negatives obtained from the current and previous models and use them in both the main and regularization loss terms. Furthermore, we extend this idea to non-contrastive learning methods, which do not inherently rely on negatives. For these methods, a pseudo-negative is defined as the previous model's output for a differently augmented version of the anchor sample, and it is applied asymmetrically, in the regularization term only. Extensive experiments demonstrate that our PNR framework achieves state-of-the-art representation learning performance in CSSL by effectively balancing the trade-off between plasticity and stability.
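To make the two uses of pseudo-negatives concrete, below is a minimal PyTorch sketch of how they could plug into an InfoNCE-style objective. This is an illustrative reading of the abstract, not the official implementation: the function names (info_nce, pnr_contrastive_loss, pnr_noncontrastive_reg), the frozen prev_encoder, and the weighting lam are all assumptions; the authors' actual losses live in the csm9493/PNR repository.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, negatives, temperature=0.1):
    """Standard InfoNCE: attract anchor to positive, repel it from negatives."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    pos_logit = (anchor * positive).sum(dim=-1, keepdim=True) / temperature  # (B, 1)
    neg_logits = anchor @ negatives.t() / temperature                        # (B, K)
    logits = torch.cat([pos_logit, neg_logits], dim=1)
    # The positive sits at index 0 of each row of logits.
    labels = torch.zeros(anchor.size(0), dtype=torch.long, device=anchor.device)
    return F.cross_entropy(logits, labels)

def pnr_contrastive_loss(curr_encoder, prev_encoder, x1, x2, negatives, lam=1.0):
    """Hypothetical 'symmetric' PNR for contrastive CSSL: the frozen previous
    model supplies pseudo-negatives for the main term, and the current model
    supplies pseudo-negatives for the regularization term."""
    z1, z2 = curr_encoder(x1), curr_encoder(x2)
    with torch.no_grad():  # the model from the previous task stays frozen
        p1, p2 = prev_encoder(x1), prev_encoder(x2)
    # Main term: the usual negatives plus the previous model's other-view output,
    # so new representations are also pushed away from stale ones.
    main = info_nce(z1, z2, torch.cat([negatives, p2], dim=0))
    # Regularization term: align z1 with the previous model's same-view output,
    # treating the current model's other-view output as a pseudo-negative.
    reg = info_nce(z1, p1, torch.cat([negatives, z2.detach()], dim=0))
    return main + lam * reg

def pnr_noncontrastive_reg(z1, p1, p2):
    """Hypothetical asymmetric variant for non-contrastive methods: the positive
    is the previous model's output for the same view (p1), and the pseudo-negative
    is its output for a differently augmented view of the anchor (p2)."""
    return info_nce(z1, p1, p2)
```

In this reading, the contrastive case is "symmetric" because both models contribute pseudo-negatives (to the main and regularization terms respectively), whereas the non-contrastive case adds a pseudo-negative only to the regularization term, leaving the method's main loss untouched.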

Code Repositories

csm9493/PNR (Official, PyTorch)

Benchmarks

Benchmark                                        Methodology     Metrics
image-classification-on-imagenet-100-class-il   MoCo + CaSSLe   Top 1 Accuracy: 63.49
