HyperAI

Robust Training under Label Noise by Over-parameterization

Sheng Liu, Zhihui Zhu, Qing Qu, Chong You


Abstract

Recently, over-parameterized deep networks, with increasingly more network parameters than training samples, have come to dominate the performance of modern machine learning. However, it is well known that when the training data is corrupted, over-parameterized networks tend to overfit and fail to generalize. In this work, we propose a principled approach for robust training of over-parameterized deep networks in classification tasks where a proportion of the training labels are corrupted. The main idea is simple: label noise is sparse and incoherent with the network learned from clean data, so we model the noise and learn to separate it from the data. Specifically, we model the label noise via an additional sparse over-parameterization term, and exploit implicit algorithmic regularization to recover and separate the underlying corruptions. Remarkably, when trained with this simple method in practice, we demonstrate state-of-the-art test accuracy against label noise on a variety of real datasets. Furthermore, our experimental results are corroborated by theory on simplified linear models, showing that exact separation between sparse noise and low-rank data can be achieved under incoherence conditions. This work opens many interesting directions for improving over-parameterized models by using sparse over-parameterization and implicit regularization.
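The separation idea in the abstract can be illustrated on a toy linear problem (a minimal sketch, not the paper's actual training code: the constant signal, spike values, and learning rate below are all illustrative choices). Observations are a clean constant signal plus a few sparse spikes; the corruption is over-parameterized as s = u⊙u − v⊙v, and plain gradient descent from a small initialization is implicitly biased toward a sparse s, so the spikes and the clean signal separate:

```python
# Toy sketch of sparse over-parameterization (illustrative, not the paper's code).
# Observations: y_i = 3.0 (clean constant signal) + sparse spikes of height 10
# on the first 5 coordinates. We fit a scalar signal estimate c and a per-sample
# corruption s_i parameterized as u_i^2 - v_i^2, by plain gradient descent on
# the squared loss from a small initialization. The multiplicative dynamics of
# u and v grow large corruption entries quickly and leave the rest near zero,
# so c recovers the clean signal and s_hat recovers the sparse spikes.

n, num_spikes, amp = 100, 5, 10.0
y = [3.0 + (amp if i < num_spikes else 0.0) for i in range(n)]

c = 0.0                 # estimate of the clean signal
u = [1e-3] * n          # small init: crucial for the implicit sparsity bias
v = [1e-3] * n
lr = 0.002

for _ in range(2000):
    # residuals of the current fit c + u^2 - v^2 against the observations
    r = [c + u[i] * u[i] - v[i] * v[i] - y[i] for i in range(n)]
    c -= lr * 2.0 * sum(r)            # gradient step on the signal estimate
    for i in range(n):                # gradient steps on the noise factors
        u[i] -= lr * 4.0 * r[i] * u[i]
        v[i] += lr * 4.0 * r[i] * v[i]

s_hat = [u[i] * u[i] - v[i] * v[i] for i in range(n)]
print(round(c, 2))                    # close to 3.0: clean signal recovered
print(round(s_hat[0], 2))             # close to 10.0: spike recovered
print(round(s_hat[50], 3))            # close to 0.0: no spurious corruption
```

Without the u⊙u − v⊙v factorization (i.e., fitting s directly), gradient descent would spread the corruption across all coordinates; the over-parameterization plus small initialization is what supplies the sparsity-inducing implicit regularization the abstract refers to.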

Code Repositories

shengliu66/sop (official, PyTorch)

Benchmarks

| Benchmark                                  | Methodology | Accuracy (mean) |
|--------------------------------------------|-------------|-----------------|
| learning-with-noisy-labels-on-cifar-100n   | SOP+        | 67.81           |
| learning-with-noisy-labels-on-cifar-10n    | SOP+        | 95.61           |
| learning-with-noisy-labels-on-cifar-10n-1  | SOP+        | 95.28           |
| learning-with-noisy-labels-on-cifar-10n-2  | SOP         | 95.31           |
| learning-with-noisy-labels-on-cifar-10n-3  | SOP+        | 95.39           |
| learning-with-noisy-labels-on-cifar-10n-worst | SOP+     | 93.24           |
