Self-Supervised Aggregation of Diverse Experts for Test-Agnostic Long-Tailed Recognition

Yifan Zhang, Bryan Hooi, Lanqing Hong, Jiashi Feng

Abstract

Existing long-tailed recognition methods, aiming to train class-balanced models from long-tailed data, generally assume the models would be evaluated on the uniform test class distribution. However, practical test class distributions often violate this assumption (e.g., being either long-tailed or even inversely long-tailed), which may lead existing methods to fail in real applications. In this paper, we study a more practical yet challenging task, called test-agnostic long-tailed recognition, where the training class distribution is long-tailed while the test class distribution is agnostic and not necessarily uniform. In addition to the issue of class imbalance, this task poses another challenge: the class distribution shift between the training and test data is unknown. To tackle this task, we propose a novel approach, called Self-supervised Aggregation of Diverse Experts, which consists of two strategies: (i) a new skill-diverse expert learning strategy that trains multiple experts from a single and stationary long-tailed dataset to separately handle different class distributions; (ii) a novel test-time expert aggregation strategy that leverages self-supervision to aggregate the learned multiple experts for handling unknown test class distributions. We theoretically show that our self-supervised strategy has a provable ability to simulate test-agnostic class distributions. Promising empirical results demonstrate the effectiveness of our method on both vanilla and test-agnostic long-tailed recognition. Code is available at https://github.com/Vanint/SADE-AgnosticLT.
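
Since the abstract describes the method only at a high level, the following minimal PyTorch sketch illustrates strategy (ii), the test-time self-supervised expert aggregation: learnable softmax weights over frozen, skill-diverse experts are optimized to maximize prediction stability, measured as the cosine similarity between aggregated predictions on two augmented views of the same unlabeled test batch. This is a sketch under stated assumptions, not the authors' implementation: the expert interface, the two-view data loader, and the hyperparameters (steps, lr) are hypothetical placeholders; see the linked repository for the real code.

```python
# Hedged sketch of test-time self-supervised expert aggregation.
# Assumptions (not taken from the paper's code): each expert is an
# nn.Module mapping images -> class logits; `loader` yields
# (view1, view2), two random augmentations of the same unlabeled
# test batch.
import torch
import torch.nn.functional as F

def learn_aggregation_weights(experts, loader, steps=100, lr=0.01, device="cuda"):
    """Learn simplex weights over frozen experts by maximizing the
    cosine similarity of aggregated predictions across two views."""
    for expert in experts:
        expert.eval().to(device)
    # One learnable scalar per expert; softmax keeps the weights
    # positive and summing to one.
    w = torch.zeros(len(experts), device=device, requires_grad=True)
    opt = torch.optim.SGD([w], lr=lr)
    batches = iter(loader)
    for _ in range(steps):
        try:
            view1, view2 = next(batches)
        except StopIteration:
            batches = iter(loader)
            view1, view2 = next(batches)
        view1, view2 = view1.to(device), view2.to(device)
        with torch.no_grad():  # experts stay frozen; only `w` is trained
            probs1 = torch.stack([expert(view1).softmax(-1) for expert in experts])
            probs2 = torch.stack([expert(view2).softmax(-1) for expert in experts])
        weights = torch.softmax(w, dim=0)
        agg1 = torch.einsum("e,ebc->bc", weights, probs1)  # (batch, classes)
        agg2 = torch.einsum("e,ebc->bc", weights, probs2)
        # Prediction-stability objective: no test labels needed.
        loss = -F.cosine_similarity(agg1, agg2, dim=-1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.softmax(w, dim=0).detach()
```

At inference time the learned weights simply reweight the experts' class probabilities. Because only a handful of scalars are optimized and no labels are used, the procedure is cheap to run at deployment time, which is what makes aggregation workable under an unknown test class distribution.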

Code Repositories

Vanint/TADE-AgnosticLT (official, PyTorch; mentioned on GitHub)
Vanint/SADE-AgnosticLT (official, PyTorch; mentioned on GitHub)

Benchmarks

Benchmark                                   Methodology               Metrics
image-classification-on-inaturalist-2018    TADE (ResNet-50)          Top-1 Accuracy: 72.9%
long-tail-learning-on-cifar-10-lt-r-10      TADE                      Error Rate: 9.2
long-tail-learning-on-cifar-10-lt-r-10      RIDE                      Error Rate: 10.3
long-tail-learning-on-cifar-10-lt-r-100     TADE                      Error Rate: 16.2
long-tail-learning-on-cifar-100-lt-r-10     TADE                      Error Rate: 36.4
long-tail-learning-on-cifar-100-lt-r-100    TADE                      Error Rate: 50.2
long-tail-learning-on-cifar-100-lt-r-50     TADE                      Error Rate: 46.1
long-tail-learning-on-imagenet-lt           TADE (ResNeXt101-32x4d)   Top-1 Accuracy: 61.4
long-tail-learning-on-imagenet-lt           TADE (ResNeXt-50)         Top-1 Accuracy: 58.8
long-tail-learning-on-inaturalist-2018      TADE                      Top-1 Accuracy: 72.9%
long-tail-learning-on-inaturalist-2018      TADE (ResNet-152)         Top-1 Accuracy: 77%
long-tail-learning-on-places-lt             TADE                      Top-1 Accuracy: 40.9 / 41.3
