
MDCS: More Diverse Experts with Consistency Self-distillation for Long-tailed Recognition

Qihao Zhao, Chen Jiang, Wei Hu, Fan Zhang, Jun Liu


Abstract

Recently, multi-expert methods have led to significant improvements in long-tailed recognition (LTR). We identify two aspects whose improvement would further boost LTR: (1) more diverse experts; (2) lower model variance. However, previous methods did not handle either aspect well. To this end, we propose More Diverse experts with Consistency Self-distillation (MDCS) to bridge the gap left by earlier methods. Our MDCS approach consists of two core components: Diversity Loss (DL) and Consistency Self-distillation (CS). In detail, DL promotes diversity among experts by controlling their focus on different categories. To reduce model variance, we employ KL divergence to distill the richer knowledge of weakly augmented instances into the experts during self-distillation. In particular, we design Confident Instance Sampling (CIS), which selects only correctly classified instances for CS, to avoid biased or noisy knowledge. Our analysis and ablation study demonstrate that, compared with previous work, our method effectively increases expert diversity, significantly reduces model variance, and improves recognition accuracy. Moreover, DL and CS are mutually reinforcing and coupled: the diversity of experts benefits from CS, and CS cannot achieve remarkable results without DL. Experiments show MDCS outperforms the state of the art by 1% $\sim$ 2% on five popular long-tailed benchmarks: CIFAR10-LT, CIFAR100-LT, ImageNet-LT, Places-LT, and iNaturalist 2018. The code is available at https://github.com/fistyee/MDCS.
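To make the two components concrete, below is a minimal PyTorch sketch reconstructed from the abstract alone; it is not the official implementation (see https://github.com/fistyee/MDCS). The function names, the logit-adjustment form of the Diversity Loss (a scaled log class-prior with a different scale per expert), and the distillation temperature are our assumptions.

```python
import torch
import torch.nn.functional as F


def diversity_loss(logits: torch.Tensor,
                   targets: torch.Tensor,
                   class_prior: torch.Tensor,
                   lam: float) -> torch.Tensor:
    """One plausible Diversity Loss (DL): cross-entropy on prior-adjusted
    logits. Giving each expert a different `lam` steers it toward a different
    part of the class distribution (lam = 0 recovers plain cross-entropy,
    which favors head classes; larger `lam` shifts focus toward the tail)."""
    adjusted = logits + lam * class_prior.log()
    return F.cross_entropy(adjusted, targets)


def consistency_self_distillation(weak_logits: torch.Tensor,
                                  strong_logits: torch.Tensor,
                                  targets: torch.Tensor,
                                  temperature: float = 2.0) -> torch.Tensor:
    """Consistency Self-distillation (CS) with Confident Instance Sampling
    (CIS): KL divergence distilling the weakly augmented view (teacher) into
    the strongly augmented view (student), restricted to instances the weak
    view classifies correctly."""
    # CIS: keep only correctly classified weak views to avoid distilling
    # biased/noisy knowledge.
    mask = weak_logits.argmax(dim=1).eq(targets)
    if not mask.any():
        return weak_logits.new_zeros(())
    # The teacher is detached so gradients flow only through the student.
    teacher = F.softmax(weak_logits[mask].detach() / temperature, dim=1)
    student = F.log_softmax(strong_logits[mask] / temperature, dim=1)
    # T^2 keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(student, teacher, reduction="batchmean") * temperature ** 2


# Toy usage: 8 instances, 10 classes, a long-tailed class prior.
prior = torch.tensor([0.3, 0.2, 0.15, 0.1, 0.08, 0.06, 0.05, 0.03, 0.02, 0.01])
weak, strong = torch.randn(8, 10), torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = diversity_loss(strong, labels, prior, lam=1.0) \
     + consistency_self_distillation(weak, strong, labels)
```

In the paper's full training setup, each of the multiple experts would receive its own `lam` and its own CS term; the sketch shows a single expert for brevity.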

Code Repositories

fistyee/mdcs (official, PyTorch): https://github.com/fistyee/MDCS

Benchmarks

Benchmark                                   Methodology          Metrics
long-tail-learning-on-cifar-10-lt-r-50      MDCS                 Error Rate: 11.7
long-tail-learning-on-cifar-100-lt-r-100    MDCS                 Error Rate: 43.9
long-tail-learning-on-cifar-100-lt-r-50     MDCS                 Error Rate: 39.9
long-tail-learning-on-imagenet-lt           MDCS (ResNeXt-50)    Top-1 Accuracy: 61.8%
long-tail-learning-on-inaturalist-2018      MDCS (ResNet-50)     Top-1 Accuracy: 75.6%
