A framework for benchmarking class-out-of-distribution detection and its application to ImageNet

Ido Galil Mohammed Dabbah Ran El-Yaniv


Abstract

When deployed for risk-sensitive tasks, deep neural networks must be able to detect instances with labels from outside the distribution for which they were trained. In this paper we present a novel framework to benchmark the ability of image classifiers to detect class-out-of-distribution (C-OOD) instances (i.e., instances whose true labels do not appear in the training distribution) at various levels of detection difficulty. We apply this technique to ImageNet and benchmark 525 pretrained, publicly available ImageNet-1k classifiers. The code for generating a benchmark for any ImageNet-1k classifier, along with the benchmarks prepared for the above-mentioned 525 models, is available at https://github.com/mdabbah/COOD_benchmarking. The usefulness of the proposed framework and its advantage over alternative existing benchmarks are demonstrated by analyzing the results obtained for these models, which reveals numerous novel observations, including: (1) knowledge distillation consistently improves C-OOD detection performance; (2) a subset of ViTs performs better C-OOD detection than any other model; (3) the language-vision CLIP model achieves good zero-shot detection performance, with its best instance outperforming 96% of all other models evaluated; (4) accuracy and in-distribution ranking are positively correlated with C-OOD detection; and (5) we compare various confidence functions for C-OOD detection. Our companion paper, also published at ICLR 2023 ("What Can We Learn From the Selective Prediction and Uncertainty Estimation Performance of 523 ImageNet Classifiers"), examines the uncertainty estimation performance (ranking, calibration, and selective prediction) of these classifiers in an in-distribution setting.
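The confidence functions compared in the benchmark (softmax response, max-logit, entropy, and others) all map a classifier's output for an input to a scalar score used to decide whether the input is C-OOD. The following is a minimal sketch of three of these scoring rules computed directly from logits; it is a generic illustration, not the authors' implementation:

```python
import numpy as np

def confidence_scores(logits):
    """Three common confidence functions for C-OOD detection.
    Higher score = more confident the input is in-distribution.
    Generic sketch; not the benchmark's actual code."""
    logits = np.asarray(logits, dtype=float)
    # Softmax response: probability assigned to the predicted class.
    z = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    softmax_conf = probs.max(axis=-1)
    # Max-logit: the unnormalized score of the predicted class.
    max_logit_conf = logits.max(axis=-1)
    # Negative entropy: less uncertain distributions score higher.
    entropy_conf = (probs * np.log(probs + 1e-12)).sum(axis=-1)
    return softmax_conf, max_logit_conf, entropy_conf
```

A more peaked logit vector yields a higher score under all three rules; the benchmark's point is that, despite this agreement on easy cases, the rules rank models differently at harder severity levels.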

Benchmarks

Benchmark: classification on ImageNet C-OOD (class-out-of-distribution)
Model: ViT-L/32-384

| Method     | Detection AUROC (severity 0) | Detection AUROC (severity 5) | Detection AUROC (severity 10) |
|------------|------------------------------|------------------------------|-------------------------------|
| Max-logit  | 0.9958                       | 0.9632                       | 0.7748                        |
| Softmax    | 0.9915                       | 0.9293                       | 0.7000                        |
| MC Dropout | 0.9947                       | 0.9478                       | 0.7120                        |
| Entropy    | 0.9948                       | 0.9514                       | 0.7332                        |
| ODIN       | 0.9955                       | 0.9589                       | 0.7635                        |
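The detection AUROC reported above measures how well a confidence function separates in-distribution from C-OOD inputs: it is the probability that a randomly chosen in-distribution example receives a higher confidence score than a randomly chosen C-OOD example. A minimal sketch of that computation (a generic pairwise-comparison AUROC, not the benchmark's code):

```python
import numpy as np

def detection_auroc(conf_in, conf_ood):
    """AUROC for in-distribution vs. C-OOD separation: the fraction of
    (in, ood) pairs where the in-distribution sample scores higher,
    with ties counted as 0.5. Illustrative sketch only."""
    conf_in = np.asarray(conf_in, dtype=float)
    conf_ood = np.asarray(conf_ood, dtype=float)
    # Broadcast to compare every in-distribution score with every OOD score.
    diff = conf_in[:, None] - conf_ood[None, :]
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / diff.size

# Perfect separation yields AUROC 1.0; chance-level separation yields 0.5.
print(detection_auroc([3.0, 4.0], [1.0, 2.0]))  # -> 1.0
```

An AUROC of 0.5 corresponds to a detector no better than random, which is why the drop from ~0.99 at severity 0 to ~0.70-0.77 at severity 10 in the table reflects a genuinely harder C-OOD split rather than a small numerical difference.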
