On Mixup Training: Improved Calibration and Predictive Uncertainty for Deep Neural Networks

Sunil Thulasidasan; Gopinath Chennupati; Jeff Bilmes; Tanmoy Bhattacharya; Sarah Michalak


Abstract

Mixup (Zhang et al., 2017) is a recently proposed method for training deep neural networks in which additional samples are generated during training by convexly combining random pairs of images and their associated labels. While simple to implement, it has been shown to be a surprisingly effective method of data augmentation for image classification: DNNs trained with mixup show noticeable gains in classification performance on a number of image classification benchmarks. In this work, we discuss a hitherto untouched aspect of mixup training -- the calibration and predictive uncertainty of models trained with mixup. We find that DNNs trained with mixup are significantly better calibrated -- i.e., the predicted softmax scores are much better indicators of the actual likelihood of a correct prediction -- than DNNs trained in the regular fashion. We conduct experiments on a number of image classification architectures and datasets -- including large-scale datasets like ImageNet -- and find this to be the case. Additionally, we find that merely mixing features does not result in the same calibration benefit and that the label smoothing in mixup training plays a significant role in improving calibration. Finally, we also observe that mixup-trained DNNs are less prone to over-confident predictions on out-of-distribution and random-noise data. We conclude that the typical overconfidence seen in neural networks, even on in-distribution data, is likely a consequence of training with hard labels, suggesting that mixup be employed for classification tasks where predictive uncertainty is a significant concern.
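The convex combination of image pairs and their labels that the abstract describes can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the function name, the `alpha=0.2` default for the Beta-distribution parameter, and the one-hot label format are assumptions for the example.

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Convexly combine a batch with a randomly shuffled copy of itself.

    x: array of shape (batch, ...) holding the inputs (e.g. images).
    y: array of shape (batch, num_classes) holding one-hot labels.
    alpha: Beta-distribution parameter controlling interpolation strength
           (the 0.2 default is an assumption, not a value from this paper).
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)        # mixing coefficient in [0, 1]
    perm = rng.permutation(len(x))      # random pairing of samples
    x_mixed = lam * x + (1.0 - lam) * x[perm]
    y_mixed = lam * y + (1.0 - lam) * y[perm]  # soft ("smoothed") labels
    return x_mixed, y_mixed
```

Note that the mixed labels are soft rather than one-hot; the abstract's finding is that this label-smoothing effect, not the feature mixing alone, drives much of the calibration improvement.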

Code Repositories

- MacroMayhem/OnMixup (PyTorch)
- paganpasta/onmixup (PyTorch)

Benchmarks

| Benchmark | Methodology | Percentage correct |
|---|---|---|
| Out-of-distribution detection on STL-10 | Dropout (ImageNet) | 78.93 |
| Out-of-distribution detection on STL-10 | Baseline (Gaussian) | 73.28 |
| Out-of-distribution detection on STL-10 | Mixup (Gaussian) | 95.93 |
| Out-of-distribution detection on STL-10 | Baseline (ImageNet) | 80.57 |
| Out-of-distribution detection on STL-10 | Dropout (Gaussian) | 70.57 |
| Out-of-distribution detection on STL-10 | Mixup (ImageNet) | 83.28 |
