Sharpness-Aware Minimization for Efficiently Improving Generalization

Pierre Foret, Ariel Kleiner, Hossein Mobahi, Behnam Neyshabur


Abstract

In today's heavily overparameterized models, the value of the training loss provides few guarantees on model generalization ability. Indeed, optimizing only the training loss value, as is commonly done, can easily lead to suboptimal model quality. Motivated by prior work connecting the geometry of the loss landscape and generalization, we introduce a novel, effective procedure for instead simultaneously minimizing loss value and loss sharpness. In particular, our procedure, Sharpness-Aware Minimization (SAM), seeks parameters that lie in neighborhoods having uniformly low loss; this formulation results in a min-max optimization problem on which gradient descent can be performed efficiently. We present empirical results showing that SAM improves model generalization across a variety of benchmark datasets (e.g., CIFAR-10, CIFAR-100, ImageNet, finetuning tasks) and models, yielding novel state-of-the-art performance for several. Additionally, we find that SAM natively provides robustness to label noise on par with that provided by state-of-the-art procedures that specifically target learning with noisy labels. We open-source our code at https://github.com/google-research/sam.
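The min-max problem in the abstract, min_w max_{||eps||_2 <= rho} L(w + eps), admits an efficient two-step approximation: first take an ascent step to the approximate worst-case neighbor w + eps_hat, then apply the gradient computed there to the original weights w. The following is a minimal sketch in PyTorch under standard training assumptions; the helper name sam_step and its signature are illustrative, not the authors' API (their official implementation is linked above), and rho is the tunable neighborhood radius.

    # Minimal sketch of one SAM update (illustrative; not the official code).
    # Objective: min_w max_{||eps||_2 <= rho} L(w + eps), approximated by
    # (1) an ascent step to w + eps_hat, (2) a descent step at w using the
    # gradient evaluated at w + eps_hat.
    import torch

    def sam_step(model, loss_fn, x, y, base_optimizer, rho=0.05):
        # Step 1: gradients at the current weights w.
        loss = loss_fn(model(x), y)
        loss.backward()

        # First-order solution of the inner max: eps_hat = rho * g / ||g||_2.
        grads = [p.grad for p in model.parameters() if p.grad is not None]
        grad_norm = torch.norm(torch.stack([g.norm(p=2) for g in grads]), p=2)
        scale = rho / (grad_norm + 1e-12)

        eps = []
        with torch.no_grad():
            for p in model.parameters():
                if p.grad is None:
                    eps.append(None)
                    continue
                e = p.grad * scale
                p.add_(e)          # move to the adversarial point w + eps_hat
                eps.append(e)

        # Step 2: gradient at w + eps_hat, applied to the original w.
        model.zero_grad()
        loss_fn(model(x), y).backward()
        with torch.no_grad():
            for p, e in zip(model.parameters(), eps):
                if e is not None:
                    p.sub_(e)      # restore the original weights w
        base_optimizer.step()       # e.g. SGD, using the sharpness-aware gradient
        model.zero_grad()
        return loss.item()

Note that each SAM update costs two forward-backward passes per batch, which is the price of the "efficient" approximation to the inner maximization.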

Code Repositories

Jannoshh/simple-sam (TensorFlow)
denizyuret/playground (PyTorch)
rollovd/LookSAM (PyTorch)
moskomule/sam.pytorch (PyTorch)
Janus-Shiau/SAM-tf2 (TensorFlow)
simon20010923/DDAMFN (PyTorch)
Yuheon/Sharp-Aware-Minimization (PyTorch)
mhassann22/GCSAM (PyTorch)
NiMlr/pynlqn
davda54/sam (PyTorch)
borealisai/perturbed-forgetting (PyTorch)

Benchmarks

Benchmark | Methodology | Metrics
fine-grained-image-classification-on-birdsnap | EffNet-L2 (SAM) | Accuracy: 90.07%
fine-grained-image-classification-on-fgvc | EffNet-L2 (SAM) | Top-1 Error Rate: 4.82%
fine-grained-image-classification-on-food-101 | EffNet-L2 (SAM) | Accuracy: 96.18%
fine-grained-image-classification-on-oxford-2 | EffNet-L2 (SAM) | Accuracy: 97.10%; Top-1 Error Rate: 2.90%
fine-grained-image-classification-on-stanford | EffNet-L2 (SAM) | Accuracy: 95.96%
image-classification-on-cifar-100 | PyramidNet (SAM) | Percentage correct: 89.7
image-classification-on-cifar-100 | CNN39 | Percentage correct: 42.64
image-classification-on-cifar-100 | EffNet-L2 (SAM) | Percentage correct: 96.08
image-classification-on-cifar-100 | CNN36 | Percentage correct: 36.07
image-classification-on-flowers-102 | EffNet-L2 (SAM) | Accuracy: 99.65%
image-classification-on-imagenet | EfficientNet-L2-475 (SAM) | Number of params: 480M; Top-1 Accuracy: 88.61%
image-classification-on-imagenet | ResNet-152 (SAM) | Top-1 Accuracy: 81.6%
