Large Batch Optimization for Deep Learning: Training BERT in 76 minutes

Yang You, Jing Li, Sashank Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan Song, James Demmel, Kurt Keutzer, Cho-Jui Hsieh

Abstract

Training large deep neural networks on massive datasets is computationally very challenging. There has been a recent surge of interest in using large batch stochastic optimization methods to tackle this issue. The most prominent algorithm in this line of research is LARS, which, by employing layerwise adaptive learning rates, trains ResNet on ImageNet in a few minutes. However, LARS performs poorly for attention models like BERT, indicating that its performance gains are not consistent across tasks. In this paper, we first study a principled layerwise adaptation strategy to accelerate training of deep neural networks using large mini-batches. Using this strategy, we develop a new layerwise adaptive large batch optimization technique called LAMB; we then provide convergence analysis of LAMB as well as LARS, showing convergence to a stationary point in general nonconvex settings. Our empirical results demonstrate the superior performance of LAMB across various tasks such as BERT and ResNet-50 training with very little hyperparameter tuning. In particular, for BERT training, our optimizer enables the use of very large batch sizes of 32868 without any degradation of performance. By increasing the batch size to the memory limit of a TPUv3 Pod, BERT training time can be reduced from 3 days to just 76 minutes (Table 1). The LAMB implementation is available at https://github.com/tensorflow/addons/blob/master/tensorflow_addons/optimizers/lamb.py
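
The layerwise adaptation described in the abstract amounts to an Adam-style update that is rescaled, per layer, by a trust ratio of the form ||w|| / ||update||. The snippet below is a minimal NumPy sketch of one such per-layer step; the function name, default hyperparameters, and the fallback to a trust ratio of 1.0 are illustrative assumptions and are not taken from the official TensorFlow implementation linked above.

```python
import numpy as np

def lamb_update(param, grad, m, v, step, lr=1e-3,
                beta1=0.9, beta2=0.999, eps=1e-6, weight_decay=0.01):
    """One sketched LAMB step for a single layer's parameter tensor.

    Adam-style first/second moments are combined with a layerwise
    trust ratio ||param|| / ||update|| that rescales the step.
    """
    # Adam moment estimates with bias correction
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** step)
    v_hat = v / (1 - beta2 ** step)

    # Adam direction plus decoupled weight decay
    update = m_hat / (np.sqrt(v_hat) + eps) + weight_decay * param

    # Layerwise trust ratio: scale the step by ||w|| / ||update||
    w_norm = np.linalg.norm(param)
    u_norm = np.linalg.norm(update)
    trust_ratio = w_norm / u_norm if w_norm > 0 and u_norm > 0 else 1.0

    param = param - lr * trust_ratio * update
    return param, m, v
```

Applied independently to each layer's parameters, this rescaling keeps the effective step size well matched to each layer's weight magnitude, which is what makes very large batch sizes usable in practice.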

Code Repositories

liuqiangict/lamb_optimizer (tf)
cybertronai/pytorch-lamb (pytorch)
hannibal046/pluglm (pytorch)
meltnur/speed (pytorch)
ymcui/LAMB_Optimizer_TF (tf)
frgfm/Holocron (pytorch)
Ankur3107/awesome-daily-blog (tf)
jxbz/fromage (pytorch)
skyday123/pytorch-lamb (pytorch)
ShadenSmith/ghpages-test (pytorch)
btahir/tensorflow-LAMB (tf)
Smerity/pytorch-lamb (pytorch)
huchen365/ds (pytorch)
jiaowoguanren0615/MambaVision (pytorch)
kaushaltrivedi/fast-bert (pytorch)
utterworks/fast-bert (pytorch)
zaradana/Fast_BERT (pytorch)
dimakarp1996/LAMB-keras
huggingface/pytorch-image-models (pytorch)
fastalgo/imagenet_resnet50_lamb (tf)
zhuchen03/maxva (pytorch)
bojone/tiger (tf)

Benchmarks

Benchmark | Methodology | Metrics
question-answering-on-squad11-dev | BERT large (LAMB optimizer) | F1: 90.584
