Scaling Neural Machine Translation

Myle Ott; Sergey Edunov; David Grangier; Michael Auli

Abstract

Sequence-to-sequence learning models still require several days to reach state-of-the-art performance on large benchmark datasets using a single machine. This paper shows that reduced precision and large batch training can speed up training by nearly 5x on a single 8-GPU machine with careful tuning and implementation. On WMT'14 English-German translation, we match the accuracy of Vaswani et al. (2017) in under 5 hours when training on 8 GPUs and we obtain a new state of the art of 29.3 BLEU after training for 85 minutes on 128 GPUs. We further improve these results to 29.8 BLEU by training on the much larger Paracrawl dataset. On the WMT'14 English-French task, we obtain a state-of-the-art BLEU of 43.2 in 8.5 hours on 128 GPUs.
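The two techniques named in the abstract, half-precision (FP16) training with loss scaling and larger effective batches via gradient accumulation, can be illustrated with a minimal PyTorch sketch. This is not the authors' fairseq implementation; the toy model, batch size, learning rate, and accumulation factor below are placeholder assumptions chosen only to show the mechanics.

```python
# Minimal sketch (not the paper's fairseq code): FP16 training with dynamic
# loss scaling plus gradient accumulation to simulate a larger batch.
import torch
import torch.nn as nn

use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")

# Placeholder model standing in for the Transformer "big" architecture.
model = nn.Sequential(nn.Linear(512, 2048), nn.ReLU(), nn.Linear(2048, 512)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)
criterion = nn.MSELoss()
scaler = torch.cuda.amp.GradScaler(enabled=use_cuda)  # dynamic loss scaling for FP16

accumulation_steps = 16  # accumulate gradients to mimic a 16x larger batch

optimizer.zero_grad()
for step in range(160):
    x = torch.randn(32, 512, device=device)  # dummy mini-batch
    y = torch.randn(32, 512, device=device)

    with torch.cuda.amp.autocast(enabled=use_cuda):  # forward pass in FP16
        loss = criterion(model(x), y) / accumulation_steps

    scaler.scale(loss).backward()  # accumulate scaled gradients

    if (step + 1) % accumulation_steps == 0:  # one update per "large batch"
        scaler.step(optimizer)
        scaler.update()
        optimizer.zero_grad()
```

Accumulating gradients over several mini-batches before each optimizer step increases the effective batch size on a single GPU, the same effect the paper achieves by summing gradients across many GPUs.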

Code Repositories

pytorch/fairseq (official; PyTorch)
babangain/translation (PyTorch; mentioned in GitHub)
atreyasha/semantic-isometry-nmt (PyTorch; mentioned in GitHub)
facebookresearch/fairseq (PyTorch; mentioned in GitHub)
sfu-natlang/SFUTranslate (PyTorch; mentioned in GitHub)

Benchmarks

Benchmark: machine-translation-on-wmt2014-english-french
Methodology: Transformer Big
Metrics: BLEU score 43.2; Hardware Burden 55G

Benchmark: machine-translation-on-wmt2014-english-german
Methodology: Transformer Big
Metrics: BLEU score 29.3; Hardware Burden 9G; Number of Params 210M
