
Attention Is All You Need

Abstract

The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
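
For readers unfamiliar with the core operation the abstract refers to, the sketch below implements scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V, the formula given in the paper's body (not quoted in this abstract). It is a minimal NumPy illustration with illustrative names and toy shapes of our choosing, not the authors' reference implementation, and it omits masking and multi-head projections.

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        """Compute softmax(Q K^T / sqrt(d_k)) V over the key dimension."""
        d_k = Q.shape[-1]
        # Similarity scores between each query and each key, scaled by sqrt(d_k)
        scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)
        # Numerically stable softmax over the last (key) axis
        scores = scores - scores.max(axis=-1, keepdims=True)
        weights = np.exp(scores)
        weights = weights / weights.sum(axis=-1, keepdims=True)
        # Each output is a convex combination of the value vectors
        return weights @ V

    # Toy usage (shapes are hypothetical): 4 queries, 6 key/value pairs, d_k = d_v = 8
    rng = np.random.default_rng(0)
    Q = rng.standard_normal((4, 8))
    K = rng.standard_normal((6, 8))
    V = rng.standard_normal((6, 8))
    out = scaled_dot_product_attention(Q, K, V)
    print(out.shape)  # (4, 8)

The division by sqrt(d_k) is the paper's stated guard against large dot products pushing the softmax into regions of vanishing gradient; it is what lets attention replace recurrence as the sole mechanism relating positions.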

