Machine Translation on WMT2014 English-French

Evaluation Metric

BLEU score
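
All entries below report corpus-level BLEU on the WMT2014 English-French test set. As a rough illustration of how such a score is computed in practice, here is a minimal sketch using the sacrebleu library (an assumption on our part; the papers listed use various BLEU implementations and tokenizations, so scores are not always strictly comparable across rows). The example sentences are invented for illustration.

```python
# Minimal sketch: corpus-level BLEU with sacrebleu (pip install sacrebleu).
import sacrebleu

# Hypotheses: one system output per source sentence.
hypotheses = [
    "the cat sat on the mat",
    "there is a book on the table",
]

# References: one list per reference set, aligned with the hypotheses.
references = [[
    "the cat sat on the mat",
    "a book is on the table",
]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}")  # corpus-level score on a 0-100 scale
```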

Evaluation Results

Performance of various models on this benchmark

| Model | BLEU score | Paper Title |
| --- | --- | --- |
| Transformer+BT (ADMIN init) | 46.4 | Very Deep Transformers for Neural Machine Translation |
| Noisy back-translation | 45.6 | Understanding Back-Translation at Scale |
| mRASP+Fine-Tune | 44.3 | Pre-training Multilingual Neural Machine Translation by Leveraging Alignment Information |
| Transformer + R-Drop | 43.95 | R-Drop: Regularized Dropout for Neural Networks |
| Admin | 43.8 | Understanding the Difficulty of Training Transformers |
| Transformer (ADMIN init) | 43.8 | Very Deep Transformers for Neural Machine Translation |
| BERT-fused NMT | 43.78 | Incorporating BERT into Neural Machine Translation |
| MUSE (Parallel Multi-Scale Attention) | 43.5 | MUSE: Parallel Multi-Scale Attention for Sequence to Sequence Learning |
| T5 | 43.4 | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
| Local Joint Self-attention | 43.3 | Joint Source-Target Self Attention with Locality Constraints |
| Depth Growing | 43.27 | Depth Growing for Neural Machine Translation |
| DynamicConv | 43.2 | Pay Less Attention with Lightweight and Dynamic Convolutions |
| Transformer Big | 43.2 | Scaling Neural Machine Translation |
| TaLK Convolutions | 43.2 | Time-aware Large Kernel Convolutions |
| LightConv | 43.1 | Pay Less Attention with Lightweight and Dynamic Convolutions |
| FLOATER-large | 42.7 | Learning to Encode Position for Transformer with Continuous Dynamical Model |
| OmniNetP | 42.6 | OmniNet: Omnidirectional Representations from Transformers |
| T2R + Pretrain | 42.1 | Finetuning Pretrained Transformers into RNNs |
| Transformer Big + MoS | 42.1 | Fast and Simple Mixture of Softmaxes with BPE and Hybrid-LightRNN for Language Generation |
| Synthesizer (Random + Vanilla) | 41.85 | Synthesizer: Rethinking Self-Attention in Transformer Models |