Machine Translation on WMT2016 German-English
Evaluation metric
BLEU score
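To make the metric concrete, below is a minimal corpus-level BLEU sketch in pure Python: modified n-gram precisions (n = 1..4) clipped against the reference counts, combined by a geometric mean and a brevity penalty. This is an illustration only, assuming simple whitespace tokenization; the official WMT scores are produced by standardized tools (e.g. sacreBLEU) that add smoothing and canonical tokenization, so numbers will differ.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def corpus_bleu(hypotheses, references, max_n=4):
    """Corpus BLEU (0-100): geometric mean of clipped n-gram
    precisions times a brevity penalty. Single reference per
    hypothesis, whitespace tokenization -- a sketch, not sacreBLEU."""
    clipped = [0] * max_n   # clipped n-gram matches per order
    totals = [0] * max_n    # candidate n-gram counts per order
    hyp_len = ref_len = 0
    for hyp, ref in zip(hypotheses, references):
        h, r = hyp.split(), ref.split()
        hyp_len += len(h)
        ref_len += len(r)
        for n in range(1, max_n + 1):
            h_counts = Counter(ngrams(h, n))
            r_counts = Counter(ngrams(r, n))
            # Clip each candidate n-gram count by its reference count.
            clipped[n - 1] += sum(min(c, r_counts[g])
                                  for g, c in h_counts.items())
            totals[n - 1] += max(len(h) - n + 1, 0)
    if min(clipped) == 0:
        return 0.0  # unsmoothed BLEU is zero if any order has no match
    log_prec = sum(math.log(c / t)
                   for c, t in zip(clipped, totals)) / max_n
    # Brevity penalty: penalize hypotheses shorter than the references.
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / hyp_len)
    return 100.0 * bp * math.exp(log_prec)
```

A perfect match scores 100, e.g. `corpus_bleu(["the cat sat on the mat"], ["the cat sat on the mat"])` returns 100.0, while any hypothesis missing all 4-grams of the reference scores 0 under this unsmoothed variant.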
Evaluation results
Performance of each model on this benchmark
| Model | BLEU score | Paper Title | Repository |
|---|---|---|---|
| FLAN 137B (few-shot, k=11) | 40.7 | Finetuned Language Models Are Zero-Shot Learners | |
| FLAN 137B (zero-shot) | 38.9 | Finetuned Language Models Are Zero-Shot Learners | |
| Attentional encoder-decoder + BPE | 38.6 | Edinburgh Neural Machine Translation Systems for WMT 16 | |
| Linguistic Input Features | 32.9 | Linguistic Input Features Improve Neural Machine Translation | |
| SMT + iterative backtranslation (unsupervised) | 23.05 | Unsupervised Statistical Machine Translation | |
| Unsupervised NMT + weight-sharing | 14.62 | Unsupervised Neural Machine Translation with Weight Sharing | |
| Unsupervised S2S with attention | 13.33 | Unsupervised Machine Translation Using Monolingual Corpora Only | |
| Exploiting Mono at Scale (single) | - | Exploiting Monolingual Data at Scale for Neural Machine Translation | - |