
Abstract
Transformers have the potential to learn longer-term dependencies in language modeling, but are limited by a fixed-length context. We propose a novel neural architecture, Transformer-XL, that enables learning dependencies beyond a fixed length without disrupting temporal coherence. It consists of a segment-level recurrence mechanism and a novel positional encoding scheme. Our method not only captures longer-term dependencies but also resolves the context fragmentation problem. As a result, Transformer-XL learns dependencies that are 80% longer than RNNs and 450% longer than vanilla Transformers, achieves better performance on both short and long sequences, and is more than 1,800 times faster than vanilla Transformers during evaluation. Notably, we improve the state-of-the-art bpc (bits per character) / perplexity to 0.99 on enwiki8, 1.08 on text8, 18.3 on WikiText-103, 21.8 on One Billion Word, and 54.5 on Penn Treebank (without fine-tuning). When trained only on WikiText-103, Transformer-XL can generate reasonably coherent, novel text articles with thousands of tokens. Our code, pretrained models, and hyperparameters are available in both TensorFlow and PyTorch.
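As a rough illustration of the segment-level recurrence mechanism described in the abstract, the NumPy sketch below caches the hidden states of the previous segment and lets the current segment attend over that cached "memory" plus itself. This is a hypothetical single-head toy (function names, shapes, and the `mem_len` parameter are assumptions, not the authors' code), and it omits the paper's relative positional encoding scheme.

```python
import numpy as np

def attend_with_memory(query_seg, memory, d_model):
    """Single-head attention where keys/values span [memory; current segment]."""
    context = np.concatenate([memory, query_seg], axis=0)  # (mem_len + seg_len, d)
    scores = query_seg @ context.T / np.sqrt(d_model)      # (seg_len, mem_len + seg_len)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)         # softmax over context positions
    return weights @ context                               # (seg_len, d)

def run_segments(segments, mem_len, d_model):
    """Process segments left-to-right, carrying a fixed-length memory across them."""
    memory = np.zeros((0, d_model))  # empty memory before the first segment
    outputs = []
    for seg in segments:
        out = attend_with_memory(seg, memory, d_model)
        # Cache the newest hidden states, keeping at most mem_len of them.
        # In the actual model this cache is detached from the computation
        # graph, so gradients do not propagate across segment boundaries.
        memory = np.concatenate([memory, out], axis=0)[-mem_len:]
        outputs.append(out)
    return outputs

d_model, seg_len = 8, 4
rng = np.random.default_rng(0)
segments = [rng.standard_normal((seg_len, d_model)) for _ in range(3)]
outs = run_segments(segments, mem_len=6, d_model=d_model)
```

Because the memory grows only up to `mem_len` cached states, the effective context at each step extends well beyond a single segment while per-segment compute stays bounded, which is what makes evaluation so much faster than re-encoding a full fixed-length window at every position.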
Code Repositories

| Repository | Framework | Notes |
|---|---|---|
| sooftware/attentions | pytorch | Mentioned in GitHub |
| lvyufeng/bert4ms | mindspore | |
| SambhawDrag/XLNet.jl | pytorch | Mentioned in GitHub |
| TimDettmers/transformer-xl | pytorch | Mentioned in GitHub |
| benkrause/dynamiceval-transformer | tf | Mentioned in GitHub |
| mustafaaljadery/gemma-2b-10m | pytorch | Mentioned in GitHub |
| wxt1997/Transformer-Transducer | pytorch | Mentioned in GitHub |
| okkteam/Transformer-Transducer | pytorch | Mentioned in GitHub |
| cmunnis/BERT_vs_Transformer-XL | pytorch | Mentioned in GitHub |
| Jmkernes/PAR-Transformer-XL | tf | Mentioned in GitHub |
| kimiyoung/transformer-xl | pytorch | Official; Mentioned in GitHub |
| aiha-lab/Attention-Head-Pruning | pytorch | Mentioned in GitHub |
| zhdbwe/Paper-DailyReading | tf | Mentioned in GitHub |
| sh951011/Attention-Implementation | pytorch | Mentioned in GitHub |
| listenviolet/XLNet | pytorch | Mentioned in GitHub |
| facebookresearch/code-prediction-transformer | pytorch | Mentioned in GitHub |
| google-research/meliad | jax | Mentioned in GitHub |
| huggingface/transformers | pytorch | Mentioned in GitHub |
| sooftware/conformer | pytorch | Mentioned in GitHub |
| inzva/fake-academic-paper-generation | pytorch | Mentioned in GitHub |
| samwisegamjeee/pytorch-transformers | pytorch | Mentioned in GitHub |
| cedrickchee/pytorch-pretrained-BERT | pytorch | Mentioned in GitHub |
| jincan333/lot | pytorch | |
| sooftware/nlp-attentions | pytorch | Mentioned in GitHub |
| park-cheol/ASR-Conformer | pytorch | Mentioned in GitHub |
| sooftware/Attention-Implementation | pytorch | Mentioned in GitHub |
| shanghai-digital-brain-laboratory/bdm-db1 | pytorch | Mentioned in GitHub |
| opendilab/DI-engine | pytorch | |
| huggingface/xlnet | tf | Mentioned in GitHub |
| Machine-Learning-Tokyo/Poetry-GAN | | Mentioned in GitHub |
Benchmarks

| Benchmark | Method | Metrics |
|---|---|---|
| language-modelling-on-enwiki8 | Transformer-XL (12 layers) | BPC: 1.06; Params: 41M |
| language-modelling-on-enwiki8 | Transformer-XL (24 layers) | BPC: 0.99; Params: 277M |
| language-modelling-on-enwiki8 | Transformer-XL (18 layers) | BPC: 1.03; Params: 88M |
| language-modelling-on-hutter-prize | 18-layer Transformer-XL | BPC: 1.03; Params: 88M |
| language-modelling-on-hutter-prize | 12-layer Transformer-XL | BPC: 1.06; Params: 41M |
| language-modelling-on-hutter-prize | 24-layer Transformer-XL | BPC: 0.99; Params: 277M |
| language-modelling-on-one-billion-word | Transformer-XL Large | Params: 0.8B; PPL: 21.8 |
| language-modelling-on-one-billion-word | Transformer-XL Base | Params: 0.46B; PPL: 23.5 |
| language-modelling-on-penn-treebank-word | Transformer-XL | Params: 24M; Test perplexity: 54.55; Validation perplexity: 56.72 |
| language-modelling-on-text8 | Transformer-XL (24 layers) | BPC: 1.08; Params: 277M |
| language-modelling-on-wikitext-103 | Transformer-XL Large | Params: 257M; Test perplexity: 18.3; Validation perplexity: 18.2 |
| language-modelling-on-wikitext-103 | Transformer-XL Standard | Params: 151M; Test perplexity: 24.0; Validation perplexity: 23.1 |