
Abstract
Many advances in natural language processing have been built on more expressive models of how inputs interact with the context in which they occur. Although recurrent neural networks have enjoyed some success, they still lack the generalization and systematicity ultimately required for modelling language. In this work, we propose an extension to the classic Long Short-Term Memory (LSTM) in the form of mutual gating between the current input and the previous output. This mechanism allows a richer space of interactions between inputs and their context to be modelled. Equivalently, our model can be viewed as making the transition function given by the LSTM context-dependent. Experiments show markedly improved generalization on language modelling: perplexity drops by 3-4 points on Penn Treebank and Wikitext-2, and bits per character (bpc) drops by 0.01-0.05 on four character-based datasets. We establish a new state of the art on all datasets except Enwik8, where we substantially close the gap between LSTM and Transformer models.
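For illustration, here is a minimal PyTorch sketch of the mutual gating step described above: the input and the previous hidden state alternately rescale each other for a few rounds before a standard LSTM update. The class name MogrifierLSTMCell, the default of five rounds, and the weight layout are assumptions for this sketch, not the official deepmind/lamb implementation.

```python
import torch
import torch.nn as nn


class MogrifierLSTMCell(nn.Module):
    """Sketch: mutually gate x and h_prev, then run a standard LSTM cell."""

    def __init__(self, input_size: int, hidden_size: int, rounds: int = 5):
        super().__init__()
        self.rounds = rounds  # number of alternating gating rounds (assumed 5)
        self.lstm = nn.LSTMCell(input_size, hidden_size)
        # Odd rounds: h gates x via Q; even rounds: x gates h via R.
        self.q = nn.ModuleList(
            nn.Linear(hidden_size, input_size, bias=False)
            for _ in range((rounds + 1) // 2)
        )
        self.r = nn.ModuleList(
            nn.Linear(input_size, hidden_size, bias=False)
            for _ in range(rounds // 2)
        )

    def mogrify(self, x: torch.Tensor, h: torch.Tensor):
        for i in range(1, self.rounds + 1):
            if i % 2:  # odd round: previous output gates the input
                x = 2 * torch.sigmoid(self.q[i // 2](h)) * x
            else:      # even round: (gated) input gates the previous output
                h = 2 * torch.sigmoid(self.r[i // 2 - 1](x)) * h
        return x, h

    def forward(self, x: torch.Tensor, state):
        h, c = state
        x, h = self.mogrify(x, h)       # mutual gating before the update
        return self.lstm(x, (h, c))      # standard LSTM transition


# Usage: one step on a batch of 8 with 64-d inputs and a 128-d state.
cell = MogrifierLSTMCell(64, 128)
x = torch.randn(8, 64)
h, c = torch.zeros(8, 128), torch.zeros(8, 128)
h, c = cell(x, (h, c))
```

Note the factor of 2: with near-zero weights, 2·σ(·) ≈ 1, so the gating starts out close to the identity and the cell initially behaves like a plain LSTM.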
Code Repositories
- microcoder-py/mogrifier-lstm (TensorFlow)
- RMichaelSwan/MogrifierLSTM (PyTorch)
- deepmind/lamb (official, TensorFlow)
Benchmarks
| Benchmark | Method | Params | Metrics |
|---|---|---|---|
| language-modelling-on-enwiki8 | LSTM | 48M | 1.195 BPC |
| language-modelling-on-enwiki8 | Mogrifier LSTM | 48M | 1.146 BPC |
| language-modelling-on-hutter-prize | Mogrifier LSTM | 96M | 1.122 BPC |
| language-modelling-on-hutter-prize | Mogrifier LSTM + dynamic eval | 96M | 0.988 BPC |
| language-modelling-on-penn-treebank-character | Mogrifier LSTM + dynamic eval | 24M | 1.083 BPC |
| language-modelling-on-penn-treebank-character | Mogrifier LSTM | 24M | 1.120 BPC |
| language-modelling-on-penn-treebank-word | Mogrifier LSTM + dynamic eval | 24M | 44.9 test / 44.8 validation perplexity |
| language-modelling-on-wikitext-2 | Mogrifier LSTM | 35M | 55.1 test / 57.3 validation perplexity |
| language-modelling-on-wikitext-2 | Mogrifier LSTM + dynamic eval | 35M | 38.6 test / 40.2 validation perplexity |