Siddhartha Brahma

Abstract
Highly regularized LSTMs achieve impressive results on several benchmark datasets in language modeling. We propose a new regularization method based on decoding the last token in the context using the predicted distribution of the next token. This biases the model towards retaining more contextual information, in turn improving its ability to predict the next token. With negligible overhead in the number of parameters and training time, our Past Decode Regularization (PDR) method achieves a word-level perplexity of 55.6 on the Penn Treebank and 63.5 on the WikiText-2 dataset using a single softmax. We also show gains by using PDR in combination with a mixture-of-softmaxes, achieving word-level perplexities of 53.8 and 60.5 on these datasets. In addition, our method achieves 1.169 bits-per-character on the Penn Treebank Character dataset for character-level language modeling. These results constitute a new state-of-the-art in their respective settings.
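To make the idea concrete, below is a minimal sketch of a PDR-style loss in PyTorch. It is not the paper's exact formulation: the linear `past_decoder`, the sizes, and the weighting `lam` are illustrative assumptions. The sketch only shows the core mechanism named in the abstract: the softmax distribution over the next token is fed to a small decoder that must reconstruct the last token of the context, and the resulting cross-entropy is added to the usual language-modeling loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sizes for illustration only.
vocab_size = 10000

# A small "past decoder" that maps the predicted next-token distribution
# back to a distribution over the previous token. A single linear layer
# is an assumption here, not necessarily the paper's architecture.
past_decoder = nn.Linear(vocab_size, vocab_size)

def pdr_loss(logits, targets, prev_tokens, lam=0.001):
    """Next-token cross-entropy plus a PDR term that decodes the last
    context token from the predicted next-token distribution.

    logits:      (batch, vocab) next-token logits from the LM
    targets:     (batch,) next-token ids
    prev_tokens: (batch,) ids of the last token in the context
    lam:         regularization weight (placeholder value)
    """
    # Standard language-modeling loss.
    nll = F.cross_entropy(logits, targets)

    # Decode the previous token from the softmax over the next token;
    # failing to reconstruct it penalizes forgetting the context.
    next_dist = F.softmax(logits, dim=-1)
    past_logits = past_decoder(next_dist)
    pdr = F.cross_entropy(past_logits, prev_tokens)

    return nll + lam * pdr
```

In training, `past_decoder` would be optimized jointly with the language model, so the regularizer only helps when the next-token distribution actually carries information about the preceding context.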
Benchmarks
| Benchmark | Methodology | Params | Validation | Test |
|---|---|---|---|---|
| language-modelling-on-penn-treebank-character | Past Decode Reg. + AWD-LSTM-MoS + dyn. eval. | 13.8M | n/a | 1.169 BPC |
| language-modelling-on-penn-treebank-word | Past Decode Reg. + AWD-LSTM-MoS + dyn. eval. | 22M | 48.0 ppl | 47.3 ppl |
| language-modelling-on-wikitext-2 | Past Decode Reg. + AWD-LSTM-MoS + dyn. eval. | 35M | 42.0 ppl | 40.3 ppl |