Ben Krause; Emmanuel Kahembwe; Iain Murray; Steve Renals

Abstract
We present methodology for using dynamic evaluation to improve neural sequence models. Models are adapted to recent history via a gradient descent based mechanism, causing them to assign higher probabilities to re-occurring sequential patterns. Dynamic evaluation outperforms existing adaptation approaches in our comparisons. Dynamic evaluation improves the state-of-the-art word-level perplexities on the Penn Treebank and WikiText-2 datasets to 51.1 and 44.3 respectively, and the state-of-the-art character-level cross-entropies on the text8 and Hutter Prize datasets to 1.19 bits/char and 1.08 bits/char respectively.
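The core mechanism described above — score a segment of recent history, then take a gradient step on that segment's loss before scoring the next one — can be sketched on a toy model. The code below is a minimal illustration, not the paper's implementation: it uses a simple softmax bigram model and plain SGD (the paper applies dynamic evaluation to LSTM/mLSTM models with a more elaborate update rule), and the function name, segment length, and learning rate are all illustrative choices.

```python
import numpy as np

def dynamic_eval(W, seq, lr=0.0, seg=5):
    """Evaluate a softmax bigram model with weight matrix W on seq.

    If lr > 0, adapt W by one SGD step on each segment *after* scoring
    it (dynamic evaluation); with lr == 0 this is ordinary static
    evaluation. Returns average cross-entropy in nats per token.
    """
    W = W.copy()  # adaptation must not mutate the caller's weights
    total, n = 0.0, 0
    for start in range(0, len(seq) - 1, seg):
        grad = np.zeros_like(W)
        for t in range(start, min(start + seg, len(seq) - 1)):
            prev, nxt = seq[t], seq[t + 1]
            logits = W[prev]
            p = np.exp(logits - logits.max())
            p /= p.sum()
            total += -np.log(p[nxt])          # score before adapting
            n += 1
            g = p.copy()
            g[nxt] -= 1.0                     # d(cross-entropy)/d(logits)
            grad[prev] += g
        W -= lr * grad  # gradient step on the segment just scored
    return total / n

# On a repetitive sequence, adaptation lowers the loss: the model
# assigns higher probability to patterns it has recently seen.
W = np.zeros((4, 4))          # uniform initial model over 4 tokens
seq = [0, 1, 2] * 20
static_loss = dynamic_eval(W, seq, lr=0.0)
dynamic_loss = dynamic_eval(W, seq, lr=0.5)
print(static_loss, dynamic_loss)  # dynamic_loss < static_loss
```

Because the model only ever updates on tokens it has already been scored on, no test data leaks into the adapted weights; this is what makes dynamic evaluation a legitimate evaluation protocol rather than training on the test set.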
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| language-modelling-on-hutter-prize | mLSTM + dynamic eval | BPC: 1.08; Params: 46M |
| language-modelling-on-penn-treebank-word | AWD-LSTM + dynamic eval | Test perplexity: 51.1; Validation perplexity: 51.6; Params: 24M |
| language-modelling-on-text8 | mLSTM + dynamic eval | BPC: 1.19; Params: 45M |
| language-modelling-on-wikitext-2 | AWD-LSTM + dynamic eval | Test perplexity: 44.3; Validation perplexity: 46.4; Params: 33M |