Vishwajit Kumar Vishnu; C. Chandra Sekhar

Abstract
Training memory-based transformers can require a large amount of memory and can be quite inefficient. We propose a novel two-phase training mechanism and a novel regularization technique to improve the training efficiency of memory-based transformers, which are often used for long-range context problems. For our experiments, we use Transformer-XL, one of the memory-based transformer models, as our baseline. We show that our resulting model, Skip Cross-Head Transformer-XL, outperforms the baseline on a character-level language modeling task with a similar number of parameters, and outperforms the baseline on a word-level language modeling task with almost 20% fewer parameters. Our proposed methods do not require any additional memory. We also demonstrate the effectiveness of our regularization mechanism on BERT, where it shows similar performance with a reduction of around 30% in the standard deviation of scores on multiple GLUE tasks.
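The abstract names the architecture but not its exact formulation, so the sketch below only illustrates the general shape of a memory-based (Transformer-XL-style) attention layer: cached hidden states from previous segments are concatenated to the keys and values to extend context. The `SkipCrossHeadAttention` class name, the `head_mix` cross-head mixing step, and its skip-connected placement are illustrative assumptions, not the paper's method.

```python
# Minimal sketch, assuming a Transformer-XL-style segment memory plus a
# hypothetical cross-head mixing step with a skip connection around it.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkipCrossHeadAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model, bias=False)
        self.kv_proj = nn.Linear(d_model, 2 * d_model, bias=False)
        # Assumed cross-head step: a learned mixing across the head dimension.
        self.head_mix = nn.Linear(n_heads, n_heads, bias=False)
        self.out_proj = nn.Linear(d_model, d_model, bias=False)

    def forward(self, x, memory=None):
        # x: (B, L, d_model) current segment; memory: (B, M, d_model) cached,
        # detached hidden states from previous segments (as in Transformer-XL).
        ctx = x if memory is None else torch.cat([memory, x], dim=1)
        B, L, _ = x.shape
        q = self.q_proj(x).view(B, L, self.n_heads, self.d_head).transpose(1, 2)
        k, v = self.kv_proj(ctx).chunk(2, dim=-1)
        k = k.view(B, -1, self.n_heads, self.d_head).transpose(1, 2)
        v = v.view(B, -1, self.n_heads, self.d_head).transpose(1, 2)
        # Causal masking over [memory; segment] is omitted here for brevity;
        # a real language model would pass an attn_mask.
        attn = F.scaled_dot_product_attention(q, k, v)   # (B, H, L, d_head)
        # Cross-head mixing with a skip connection (assumed placement).
        mixed = self.head_mix(attn.permute(0, 2, 3, 1))  # mix along head axis
        attn = attn + mixed.permute(0, 3, 1, 2)
        out = attn.transpose(1, 2).reshape(B, L, -1)
        return self.out_proj(out)
```

In Transformer-XL-style training, `memory` would be the previous segment's hidden states cached with `.detach()`, so gradients never flow into past segments; that is what keeps the extended context from increasing training memory.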
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| language-modelling-on-enwiki8 | Skip Cross-Head Transformer-XL | Bits per Character (BPC): 1.033; Params: 41M |
| language-modelling-on-wikitext-103 | Skip Cross-Head Transformer-XL | Test perplexity: 22.91; Validation perplexity: 21.87; Params: 122M |
| paraphrase-identification-on-quora-question-1 | BERT + SCH attn | Val Accuracy: 91.422; Val F1 Score: 88.436 |
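For context on the table's metrics, bits per character and perplexity are both deterministic transforms of the model's average cross-entropy in nats. The helper below shows the standard conversions; the specific nat values are back-computed from the table entries purely for illustration.

```python
import math

def bits_per_character(nll_nats: float) -> float:
    # BPC is the average negative log-likelihood converted from nats to bits.
    return nll_nats / math.log(2)

def perplexity(nll_nats: float) -> float:
    # Perplexity is the exponentiated average negative log-likelihood.
    return math.exp(nll_nats)

# Back-computed from the table: enwik8 BPC 1.033 corresponds to roughly
# 0.716 nats per character; WikiText-103 test perplexity 22.91 corresponds
# to roughly 3.132 nats per word.
print(bits_per_character(0.716))  # ≈ 1.033
print(perplexity(3.132))          # ≈ 22.92
```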