Memory-efficient Stochastic methods for Memory-based Transformers

Vishwajit Kumar Vishnu; C. Chandra Sekhar
Abstract

Training memory-based transformers can require a large amount of memory and can be quite inefficient. We propose a novel two-phase training mechanism and a novel regularization technique to improve the training efficiency of memory-based transformers, which are often used for long-range context problems. For our experiments, we consider Transformer-XL, one of the memory-based transformer models, as our baseline. We show that our resultant model, Skip Cross-Head Transformer-XL, outperforms the baseline on the character-level language modeling task with a similar number of parameters, and outperforms the baseline on the word-level language modeling task with almost 20% fewer parameters. Our proposed methods do not require any additional memory. We also demonstrate the effectiveness of our regularization mechanism on BERT, which shows similar performance with a reduction of around 30% in the standard deviation of scores on multiple GLUE tasks.
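
The proposed model builds on Transformer-XL, whose defining feature is a segment-level memory: each layer caches detached hidden states from the previous segment, and the current segment attends over the concatenation of that memory and its own states, extending the usable context without backpropagating through it. The sketch below illustrates only this baseline memory mechanism in PyTorch; it is not the authors' code and omits the paper's two-phase training, stochastic regularization, and Skip Cross-Head attention, as well as Transformer-XL's relative positional encodings and causal masking. Names such as `MemoryAttention` and `mem_len` are illustrative assumptions.

```python
# Minimal sketch of Transformer-XL-style memory attention (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryAttention(nn.Module):
    def __init__(self, d_model=256, n_heads=4, mem_len=64):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.mem_len = mem_len
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x, memory=None):
        # x: (batch, seq, d_model); memory: cached states from the previous segment.
        ctx = x if memory is None else torch.cat([memory, x], dim=1)
        d = self.n_heads * self.d_head
        q = self.qkv(x)[..., :d]            # queries come from the current segment only
        k, v = self.qkv(ctx)[..., d:].chunk(2, dim=-1)  # keys/values span memory + segment

        def split(t):  # (batch, len, d_model) -> (batch, heads, len, d_head)
            return t.view(t.size(0), t.size(1), self.n_heads, self.d_head).transpose(1, 2)

        q, k, v = map(split, (q, k, v))
        # Causal masking and relative positional encodings are omitted for brevity.
        attn = F.softmax(q @ k.transpose(-2, -1) / self.d_head ** 0.5, dim=-1)
        y = (attn @ v).transpose(1, 2).reshape(x.size(0), x.size(1), -1)
        # New memory: detached tail of the extended context, reused by the next segment.
        new_memory = ctx[:, -self.mem_len:].detach()
        return self.out(y), new_memory

# Usage: the memory returned for one segment is fed to the next one.
layer = MemoryAttention()
out1, mem = layer(torch.randn(2, 32, 256))         # first segment, no memory
out2, mem = layer(torch.randn(2, 32, 256), mem)    # second segment attends over cached memory
```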

Benchmarks

| Benchmark | Methodology | Metrics |
| --- | --- | --- |
| language-modelling-on-enwiki8 | Skip Cross-Head Transformer-XL | Bit per Character (BPC): 1.033; Number of params: 41M |
| language-modelling-on-wikitext-103 | Skip Cross-Head Transformer-XL | Number of params: 122M; Test perplexity: 22.91; Validation perplexity: 21.87 |
| paraphrase-identification-on-quora-question-1 | BERT + SCH attn | Val F1 Score: 88.436; Val Accuracy: 91.422 |
