Rethinking Perturbations in Encoder-Decoders for Fast Training

Sho Takase Shun Kiyono

Abstract

We often use perturbations to regularize neural models. For neural encoder-decoders, previous studies applied scheduled sampling (Bengio et al., 2015) and adversarial perturbations (Sato et al., 2019) as perturbations, but these methods require considerable computational time. This study therefore asks whether such approaches are efficient enough to justify their training cost. We compare several perturbations on sequence-to-sequence problems with respect to computational time. Experimental results show that simple techniques such as word dropout (Gal and Ghahramani, 2016) and random replacement of input tokens achieve scores comparable to (or better than) the recently proposed perturbations, while being faster to train. Our code is publicly available at https://github.com/takase/rethink_perturbations.
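The two simple perturbations highlighted in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation (see their repository for that): word dropout replaces each input token with an unknown-token id with some probability, and uniform random replacement (Rep(Uni) in the benchmarks below) swaps each token for one drawn uniformly from the vocabulary. The function names and the probability parameter `p` are assumptions for this sketch.

```python
import random

def word_dropout(tokens, unk_id, p=0.1, rng=random):
    """Sketch of word dropout: replace each token id with unk_id
    with probability p, leaving the rest unchanged."""
    return [unk_id if rng.random() < p else t for t in tokens]

def random_replacement(tokens, vocab_size, p=0.1, rng=random):
    """Sketch of uniform random replacement (Rep(Uni)): replace each
    token id with one sampled uniformly from the vocabulary
    with probability p."""
    return [rng.randrange(vocab_size) if rng.random() < p else t
            for t in tokens]
```

Both transforms touch only the input token ids, so they add negligible overhead per batch, which is the efficiency argument the paper makes against scheduled sampling and adversarial perturbations.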

Code Repositories

takase/rethink_perturbations (official, PyTorch)

Benchmarks

Machine Translation on IWSLT2014 German — Transformer+Rep(Sim)+WDrop
  BLEU score: 36.22
  Number of Params: 37M

Machine Translation on WMT2014 English-German — Transformer+Rep(Uni)
  BLEU score: 33.89
  SacreBLEU: 32.35

Text Summarization on DUC 2004 Task 1 — Transformer+WDrop
  ROUGE-1: 33.06
  ROUGE-2: 11.45
  ROUGE-L: 28.51

Text Summarization on Gigaword — Transformer+WDrop
  ROUGE-1: 39.66
  ROUGE-2: 20.45
  ROUGE-L: 36.59

Text Summarization on Gigaword — Transformer+Rep(Uni)
  ROUGE-1: 39.81
  ROUGE-2: 20.40
  ROUGE-L: 36.93
