Rethinking Perturbations in Encoder-Decoders for Fast Training
Sho Takase, Shun Kiyono

Abstract
Perturbations are often used to regularize neural models. For neural encoder-decoders, previous studies have applied scheduled sampling (Bengio et al., 2015) and adversarial perturbations (Sato et al., 2019) as perturbations, but these methods require considerable computational time. This study therefore asks whether such approaches are efficient enough in terms of training time. We compare several perturbations for sequence-to-sequence problems with respect to computational time. Experimental results show that simple techniques such as word dropout (Gal and Ghahramani, 2016) and random replacement of input tokens achieve scores comparable to (or better than) the recently proposed perturbations, despite being faster. Our code is publicly available at https://github.com/takase/rethink_perturbations.
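The two simple perturbations highlighted in the abstract are straightforward to implement. The sketch below is a rough illustration rather than the authors' implementation: it applies word dropout (zeroing whole token-embedding vectors) and uniform random token replacement, corresponding to the Rep(Uni) entries in the benchmark table below, to a batch of sequences. Function names, tensor shapes, and the probability `p` are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation) of two simple input
# perturbations discussed in the abstract. Shapes and parameter names are
# illustrative assumptions.
import torch


def word_dropout(embeddings: torch.Tensor, p: float) -> torch.Tensor:
    """Word dropout: zero out entire token-embedding vectors with probability p.

    embeddings: (batch, seq_len, dim) float tensor of input token embeddings.
    """
    keep = torch.rand(embeddings.shape[:2], device=embeddings.device) >= p
    # Inverted-dropout rescaling so the expected embedding magnitude is unchanged.
    return embeddings * keep.unsqueeze(-1).float() / (1.0 - p)


def random_replacement(tokens: torch.Tensor, p: float,
                       vocab_size: int, pad_id: int) -> torch.Tensor:
    """Rep(Uni)-style perturbation: with probability p, replace each non-pad
    token id with one sampled uniformly from the vocabulary.

    tokens: (batch, seq_len) long tensor of input token ids.
    """
    replace = (torch.rand(tokens.shape, device=tokens.device) < p) & (tokens != pad_id)
    sampled = torch.randint(0, vocab_size, tokens.shape, device=tokens.device)
    return torch.where(replace, sampled, tokens)
```

As the abstract notes, these perturbations serve as training-time regularization only; the sketch leaves it to the caller to apply them during training and to skip them at inference.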
Code Repositories
https://github.com/takase/rethink_perturbations
Benchmarks
| Benchmark | Model | Metrics |
|---|---|---|
| Machine Translation on IWSLT2014 German-English | Transformer+Rep(Sim)+WDrop | BLEU: 36.22; Params: 37M |
| Machine Translation on WMT2014 English-German | Transformer+Rep(Uni) | BLEU: 33.89; SacreBLEU: 32.35 |
| Text Summarization on DUC 2004 Task 1 | Transformer+WDrop | ROUGE-1: 33.06; ROUGE-2: 11.45; ROUGE-L: 28.51 |
| Text Summarization on Gigaword | Transformer+WDrop | ROUGE-1: 39.66; ROUGE-2: 20.45; ROUGE-L: 36.59 |
| Text Summarization on Gigaword | Transformer+Rep(Uni) | ROUGE-1: 39.81; ROUGE-2: 20.40; ROUGE-L: 36.93 |