Classical Structured Prediction Losses for Sequence to Sequence Learning

Sergey Edunov, Myle Ott, Michael Auli, David Grangier, Marc'Aurelio Ranzato

Abstract

There has been much recent work on training neural attention models at the sequence level using either reinforcement-learning-style methods or by optimizing the beam. In this paper, we survey a range of classical objective functions that have been widely used to train linear models for structured prediction and apply them to neural sequence to sequence models. Our experiments show that these losses can perform surprisingly well, slightly outperforming beam search optimization in a like-for-like setup. We also report new state-of-the-art results on both IWSLT'14 German-English translation and Gigaword abstractive summarization. On the larger WMT'14 English-French translation task, sequence-level training achieves 41.5 BLEU, which is on par with the state of the art.
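One of the classical sequence-level objectives the paper studies is minimum risk training: the expected task cost (e.g. 1 − sentence-BLEU) of candidate hypotheses under the model's renormalized distribution over a candidate set. The sketch below illustrates that expectation in plain Python; the function name, the candidate scores/costs, and the `alpha` scaling hyperparameter are illustrative assumptions, not the paper's implementation (which lives in pytorch/fairseq).

```python
import math

def min_risk_loss(scores, costs, alpha=1.0):
    """Expected risk over a candidate set (illustrative sketch).

    scores: model log-scores, one per candidate hypothesis
    costs:  task cost per candidate, e.g. 1 - sentence-BLEU
    alpha:  assumed scaling/smoothing hyperparameter for the distribution
    """
    # Renormalize scaled scores into a distribution over the candidates
    # (softmax with a max-shift for numerical stability).
    m = max(alpha * s for s in scores)
    exps = [math.exp(alpha * s - m) for s in scores]
    z = sum(exps)
    probs = [e / z for e in exps]
    # Expected cost under the model's distribution over the candidate set.
    return sum(p * c for p, c in zip(probs, costs))

# Three candidate translations: log-scores and 1 - BLEU costs (made-up numbers).
risk = min_risk_loss(scores=[-1.0, -2.0, -3.0], costs=[0.2, 0.5, 0.9])
```

Minimizing this quantity pushes probability mass toward low-cost candidates; with uniform scores it reduces to the mean cost of the candidate set.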

Code Repositories

pytorch/fairseq (official)

Benchmarks

Benchmark: machine-translation-on-iwslt2014-german
Methodology: Minimum Risk Training [Edunov2017]
Metrics: BLEU score: 32.84

Benchmark: machine-translation-on-iwslt2015-german
Methodology: ConvS2S+Risk
Metrics: BLEU score: 32.93
