Pretraining the Noisy Channel Model for Task-Oriented Dialogue

Qi Liu; Lei Yu; Laura Rimell; Phil Blunsom

Abstract

Direct decoding for task-oriented dialogue is known to suffer from the explaining-away effect, manifested in models that prefer short and generic responses. Here we argue for the use of Bayes' theorem to factorize the dialogue task into two models: the distribution of the context given the response, and the prior over the response itself. This approach, an instantiation of the noisy channel model, both mitigates the explaining-away effect and allows the principled incorporation of large pretrained models for the response prior. We present extensive experiments showing that a noisy channel model decodes better responses than direct decoding, and that a two-stage pretraining strategy, employing both open-domain and task-oriented dialogue data, improves over randomly initialized models.
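
The factorization the abstract describes is Bayes' rule applied to response selection: p(response | context) ∝ p(context | response) · p(response), so a candidate is scored both by how well it explains the context (the channel model) and by how plausible it is on its own (the prior), rather than by the direct model alone. The sketch below illustrates this idea as n-best reranking with toy placeholder scorers; the function names, the weights lam and mu, and the length/overlap heuristics are illustrative assumptions, not the paper's implementation, which uses large pretrained sequence models for each term.

```python
import math

# Toy stand-in scorers. The paper uses large pretrained sequence models for
# p(context | response) and p(response); these placeholders exist only so the
# reranking logic below runs end to end.

def channel_logprob(context: str, response: str) -> float:
    # log p(context | response): reward responses that explain the context
    # (here, crude word overlap as a placeholder).
    overlap = len(set(context.split()) & set(response.split()))
    return math.log(1 + overlap)

def prior_logprob(response: str) -> float:
    # log p(response): a fluency prior; the placeholder mildly prefers longer
    # responses, the pressure that counteracts short generic outputs.
    return math.log(1 + len(response.split()))

def direct_logprob(context: str, response: str) -> float:
    # log p(response | context): the direct model that proposes candidates;
    # the placeholder mimics its bias toward short outputs.
    return -0.1 * len(response.split())

def noisy_channel_score(context: str, response: str,
                        lam: float = 1.0, mu: float = 1.0) -> float:
    # Bayes factorization: log p(r | c) ∝ log p(c | r) + log p(r),
    # interpolated with the direct model, as in n-best reranking.
    return (lam * channel_logprob(context, response)
            + mu * prior_logprob(response)
            + direct_logprob(context, response))

if __name__ == "__main__":
    context = "i need a cheap hotel in the north with free parking"
    candidates = [
        "okay .",  # short and generic: favored by the direct model alone
        "sure , there are two cheap hotels in the north with free parking .",
    ]
    print(max(candidates, key=lambda r: noisy_channel_score(context, r)))
```

Under this scoring the longer, informative candidate wins because the channel and prior terms outweigh the direct model's preference for brevity, which is the explaining-away mitigation the abstract claims.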

Benchmarks

Benchmark: End-to-End Dialogue Modelling on MultiWOZ 2.0
Methodology: Noisy Channel Model
Metrics:
  BLEU: 20.6
  MultiWOZ (Inform): 86.9
  MultiWOZ (Success): 76.2
