Effective Sequence-to-Sequence Dialogue State Tracking

Jeffrey Zhao, Mahdis Mahdieh, Ye Zhang, Yuan Cao, Yonghui Wu


Abstract

Sequence-to-sequence models have been applied to a wide variety of NLP tasks, but how to properly use them for dialogue state tracking has not been systematically investigated. In this paper, we study this problem from the perspectives of pre-training objectives as well as the formats of context representations. We demonstrate that the choice of pre-training objective makes a significant difference to state tracking quality. In particular, we find that masked span prediction is more effective than auto-regressive language modeling. We also explore using Pegasus, a span prediction-based pre-training objective for text summarization, for the state tracking model. We find that pre-training on the seemingly distant summarization task works surprisingly well for dialogue state tracking. In addition, we find that while the recurrent state context representation also works reasonably well, the model may have a hard time recovering from earlier mistakes. Our experiments on the MultiWOZ 2.1-2.4, WOZ 2.0, and DSTC2 datasets yield consistent observations.
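To illustrate the masked span prediction objective discussed above, the sketch below (our own assumption of a T5-style setup, not the authors' code) corrupts a token sequence: each masked span is replaced by a sentinel token in the encoder input, and the decoder target lists each sentinel followed by the tokens it hides.

```python
def span_corrupt(tokens, spans):
    """T5-style span corruption sketch (illustrative, not the paper's code).

    tokens: list of string tokens.
    spans:  list of (start, length) pairs marking which spans to mask;
            a real implementation would sample these randomly.
    Returns (inputs, targets): the encoder input with each masked span
    replaced by a sentinel, and the decoder target that reconstructs
    the masked spans after their sentinels.
    """
    masked = set()
    for start, length in spans:
        masked.update(range(start, start + length))

    inputs, targets = [], []
    sentinel = 0
    i = 0
    while i < len(tokens):
        if i in masked:
            # Replace the whole contiguous masked span with one sentinel.
            tok = f"<extra_id_{sentinel}>"
            inputs.append(tok)
            targets.append(tok)
            while i < len(tokens) and i in masked:
                targets.append(tokens[i])
                i += 1
            sentinel += 1
        else:
            inputs.append(tokens[i])
            i += 1
    # Final sentinel terminates the target sequence, as in T5.
    targets.append(f"<extra_id_{sentinel}>")
    return inputs, targets
```

For example, masking "quick brown" and "over" in "the quick brown fox jumps over the lazy dog" yields the input `the <extra_id_0> fox jumps <extra_id_1> the lazy dog` and the target `<extra_id_0> quick brown <extra_id_1> over <extra_id_2>`.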

Code Repositories

smartyfh/MultiWOZ2.4 (official)

Benchmarks

Benchmark                                      Methodology   Metrics
dialogue-state-tracking-on-second-dialogue     T5 (span)     Joint: 73.6
dialogue-state-tracking-on-wizard-of-oz        T5 (span)     Joint: 91
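The "Joint" figures above are joint goal accuracy: the fraction of dialogue turns whose full predicted state exactly matches the gold state across all slots. A minimal sketch of the metric (slot names and values in the example are hypothetical):

```python
def joint_goal_accuracy(predicted_states, gold_states):
    """Joint goal accuracy sketch: a turn counts as correct only when
    the entire predicted dialogue state (all slot-value pairs) matches
    the gold state exactly. States are dicts mapping slot name -> value.
    """
    assert len(predicted_states) == len(gold_states)
    correct = sum(p == g for p, g in zip(predicted_states, gold_states))
    return correct / len(gold_states)
```

A single wrong or missing slot makes the whole turn incorrect, which is why joint accuracy is much stricter than per-slot accuracy.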
