An Auto-Encoder Matching Model for Learning Utterance-Level Semantic Dependency in Dialogue Generation

Liangchen Luo; Jingjing Xu; Junyang Lin; Qi Zeng; Xu Sun

Abstract

Generating semantically coherent responses is still a major challenge in dialogue generation. Unlike conventional text generation tasks, the mapping between inputs and responses in conversations is more complicated, demanding an understanding of utterance-level semantic dependency: the relation between the overall meanings of inputs and outputs. To address this problem, we propose an Auto-Encoder Matching (AEM) model to learn such dependency. The model contains two auto-encoders and one mapping module. The auto-encoders learn the semantic representations of inputs and responses, and the mapping module learns to connect the utterance-level representations. Experimental results from automatic and human evaluations demonstrate that our model generates responses of higher coherence and fluency than baseline models. The code is available at https://github.com/lancopku/AMM
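The architecture the abstract describes (two auto-encoders plus a mapping module that connects their utterance-level representations) can be sketched in a few lines. This is a minimal NumPy illustration under assumed dimensions and simple linear/tanh layers; the layer choices, weight names, and sizes are assumptions for clarity, not the authors' implementation (see the linked repository for that).

```python
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_H = 16, 8  # assumed utterance feature size and representation size

# Source-side auto-encoder: learns an utterance-level representation of the input.
W_enc_src = rng.normal(size=(D_H, D_IN)) * 0.1
W_dec_src = rng.normal(size=(D_IN, D_H)) * 0.1

# Target-side auto-encoder: learns a representation of the response.
W_enc_tgt = rng.normal(size=(D_H, D_IN)) * 0.1
W_dec_tgt = rng.normal(size=(D_IN, D_H)) * 0.1

# Mapping module: connects source and target utterance-level representations.
W_map = rng.normal(size=(D_H, D_H)) * 0.1

def encode(W, x):
    # Encoder half of an auto-encoder: feature vector -> semantic representation.
    return np.tanh(W @ x)

def decode(W, h):
    # Decoder half: semantic representation -> feature vector.
    return W @ h

def generate(x):
    """Encode the input utterance, map its representation into the
    response space, then decode a response-side feature vector."""
    h_src = encode(W_enc_src, x)      # utterance-level semantics of the input
    h_tgt = W_map @ h_src             # mapped to the response representation space
    return decode(W_dec_tgt, h_tgt)   # decoded response-side features

x = rng.normal(size=D_IN)
y = generate(x)
print(y.shape)  # (16,)
```

In training, each auto-encoder would be fit on reconstruction of its own side, while the mapping module is trained to align the source representation with the encoded gold response; only the composed path above is used at generation time.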

Code Repositories

lancopku/AMM (official, TensorFlow)

Benchmarks

Benchmark: text-generation-on-dailydialog
Methodology: AEM+Attention
Metrics:
  BLEU-1: 14.17
  BLEU-2: 5.69
  BLEU-3: 3.78
  BLEU-4: 2.84
