Xinbei Ma, Zhuosheng Zhang, Hai Zhao

Abstract
Multi-party multi-turn dialogue comprehension brings unprecedented challenges in handling complicated scenarios with multiple speakers and criss-crossed discourse relationships among speaker-aware utterances. Most existing methods treat dialogue contexts as plain text and pay insufficient attention to crucial speaker-aware clues. In this work, we propose an enhanced speaker-aware model with masking attention and heterogeneous graph networks to comprehensively capture discourse clues from both speaker properties and speaker-aware relationships. With such comprehensive speaker-aware modeling, experimental results show that our speaker-aware model achieves state-of-the-art performance on the benchmark dataset Molweni. Case analysis shows that our model strengthens the connections between utterances and their own speakers and captures speaker-aware discourse relations, which are critical for dialogue modeling.
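To illustrate the masking-attention idea mentioned above, the following is a minimal sketch (not the authors' released code) of speaker-aware masking attention: tokens are restricted to attend only to tokens uttered by the same speaker, so speaker-specific context is aggregated separately from the ordinary full-attention pass. The single-head formulation, tensor shapes, and function names are illustrative assumptions.

```python
# Minimal sketch of speaker-aware masking attention (illustrative, single-head).
import torch
import torch.nn.functional as F

def speaker_mask(speaker_ids: torch.Tensor) -> torch.Tensor:
    """Boolean mask [seq, seq]: True where query and key tokens share a speaker."""
    return speaker_ids.unsqueeze(1).eq(speaker_ids.unsqueeze(0))

def masked_attention(hidden: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Scaled dot-product attention restricted by a boolean mask.

    hidden: [seq, dim] token representations (e.g. from a pretrained encoder).
    mask:   [seq, seq], True = attention allowed.
    """
    d = hidden.size(-1)
    scores = hidden @ hidden.transpose(0, 1) / d ** 0.5   # [seq, seq]
    scores = scores.masked_fill(~mask, float("-inf"))     # block cross-speaker attention
    return F.softmax(scores, dim=-1) @ hidden             # [seq, dim]

# Toy example: 6 tokens from two speakers, interleaved utterance by utterance.
hidden = torch.randn(6, 16)
speaker_ids = torch.tensor([0, 0, 1, 1, 0, 0])
speaker_ctx = masked_attention(hidden, speaker_mask(speaker_ids))
print(speaker_ctx.shape)  # torch.Size([6, 16])
```

In this sketch the speaker-masked representation would be combined with the unmasked dialogue representation downstream; the heterogeneous graph component over speaker-aware discourse relations is not reproduced here.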
Benchmarks
| Benchmark | Methodology | EM | F1 |
|---|---|---|---|
| question-answering-on-friendsqa | Ma et al. - ELECTRA | 58.7 | 75.4 |
| question-answering-on-molweni | Ma et al. - ELECTRA | 58.6 | 72.2 |