
Multi-view and multi-task training of RST discourse parsers

Anders Søgaard, Barbara Plank, Chloé Braud

Abstract

We experiment with different ways of training LSTM networks to predict RST discourse trees. The main challenge for RST discourse parsing is the limited amount of training data. We combat this by regularizing our models using task supervision from related tasks as well as alternative views on discourse structures. We show that a simple LSTM sequential discourse parser takes advantage of this multi-view and multi-task framework, with 12-15% error reductions over our baseline (depending on the metric) and results that rival more complex state-of-the-art parsers.
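To make the multi-task setup concrete, below is a minimal sketch (in PyTorch, not the authors' code) of the general pattern the abstract describes: a shared bi-LSTM encoder whose parameters are regularized by gradients from several output heads, one per task or view. All names, dimensions, and label inventories here are hypothetical placeholders.

```python
import torch
import torch.nn as nn

class MultiTaskLSTMTagger(nn.Module):
    """Hypothetical sketch: one shared bi-LSTM encoder, one softmax
    head per task, so every auxiliary task (e.g. an alternative view
    of the discourse structure) regularizes the same recurrent
    parameters."""

    def __init__(self, vocab_size, emb_dim, hidden_dim, task_label_sizes):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        # One output head per task, keyed by task name.
        self.heads = nn.ModuleDict({
            task: nn.Linear(2 * hidden_dim, n_labels)
            for task, n_labels in task_label_sizes.items()
        })

    def forward(self, token_ids, task):
        states, _ = self.encoder(self.embed(token_ids))
        return self.heads[task](states)  # per-token label scores


# Training alternates between tasks: each batch updates the shared
# encoder plus the head of the task the batch came from.
model = MultiTaskLSTMTagger(vocab_size=10000, emb_dim=64, hidden_dim=128,
                            task_label_sizes={"rst": 40, "aux_view": 12})
loss_fn = nn.CrossEntropyLoss()
optim = torch.optim.Adam(model.parameters())

batch = torch.randint(0, 10000, (8, 30))   # dummy token ids
gold = torch.randint(0, 40, (8, 30))       # dummy per-token RST labels
logits = model(batch, task="rst")
loss = loss_fn(logits.reshape(-1, 40), gold.reshape(-1))
loss.backward()
optim.step()
```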

Benchmarks

Benchmark: discourse-parsing-on-rst-dt
Methodology: LSTM Sequential Discourse Parser (Braud et al., 2016)
Metrics:
  RST-Parseval (Full): 47.5*
  RST-Parseval (Nuclearity): 63.6*
  RST-Parseval (Relation): 47.7*
  RST-Parseval (Span): 79.7*
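The four metrics above score the same predicted tree at increasing levels of strictness. As an illustration (this is not the official evaluation script), a Parseval-style scorer can flatten gold and predicted trees into labeled constituents over EDU spans and compute F1 on the matching tuples; the span/nuclearity/relation encoding below is an assumption for the sketch.

```python
def parseval_scores(gold, pred):
    """gold/pred: sets of (start_edu, end_edu, nuclearity, relation)
    tuples, one per internal constituent of the discourse tree."""
    views = {
        "Span":       lambda s: (s[0], s[1]),
        "Nuclearity": lambda s: (s[0], s[1], s[2]),
        "Relation":   lambda s: (s[0], s[1], s[3]),
        "Full":       lambda s: s,
    }
    scores = {}
    for name, view in views.items():
        g = {view(s) for s in gold}
        p = {view(s) for s in pred}
        tp = len(g & p)  # constituents matching under this view
        prec = tp / len(p) if p else 0.0
        rec = tp / len(g) if g else 0.0
        scores[name] = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return scores

gold = {(1, 3, "NS", "Elaboration"), (1, 5, "NN", "Joint")}
pred = {(1, 3, "NS", "Elaboration"), (2, 5, "NN", "Joint")}
print(parseval_scores(gold, pred))  # Span and Full differ on the second span
```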

