Can we obtain significant success in RST discourse parsing by using Large Language Models?

Aru Maekawa; Tsutomu Hirao; Hidetaka Kamigaito; Manabu Okumura

Abstract

Recently, decoder-only pre-trained large language models (LLMs), with tens of billions of parameters, have significantly impacted a wide range of natural language processing (NLP) tasks. While encoder-only and encoder-decoder pre-trained language models have already proved effective for discourse parsing, the extent to which LLMs can perform this task remains an open research question. This paper therefore explores how beneficial such LLMs are for Rhetorical Structure Theory (RST) discourse parsing. The parsing process for both the fundamental top-down and bottom-up strategies is converted into prompts that LLMs can work with. We employ Llama 2 and fine-tune it with QLoRA, which reduces the number of trainable parameters. Experimental results on three benchmark datasets, RST-DT, Instr-DT, and the GUM corpus, demonstrate that Llama 2 with 70 billion parameters in the bottom-up strategy obtained state-of-the-art (SOTA) results with significant differences. Furthermore, our parsers demonstrated generalizability: despite being trained on the GUM corpus, they obtained performance comparable to that of existing parsers trained on RST-DT when evaluated on RST-DT.
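The abstract describes two technical ingredients: casting each parsing decision (top-down splitting or bottom-up merging) as a prompt, and fine-tuning Llama 2 with QLoRA so that only a small set of low-rank adapter weights is trained on top of a 4-bit quantized base model. The sketch below is a generic illustration of such a QLoRA setup using Hugging Face transformers, peft, and bitsandbytes; the checkpoint name, LoRA hyperparameters, and prompt format are assumptions for illustration, not the authors' exact configuration (see the official repository for that).

```python
# Minimal QLoRA sketch (illustrative only, not the paper's code).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "meta-llama/Llama-2-7b-hf"  # assumed checkpoint; the paper also uses 13B/70B

# 4-bit quantization (the "Q" in QLoRA): weights are stored in NF4, compute runs in bf16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# LoRA adapters: only these low-rank matrices are updated, so the number of
# trainable parameters stays small relative to the frozen quantized base model.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumed target modules
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Hypothetical prompt for one bottom-up parsing step: the model is asked which
# adjacent spans to merge and with what nuclearity/relation label.
prompt = (
    "EDUs:\n"
    "1: The company reported higher profits\n"
    "2: because sales rose sharply.\n"
    "Which adjacent spans should be merged, and with what nuclearity and relation?"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
```

In this kind of setup, only the LoRA adapter weights receive gradients during fine-tuning, which is what keeps the memory and parameter budget far below full fine-tuning of a 7B-70B model.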

Code Repositories

nttcslab-nlp/rstparser_eacl24 (official, PyTorch)

Benchmarks

Discourse Parsing on RST-DT (Standard Parseval)

| Methodology              | Span | Nuclearity | Relation | Full |
|--------------------------|------|------------|----------|------|
| Top-down Llama 2 (7B)    | 76.3 | 65.4       | 55.2     | 53.4 |
| Top-down Llama 2 (13B)   | 78.6 | 67.9       | 57.7     | 55.6 |
| Top-down Llama 2 (70B)   | 78.8 | 68.7       | 57.7     | 56.0 |
| Bottom-up Llama 2 (7B)   | 78.2 | 67.5       | 57.6     | 55.8 |
| Bottom-up Llama 2 (13B)  | 78.3 | 68.1       | 57.8     | 56.0 |
| Bottom-up Llama 2 (70B)  | 79.8 | 70.4       | 60.0     | 58.1 |
