
Incorporating Graph Information in Transformer-based AMR Parsing

Pavlo Vasylenko; Pere-Lluís Huguet Cabot; Abelardo Carlos Martínez Lorenzo; Roberto Navigli


Abstract

Abstract Meaning Representation (AMR) is a Semantic Parsing formalism that aims at providing a semantic graph abstraction representing a given text. Current approaches are based on autoregressive language models such as BART or T5, fine-tuned through Teacher Forcing to obtain a linearized version of the AMR graph from a sentence. In this paper, we present LeakDistill, a model and a method that explore a modification to the Transformer architecture, using structural adapters to explicitly incorporate graph information into the learned representations and improve AMR parsing performance. Our experiments show how, by employing word-to-node alignment to embed graph structural information into the encoder at training time, we can obtain state-of-the-art AMR parsing through self-knowledge distillation, even without the use of additional data. We release the code at http://www.github.com/sapienzanlp/LeakDistill.
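The paper itself details the architecture; purely as a rough illustration of the two ideas the abstract names, the PyTorch sketch below shows (a) a structural adapter, here rendered as a simple bottleneck graph-convolution block with a residual connection that mixes each token's hidden state with its graph neighbors' states via an adjacency matrix derived from word-to-node alignments, and (b) a self-knowledge distillation loss, here a KL divergence from the graph-informed "teacher" pass to the plain-text "student" pass. All class and function names, dimensions, and the exact adapter and loss formulations are assumptions made for this sketch, not the authors' implementation.

```python
# Illustrative sketch only -- NOT the LeakDistill implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class StructuralAdapter(nn.Module):
    """Hypothetical bottleneck adapter: mixes each token's hidden state with
    its graph neighbors' states; adjacency comes from word-to-node alignment."""

    def __init__(self, hidden_dim: int, adapter_dim: int = 128):
        super().__init__()
        self.down = nn.Linear(hidden_dim, adapter_dim)  # down-projection
        self.up = nn.Linear(adapter_dim, hidden_dim)    # up-projection

    def forward(self, hidden: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq, hidden_dim); adj: (batch, seq, seq) with 0/1 edges
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        neighbors = (adj / deg) @ F.relu(self.down(hidden))  # mean over neighbors
        return hidden + self.up(neighbors)                   # residual connection


def self_distillation_loss(student_logits: torch.Tensor,
                           teacher_logits: torch.Tensor,
                           temperature: float = 1.0) -> torch.Tensor:
    """KL divergence over the output vocabulary; the 'teacher' is the same
    model run with graph information, the 'student' the plain-text pass."""
    t = temperature
    teacher = F.softmax(teacher_logits.detach() / t, dim=-1)
    student = F.log_softmax(student_logits / t, dim=-1)
    return F.kl_div(student, teacher, reduction="batchmean") * t * t
```

At inference time no gold graph is available, so only the plain-text pass is used; as the abstract suggests, the distillation term is what lets the structural information injected at training time carry over into the text-only encoder.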

Code Repositories

sapienzanlp/leakdistill (official, PyTorch)

Benchmarks

Benchmark                    Methodology          Metrics
AMR parsing on LDC2017T10    LeakDistill          Smatch: 86.1
AMR parsing on LDC2017T10    LeakDistill (base)   Smatch: 84.7
AMR parsing on LDC2020T02    LeakDistill (base)   Smatch: 83.5
AMR parsing on LDC2020T02    LeakDistill          Smatch: 84.6
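Smatch, the metric reported above, scores a predicted AMR against the gold graph by searching for the variable mapping that maximizes the number of matching (source, relation, target) triples, then reporting an F1 over those triples. The sketch below shows only the final F1 arithmetic, given match counts; the alignment search is omitted, and the example numbers are made up for illustration.

```python
def smatch_f1(matched: int, predicted: int, gold: int) -> float:
    """F1 over matched triples; matched <= min(predicted, gold)."""
    precision = matched / predicted if predicted else 0.0
    recall = matched / gold if gold else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example: 90 matched triples out of 104 predicted and 105 gold
print(round(smatch_f1(90, 104, 105), 3))  # ~0.861, i.e. a Smatch of 86.1
```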
