HyperAI

AMR Parsing via Graph-Sequence Iterative Inference

Deng Cai Wai Lam

Abstract

We propose a new end-to-end model that treats AMR parsing as a series of dual decisions on the input sequence and the incrementally constructed graph. At each time step, our model performs multiple rounds of attention, reasoning, and composition that aim to answer two critical questions: (1) which part of the input sequence to abstract; and (2) where in the output graph to construct the new concept. We show that the answers to these two questions are mutually dependent: a better answer to one leads to a better answer to the other. We design a model based on iterative inference that refines the answers from both perspectives, leading to greatly improved parsing accuracy. Our model outperforms all previously reported Smatch scores by large margins. Remarkably, without the help of any large-scale pre-trained language model (e.g., BERT), our model already surpasses the previous state-of-the-art that uses BERT. With the help of BERT, we push the state-of-the-art results to 80.2% on LDC2017T10 (AMR 2.0) and 75.4% on LDC2014T12 (AMR 1.0).
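The iterative inference described above can be sketched as alternating attention between the sequence side and the graph side, each round conditioning one answer on the other. The following is a minimal illustrative sketch under stated assumptions: the function names, dot-product attention, and the fixed number of refinement rounds are our own simplifications, not the authors' actual architecture.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def attend(memory, query):
    """Dot-product attention: weighted average of memory vectors under query."""
    scores = [sum(q * m for q, m in zip(query, vec)) for vec in memory]
    weights = softmax(scores)
    dim = len(memory[0])
    return [sum(w * vec[d] for w, vec in zip(weights, memory))
            for d in range(dim)]

def iterative_inference(seq_states, graph_states, init_query, rounds=4):
    """Alternately refine the two answers over several rounds:
    (1) which part of the input sequence to abstract, and
    (2) where in the partial graph to attach the new concept.
    Each answer conditions the attention that produces the other."""
    seq_answer = init_query
    for _ in range(rounds):
        graph_answer = attend(graph_states, seq_answer)  # question (2)
        seq_answer = attend(seq_states, graph_answer)    # question (1)
    return seq_answer, graph_answer
```

In the paper's formulation the two attention results would feed concept and edge predictors at each parsing step; here the loop only shows how the mutual dependence is resolved by repeated refinement rather than a single pass.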

Code Repositories

- jcyk/AMR-gs (official, PyTorch)
- ibm/graph_ensemble_learning (PyTorch)
- bjascob/amrlib

Benchmarks

Benchmark: AMR parsing on LDC2014T12 (AMR 1.0)
Methodology: AMR Parsing via Graph-Sequence Iterative Inference
F1 (Full): 75.4

Benchmark: AMR parsing on LDC2017T10 (AMR 2.0)
Methodology: Cai and Lam
Smatch: 80.2
