Transition-based Parsing with Stack-Transformers

Ramon Fernandez Astudillo Miguel Ballesteros Tahira Naseem Austin Blodgett Radu Florian

Abstract

Modeling the parser state is key to good performance in transition-based parsing. Recurrent Neural Networks considerably improved the performance of transition-based systems, either by modeling the global parser state, as in stack-LSTM parsers, or the local state through contextualized word features, as in Bi-LSTM parsers. Given the success of Transformer architectures in recent parsing systems, this work explores modifications of the sequence-to-sequence Transformer architecture to model either global or local parser states in transition-based parsing. We show that modifications of the cross-attention mechanism of the Transformer considerably strengthen performance on both dependency and Abstract Meaning Representation (AMR) parsing tasks, particularly for smaller models or limited training data.
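
To illustrate the core idea, the sketch below shows one way cross-attention can be specialized to the parser state: a dedicated attention head is masked so it only attends to source positions currently on the stack (or in the buffer). This is a minimal illustration under assumptions, not the authors' implementation; all function and variable names (masked_cross_attention, on_stack, in_buffer) are hypothetical.

```python
# Sketch: cross-attention heads restricted to stack or buffer positions.
# Illustrative only; shapes and names are assumptions, not the paper's code.
import torch
import torch.nn.functional as F

def masked_cross_attention(query, keys, values, position_mask):
    """Single-head cross-attention limited to positions where position_mask is True.

    query:         (batch, d)          decoder state at the current parser step
    keys, values:  (batch, src_len, d) encoder outputs
    position_mask: (batch, src_len)    bool, True for attendable source positions
    """
    d = query.size(-1)
    scores = torch.einsum("bd,bsd->bs", query, keys) / d ** 0.5
    scores = scores.masked_fill(~position_mask, float("-inf"))
    weights = F.softmax(scores, dim=-1)
    return torch.einsum("bs,bsd->bd", weights, values)

# Toy example: one head sees only tokens on the stack, another only the buffer.
batch, src_len, d = 1, 6, 8
enc = torch.randn(batch, src_len, d)
dec_state = torch.randn(batch, d)
on_stack = torch.tensor([[True, True, False, False, False, False]])
in_buffer = ~on_stack
stack_head = masked_cross_attention(dec_state, enc, enc, on_stack)
buffer_head = masked_cross_attention(dec_state, enc, enc, in_buffer)
```

In a full model, the masks would be updated after every transition (SHIFT, REDUCE, etc.) so that the specialized heads always track the current stack and buffer contents, while the remaining heads attend freely as in a standard Transformer.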

Code Repositories

IBM/transition-amr-parser (official, PyTorch)

Benchmarks

Benchmark | Methodology | Metrics
amr-parsing-on-ldc2017t10 | stack-Transformer (IBM) | Smatch: 79.0
