HyperAI
Syntactically Look-Ahead Attention Network for Sentence Compression

Hidetaka Kamigaito; Manabu Okumura

Abstract

Sentence compression is the task of shortening a long sentence by deleting redundant words. In sequence-to-sequence (Seq2Seq) models, the decoder makes retain-or-delete decisions unidirectionally, so it usually cannot explicitly capture the relationships between already decoded words and words that will be decoded at future time steps. As a result, to avoid generating ungrammatical sentences, the decoder sometimes drops important words when compressing a sentence. To solve this problem, we propose a novel Seq2Seq model, the syntactically look-ahead attention network (SLAHAN), which generates informative summaries by explicitly tracking both dependency parent and child words during decoding, thereby capturing important words that will be decoded in the future. In automatic evaluation on the Google sentence compression dataset, SLAHAN achieved the best kept-token-based F1, ROUGE-1, ROUGE-2, and ROUGE-L scores of 85.5, 79.3, 71.3, and 79.1, respectively. SLAHAN also improved summarization performance on longer sentences. Furthermore, in human evaluation, SLAHAN improved informativeness without losing readability.
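The core idea of the abstract — letting the decoder attend over dependency-linked representations of source tokens, so it can "look ahead" to words connected through the parse tree — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function name, the use of raw dot-product attention, and the toy dependency-head array are all assumptions for exposition.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def look_ahead_attention(decoder_state, enc_states, heads):
    """Toy sketch of syntactic look-ahead attention.

    Each source token i attends via the representation of its dependency
    parent heads[i], so the decoder's keep/drop decision at the current
    step can take into account syntactically linked words that have not
    yet been decoded. heads[i] == i marks the root.
    """
    parent_states = enc_states[heads]        # (n, d): parent representation per token
    scores = parent_states @ decoder_state   # (n,): dot-product attention scores
    weights = softmax(scores)                # (n,): normalized attention weights
    context = weights @ parent_states        # (d,): syntax-aware context vector
    return weights, context

# Toy example: 4 source tokens with 3-dimensional representations.
rng = np.random.default_rng(0)
enc = rng.normal(size=(4, 3))
dec = rng.normal(size=3)
heads = np.array([1, 1, 1, 2])  # tokens 0 and 2 depend on 1 (root); 3 depends on 2
w, ctx = look_ahead_attention(dec, enc, heads)
```

In the actual model, such a context vector would be combined with the decoder state before the retain-or-delete classification; the sketch above only shows the attention-over-dependency-links step.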

Code Repositories

kamigaito/SLAHAN (Official, TensorFlow)

Benchmarks

Benchmark: sentence-compression-on-google-dataset
Methodology: SLAHAN (LSTM + syntactic information)
Metrics: CR: 0.407, F1: 0.855
