Modeling Graph Structure via Relative Position for Text Generation from Knowledge Graphs

Martin Schmitt; Leonardo F. R. Ribeiro; Philipp Dufter; Iryna Gurevych; Hinrich Schütze

Abstract

We present Graformer, a novel Transformer-based encoder-decoder architecture for graph-to-text generation. With our novel graph self-attention, the encoding of a node relies on all nodes in the input graph, not only its direct neighbors, facilitating the detection of global patterns. We represent the relation between two nodes as the length of the shortest path between them. Graformer learns to weight these node-node relations differently for different attention heads, thus virtually learning differently connected views of the input graph. We evaluate Graformer on two popular graph-to-text generation benchmarks, AGENDA and WebNLG, where it achieves strong performance while using many fewer parameters than other approaches.
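To make the mechanism concrete, below is a minimal PyTorch sketch of self-attention biased by shortest-path lengths, in the spirit of the abstract. It is not the authors' implementation: the names GraphSelfAttention, shortest_path_matrix, and max_dist are hypothetical, the graph is treated as undirected, and unreachable or distant node pairs are clamped to max_dist, whereas the actual Graformer model includes further details (e.g., how multi-token node labels are encoded) that are omitted here.

```python
# Minimal sketch (not the authors' code): self-attention where each head
# learns its own scalar bias per shortest-path distance, so different
# heads can emphasize differently connected views of the input graph.
import torch
import torch.nn as nn
import torch.nn.functional as F
from collections import deque


def shortest_path_matrix(num_nodes, edges, max_dist):
    """BFS from every node; distances are clamped to max_dist,
    which also covers unreachable pairs. Assumes an undirected graph."""
    adj = [[] for _ in range(num_nodes)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    dist = torch.full((num_nodes, num_nodes), max_dist, dtype=torch.long)
    for s in range(num_nodes):
        dist[s, s] = 0
        q, seen = deque([s]), {s}
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    dist[s, v] = min(dist[s, u].item() + 1, max_dist)
                    q.append(v)
    return dist  # shape: (num_nodes, num_nodes)


class GraphSelfAttention(nn.Module):
    """Attention over ALL node pairs (not only direct neighbors), with a
    learned per-head bias for each shortest-path length added to the
    attention logits."""

    def __init__(self, d_model, num_heads, max_dist):
        super().__init__()
        assert d_model % num_heads == 0
        self.num_heads = num_heads
        self.d_head = d_model // num_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        # one learned scalar bias per (distance, head) pair
        self.dist_bias = nn.Embedding(max_dist + 1, num_heads)

    def forward(self, x, dist):
        # x: (num_nodes, d_model), dist: (num_nodes, num_nodes) long tensor
        n = x.size(0)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(n, self.num_heads, self.d_head).transpose(0, 1)
        k = k.view(n, self.num_heads, self.d_head).transpose(0, 1)
        v = v.view(n, self.num_heads, self.d_head).transpose(0, 1)
        scores = q @ k.transpose(-2, -1) / self.d_head ** 0.5
        # relative-position bias: (n, n, heads) -> (heads, n, n)
        scores = scores + self.dist_bias(dist).permute(2, 0, 1)
        attn = F.softmax(scores, dim=-1)
        out = (attn @ v).transpose(0, 1).reshape(n, -1)
        return self.out(out)


# Toy usage: a 4-node star graph (node 0 connected to nodes 1, 2, 3)
edges = [(0, 1), (0, 2), (0, 3)]
dist = shortest_path_matrix(4, edges, max_dist=4)
layer = GraphSelfAttention(d_model=64, num_heads=4, max_dist=4)
x = torch.randn(4, 64)
print(layer(x, dist).shape)  # torch.Size([4, 64])
```

Because the bias is indexed per head, one head can learn to down-weight everything beyond distance 1 (a local, neighbor-only view) while another attends mostly to distant nodes, which is the sense in which the model virtually learns differently connected views of the same graph.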

Benchmarks

Benchmark                          | Methodology | Metrics
data-to-text-generation-on-webnlg | Graformer   | BLEU: 61.15
kg-to-text-generation-on-agenda   | Graformer   | BLEU: 17.80
