LAMBERT: Layout-Aware (Language) Modeling for information extraction

Łukasz Garncarek, Rafał Powalski, Tomasz Stanisławek, Bartosz Topolski, Piotr Halama, Michał Turski, Filip Graliński

Abstract

We introduce a simple new approach to the problem of understanding documents where non-trivial layout influences the local semantics. To this end, we modify the Transformer encoder architecture in a way that allows it to use layout features obtained from an OCR system, without the need to re-learn language semantics from scratch. We only augment the input of the model with the coordinates of token bounding boxes, avoiding, in this way, the use of raw images. This leads to a layout-aware language model which can then be fine-tuned on downstream tasks. The model is evaluated on an end-to-end information extraction task using four publicly available datasets: Kleister NDA, Kleister Charity, SROIE and CORD. We show that our model achieves superior performance on datasets consisting of visually rich documents, while also outperforming the baseline RoBERTa on documents with flat layout (NDA F1 increase from 78.50 to 80.42). Our solution ranked first on the public leaderboard for the Key Information Extraction from the SROIE dataset, improving the SOTA F1-score from 97.81 to 98.17.
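The core idea stated above, augmenting the model's input with token bounding-box coordinates instead of raw images, can be illustrated with a short sketch. This is a minimal illustration under stated assumptions, not the authors' implementation (see applicaai/lambert for that): the module name LayoutAugmentedEmbedding, the single linear projection of normalized (x1, y1, x2, y2) coordinates, and the encoder configuration are all hypothetical choices made for demonstration.

```python
# Minimal sketch (assumed design, not the official LAMBERT code): add a
# projection of OCR bounding-box coordinates to the token embeddings, then
# feed a standard Transformer encoder.
import torch
import torch.nn as nn


class LayoutAugmentedEmbedding(nn.Module):
    """Adds a projection of normalized bounding-box coordinates to token embeddings."""

    def __init__(self, vocab_size: int, hidden_size: int, bbox_dim: int = 4):
        super().__init__()
        self.token_embedding = nn.Embedding(vocab_size, hidden_size)
        # Assumption: a single linear layer maps (x1, y1, x2, y2) in [0, 1]
        # into the same space as the word embeddings.
        self.layout_projection = nn.Linear(bbox_dim, hidden_size)

    def forward(self, input_ids: torch.Tensor, bboxes: torch.Tensor) -> torch.Tensor:
        # input_ids: (batch, seq_len); bboxes: (batch, seq_len, 4), normalized to page size
        return self.token_embedding(input_ids) + self.layout_projection(bboxes)


# Usage: the augmented embeddings feed a standard Transformer encoder, whose
# weights could in principle be initialized from a pretrained language model
# such as RoBERTa (vocabulary size 50265 assumed here for illustration).
embedder = LayoutAugmentedEmbedding(vocab_size=50265, hidden_size=768)
encoder_layer = nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)

ids = torch.randint(0, 50265, (2, 16))
boxes = torch.rand(2, 16, 4)  # (x1, y1, x2, y2) scaled to [0, 1]
hidden_states = encoder(embedder(ids, boxes))
print(hidden_states.shape)  # torch.Size([2, 16, 768])
```

Keeping the layout signal as a simple additive term in the embedding space is what allows the encoder weights to start from a pretrained language model, matching the abstract's claim that language semantics need not be re-learned from scratch.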

Code Repositories

applicaai/lambert (official, PyTorch)

Benchmarks

Benchmark | Methodology | Metrics
key-information-extraction-on-kleister-nda | LAMBERT (75M) | F1: 80.42
