LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention

Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto

Abstract

Entity representations are useful in natural language tasks involving entities. In this paper, we propose new pretrained contextualized representations of words and entities based on the bidirectional transformer. The proposed model treats words and entities in a given text as independent tokens and outputs contextualized representations of both. Our model is trained using a new pretraining task based on the masked language model of BERT: predicting randomly masked words and entities in a large entity-annotated corpus retrieved from Wikipedia. We also propose an entity-aware self-attention mechanism that extends the transformer's self-attention by taking the type of each token (word or entity) into account when computing attention scores. The proposed model achieves strong empirical performance on a wide range of entity-related tasks. In particular, it obtains state-of-the-art results on five well-known datasets: Open Entity (entity typing), TACRED (relation classification), CoNLL-2003 (named entity recognition), ReCoRD (cloze-style question answering), and SQuAD 1.1 (extractive question answering). Our source code and pretrained representations are available at https://github.com/studio-ousia/luke.
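As a rough illustration of the entity-aware self-attention described above, the sketch below selects one of four query projections depending on whether the attending token and the attended token are words or entities, while keys and values are shared. This is a simplified single-head reading of the mechanism as summarized in the abstract; module and tensor names are illustrative and not taken from the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class EntityAwareSelfAttention(nn.Module):
    """Single-head sketch: the query projection depends on the token-type
    pair (word->word, word->entity, entity->word, entity->entity), while
    keys and values do not depend on token type."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.scale = hidden_size ** 0.5
        # One query projection per (attending type, attended type) pair.
        self.q_w2w = nn.Linear(hidden_size, hidden_size)
        self.q_w2e = nn.Linear(hidden_size, hidden_size)
        self.q_e2w = nn.Linear(hidden_size, hidden_size)
        self.q_e2e = nn.Linear(hidden_size, hidden_size)
        # Keys and values are shared across token types.
        self.key = nn.Linear(hidden_size, hidden_size)
        self.value = nn.Linear(hidden_size, hidden_size)

    def forward(self, hidden: torch.Tensor, is_entity: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, hidden); is_entity: (batch, seq_len) bool mask
        k = self.key(hidden)
        v = self.value(hidden)

        # Pick the query projection for token i from its own type; keep two
        # variants, one for attending to words and one for attending to entities.
        is_ent_i = is_entity.unsqueeze(-1)  # (batch, seq_len, 1)
        q_to_word = torch.where(is_ent_i, self.q_e2w(hidden), self.q_w2w(hidden))
        q_to_ent = torch.where(is_ent_i, self.q_e2e(hidden), self.q_w2e(hidden))

        # Score each token j with the variant matching j's type, then attend as usual.
        scores_word = q_to_word @ k.transpose(-1, -2) / self.scale
        scores_ent = q_to_ent @ k.transpose(-1, -2) / self.scale
        scores = torch.where(is_entity.unsqueeze(1), scores_ent, scores_word)
        return F.softmax(scores, dim=-1) @ v
```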

Benchmarks

Benchmark                                    Method               Metrics
common-sense-reasoning-on-record             LUKE 483M            EM: 90.6, F1: 91.2
entity-typing-on-open-entity-1               MLMET                F1: 78.2
named-entity-recognition-ner-on-conll-2003   LUKE 483M            F1: 94.3
named-entity-recognition-on-conll            LUKE (Large)         F1: 95.89
question-answering-on-squad11                LUKE (single model)  EM: 90.202, F1: 95.379
question-answering-on-squad11                LUKE 483M            F1: 95.4
question-answering-on-squad11                LUKE                 EM: 90.2
question-answering-on-squad11-dev            LUKE                 EM: 89.8
question-answering-on-squad11-dev            LUKE 483M            F1: 95
question-answering-on-squad20                LUKE 483M            F1: 90.2
question-answering-on-squad20                LUKE (single model)  EM: 87.429, F1: 90.163
relation-classification-on-tacred-1          LUKE 483M            F1: 72.7
relation-extraction-on-tacred                LUKE                 F1 (1% Few-Shot): 17.0, F1 (5% Few-Shot): 51.6
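The pretrained representations behind these results can be loaded through the Hugging Face transformers integration of LUKE, where the tokenizer accepts character-level entity spans and the model returns entity representations alongside word representations. A minimal sketch, assuming the studio-ousia/luke-base checkpoint is available on the Hub:

```python
from transformers import LukeModel, LukeTokenizer

# Load the base LUKE checkpoint (requires access to the Hugging Face Hub).
tokenizer = LukeTokenizer.from_pretrained("studio-ousia/luke-base")
model = LukeModel.from_pretrained("studio-ousia/luke-base")

text = "Beyoncé lives in Los Angeles."
# Character spans of the entity mentions "Beyoncé" and "Los Angeles".
entity_spans = [(0, 7), (17, 28)]

inputs = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
outputs = model(**inputs)

word_states = outputs.last_hidden_state           # contextualized word tokens
entity_states = outputs.entity_last_hidden_state  # contextualized entity tokens
print(word_states.shape, entity_states.shape)
```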
