
Empirical Evaluation of Pretraining Strategies for Supervised Entity Linking

Thibault Févry, Nicholas FitzGerald, Livio Baldini Soares, Tom Kwiatkowski

Abstract

In this work, we present an entity linking model that combines a Transformer architecture with large-scale pretraining from Wikipedia links. Our model achieves state-of-the-art results on two commonly used entity linking datasets: 96.7% on CoNLL and 94.9% on TAC-KBP. We present detailed analyses to understand which design choices matter for entity linking, including the choice of negative entity candidates, the Transformer architecture, and input perturbations. Finally, we present promising results in more challenging settings such as end-to-end entity linking and entity linking without in-domain training data.
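
The abstract credits three ingredients: a Transformer encoder, pretraining on Wikipedia links, and a careful choice of negative entity candidates. The paper's implementation is not reproduced on this page; below is a minimal PyTorch sketch of one way such a mention-versus-candidates scorer can be wired up. Everything here (the class name MentionEntityScorer, all dimensions, and the convention that the gold entity sits at candidate index 0) is an illustrative assumption, not the authors' code.

```python
# Minimal sketch: a Transformer encodes the mention in context, and its
# representation is scored by dot product against candidate entity
# embeddings (gold + negatives). All sizes and names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MentionEntityScorer(nn.Module):
    def __init__(self, vocab_size=30522, num_entities=100_000, dim=256):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.entity_emb = nn.Embedding(num_entities, dim)  # one vector per KB entity

    def forward(self, token_ids, mention_pos, candidate_ids):
        h = self.encoder(self.tok_emb(token_ids))           # (B, T, D) contextual states
        mention = h[torch.arange(h.size(0)), mention_pos]   # (B, D) first mention token
        cands = self.entity_emb(candidate_ids)              # (B, C, D) gold + negatives
        return torch.einsum("bd,bcd->bc", mention, cands)   # (B, C) dot-product scores

model = MentionEntityScorer()
token_ids = torch.randint(0, 30522, (2, 16))    # toy batch of 2 contexts
mention_pos = torch.tensor([3, 7])              # index of each mention's first token
candidates = torch.randint(0, 100_000, (2, 8))  # 8 candidates; slot 0 is the gold entity
scores = model(token_ids, mention_pos, candidates)
loss = F.cross_entropy(scores, torch.zeros(2, dtype=torch.long))  # gold at index 0
```

In this framing, the "choice of negative entity candidates" the abstract analyzes amounts to deciding which entity ids fill the non-gold candidate slots, for example random entities versus hard negatives that share a surface form with the mention.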

Benchmarks

Benchmark                      Methodology           Metrics
entity-linking-on-aida-conll   Févry et al. (2020b)  Micro-F1 strong: 76.7
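
The "strong" qualifier conventionally means strong annotation matching: a prediction counts as correct only if both the mention span and the linked entity match the gold annotation exactly, and micro-F1 then pools true positives across all documents. A minimal sketch of that computation follows; the (doc id, start, end, entity) tuple layout is an illustrative assumption.

```python
# Micro-F1 sketch for "strong match" entity linking evaluation.
# A prediction is correct only if span boundaries and entity both match.
def micro_f1(gold: set, pred: set) -> float:
    tp = len(gold & pred)                        # exact (span, entity) matches
    p = tp / len(pred) if pred else 0.0          # micro precision
    r = tp / len(gold) if gold else 0.0          # micro recall
    return 2 * p * r / (p + r) if p + r else 0.0

gold = {("doc1", 0, 5, "Belgium"), ("doc1", 10, 17, "Brussels")}
pred = {("doc1", 0, 5, "Belgium"), ("doc1", 10, 17, "Brussels_Airlines")}
print(micro_f1(gold, pred))  # 0.5: one of two predictions matches exactly
```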
