Joint Learning of the Embedding of Words and Entities for Named Entity Disambiguation

Ikuya Yamada; Hiroyuki Shindo; Hideaki Takeda; Yoshiyasu Takefuji

Abstract

Named Entity Disambiguation (NED) refers to the task of resolving multiple named entity mentions in a document to their correct references in a knowledge base (KB) (e.g., Wikipedia). In this paper, we propose a novel embedding method specifically designed for NED. The proposed method jointly maps words and entities into the same continuous vector space. We extend the skip-gram model by using two models. The KB graph model learns the relatedness of entities using the link structure of the KB, whereas the anchor context model aims to align vectors such that similar words and entities occur close to one another in the vector space by leveraging KB anchors and their context words. By combining contexts based on the proposed embedding with standard NED features, we achieved state-of-the-art accuracy of 93.1% on the standard CoNLL dataset and 85.2% on the TAC 2010 dataset.
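The core idea of the abstract — that placing words and entities in the same vector space lets a system pick the candidate entity most similar to a mention's context — can be sketched as follows. This is a minimal illustration with hand-made toy vectors, not the paper's trained embeddings or the wikipedia2vec API; the entity names and values are assumptions for demonstration only.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy joint vector space: words and entities live in the same 3-d space.
# (Illustrative values only; real vectors come from training on KB links
# and anchor contexts as described in the abstract.)
vectors = {
    "bank":             np.array([0.9, 0.1, 0.0]),  # context word
    "river":            np.array([0.1, 0.9, 0.0]),  # context word
    "Bank_(geography)": np.array([0.2, 0.8, 0.1]),  # candidate entity
    "Bank_(finance)":   np.array([0.9, 0.0, 0.2]),  # candidate entity
}

def disambiguate(context_words, candidates):
    """Pick the candidate entity closest to the averaged context vector."""
    ctx = np.mean([vectors[w] for w in context_words], axis=0)
    return max(candidates, key=lambda e: cosine(ctx, vectors[e]))

# A "bank" mention next to "river" resolves to the geographic sense.
print(disambiguate(["river", "bank"], ["Bank_(geography)", "Bank_(finance)"]))
# → Bank_(geography)
```

In the paper this context–entity similarity is one feature among several standard NED features fed to a downstream ranker (the GBRT variant in the benchmarks below).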

Code Repositories

wikipedia2vec/wikipedia2vec

Benchmarks

Benchmark                            Methodology         Metrics
entity-disambiguation-on-aida-conll  Wikipedia2Vec-GBRT  In-KB Accuracy: 93.1
entity-disambiguation-on-aida-conll  Wikipedia2Vec       In-KB Accuracy: 91.5
entity-disambiguation-on-tac2010     Wikipedia2Vec       Micro Precision: 85.2
