Matching the Blanks: Distributional Similarity for Relation Learning

Livio Baldini Soares; Nicholas FitzGerald; Jeffrey Ling; Tom Kwiatkowski

Abstract

General purpose relation extractors, which can model arbitrary relations, are a core aspiration in information extraction. Efforts have been made to build general purpose extractors that represent relations with their surface forms, or which jointly embed surface forms with relations from an existing knowledge graph. However, both of these approaches are limited in their ability to generalize. In this paper, we build on extensions of Harris' distributional hypothesis to relations, as well as recent advances in learning text representations (specifically, BERT), to build task agnostic relation representations solely from entity-linked text. We show that these representations significantly outperform previous work on exemplar based relation extraction (FewRel) even without using any of that task's training data. We also show that models initialized with our task agnostic representations, and then tuned on supervised relation extraction datasets, significantly outperform the previous methods on SemEval 2010 Task 8, KBP37, and TACRED.
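
The method described in the abstract has two parts: a BERT-based relation encoder that wraps the two entity mentions in special start/end marker tokens and reads off the hidden states at the start markers, and a "matching the blanks" pre-training objective that treats two entity-linked statements as related if they mention the same pair of entities, sometimes replacing mentions with a [BLANK] token so the model cannot rely on the entity names alone. The sketch below, assuming the Hugging Face transformers library and a bert-base-cased checkpoint, illustrates both pieces; the marker strings, example sentences, and the untrained similarity score are illustrative, not the authors' released implementation.

```python
# Minimal sketch of the two ideas in the paper, using Hugging Face
# transformers (an assumption; this is not the authors' released code):
#   1. "Entity markers, entity start": wrap each entity mention in
#      [E1]...[/E1] / [E2]...[/E2] and represent the relation as the
#      concatenation of the hidden states at the two start markers.
#   2. Matching the blanks: entity mentions are sometimes replaced by
#      [BLANK], and a binary classifier over the dot product of two relation
#      representations predicts whether they mention the same entity pair.
import torch
from transformers import BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")
model = BertModel.from_pretrained("bert-base-cased")
model.eval()

# Marker tokens follow the paper's notation; their embeddings are randomly
# initialized here, so the score printed below is illustrative only.
tokenizer.add_special_tokens(
    {"additional_special_tokens": ["[E1]", "[/E1]", "[E2]", "[/E2]", "[BLANK]"]}
)
model.resize_token_embeddings(len(tokenizer))


def encode_relation(statement: str) -> torch.Tensor:
    """Concatenate the hidden states at the [E1] and [E2] start markers."""
    enc = tokenizer(statement, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, 768)
    ids = enc["input_ids"][0]
    e1 = (ids == tokenizer.convert_tokens_to_ids("[E1]")).nonzero()[0, 0]
    e2 = (ids == tokenizer.convert_tokens_to_ids("[E2]")).nonzero()[0, 0]
    return torch.cat([hidden[e1], hidden[e2]])  # (1536,)


# Two statements about the same entity pair (one with blanked mentions)
# should receive a high similarity; MTB pre-training optimizes a binary
# cross-entropy loss over such positive and negative statement pairs.
r1 = encode_relation("[E1] Jane Austen [/E1] wrote [E2] Emma [/E2] in 1815 .")
r2 = encode_relation("[E1] [BLANK] [/E1] is the author of [E2] [BLANK] [/E2] .")
print(float(torch.sigmoid(r1 @ r2)))
```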

Code Repositories

jpablou/Matching-The-Blanks-Ths (PyTorch)
diffbot/knowledge-net (TensorFlow)
cypressd1999/FYP_2021 (PyTorch)
Soikonomou/albert_final_infer8 (PyTorch)
plkmo/BERT-Relation-Extraction (PyTorch)
Soikonomou/albert_final_infer12 (PyTorch)
Soikonomou/albert_final (PyTorch)
Soikonomou/bert_new_new (PyTorch)
yi-han/BERT_Relation_Extraction (PyTorch)
hieudepchai/BERT_IE (PyTorch)
dfki-nlp/mtb-bert-em (PyTorch)
Soikonomou/bert_new (PyTorch)

Benchmarks

Benchmark | Methodology | Metrics
relation-classification-on-tacred-1 | MTB (Baldini Soares et al., 2019) | F1: 71.5
relation-extraction-on-semeval-2010-task-8 | BERTEM+MTB | F1: 89.5
relation-extraction-on-tacred | BERTEM+MTB | F1: 71.5; F1 (1% Few-Shot): 43.4; F1 (10% Few-Shot): 64.8
