Language Models as Knowledge Embeddings

Xintao Wang; Qianyu He; Jiaqing Liang; Yanghua Xiao

Abstract

Knowledge embeddings (KE) represent a knowledge graph (KG) by embedding entities and relations into continuous vector spaces. Existing methods are mainly structure-based or description-based. Structure-based methods learn representations that preserve the inherent structure of KGs, but they cannot adequately represent the abundant long-tail entities in real-world KGs, which have limited structural information. Description-based methods leverage textual information and language models; prior approaches in this direction barely outperform structure-based ones and suffer from problems such as expensive negative sampling and restrictive demands on entity descriptions. In this paper, we propose LMKE, which adopts Language Models to derive Knowledge Embeddings, aiming both to enrich the representations of long-tail entities and to solve the problems of prior description-based methods. We formulate description-based KE learning within a contrastive learning framework to improve efficiency in training and evaluation. Experimental results show that LMKE achieves state-of-the-art performance on KE benchmarks for link prediction and triple classification, especially for long-tail entities.
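
The PyTorch sketch below illustrates the core idea the abstract describes: encode textual descriptions of entities and relations with a pre-trained language model, and train with an in-batch contrastive objective so that other tails in the batch serve as negatives, avoiding separately sampled (and separately encoded) negatives. This is a minimal sketch, not the authors' implementation; the model name, toy triples, and dot-product scoring are illustrative assumptions.

```python
# Minimal sketch of description-based knowledge embedding with an
# in-batch contrastive objective. Illustrative assumptions throughout:
# the checkpoint, the toy triples, and the scoring function are NOT the
# authors' exact design (see the official repo, neph0s/lmke).
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def embed(texts):
    """Encode a batch of textual descriptions into [CLS] vectors."""
    batch = tokenizer(texts, padding=True, truncation=True,
                      return_tensors="pt")
    out = encoder(**batch)
    return out.last_hidden_state[:, 0]  # [CLS] token embedding

# Hypothetical (head description + relation) queries and the matching
# tail-entity descriptions.
queries = [
    "Barack Obama, 44th U.S. president [SEP] born in",
    "Albert Einstein, physicist [SEP] developed",
]
tails = [
    "Honolulu, the capital of Hawaii",
    "the theory of relativity",
]

q = embed(queries)  # (B, H) query embeddings
t = embed(tails)    # (B, H) tail embeddings

# In-batch contrastive loss: each query's own tail (the diagonal of the
# similarity matrix) is the positive; all other tails are negatives.
logits = q @ t.T                        # (B, B) similarity scores
labels = torch.arange(len(queries))     # index of each positive
loss = F.cross_entropy(logits, labels)
print(loss.item())
```

Reusing in-batch tails as negatives is what makes this style of training cheap: each description is encoded once per step, instead of once per sampled negative.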

Code Repositories

neph0s/lmke (Official, PyTorch)

Benchmarks

Benchmark: link-prediction-on-wn18rr
Methodology: C-LMKE (bert-base)
Metrics:
  Hits@1:  0.523
  Hits@3:  0.671
  Hits@10: 0.789
  MR:      79
  MRR:     0.619
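
For reference, all of these link-prediction metrics are derived from the rank of the correct entity among the candidate entities for each test triple. A minimal sketch with made-up ranks:

```python
# Ranking metrics used in link-prediction benchmarks. The ranks below
# are made-up illustrations, not results from the paper.
ranks = [1, 3, 2, 1, 50]  # hypothetical rank of the true entity per query

def hits_at(k):
    """Fraction of queries where the true entity ranks in the top k."""
    return sum(r <= k for r in ranks) / len(ranks)

mr = sum(ranks) / len(ranks)                  # MR: mean rank (lower is better)
mrr = sum(1 / r for r in ranks) / len(ranks)  # MRR: mean reciprocal rank

print(f"MR={mr:.1f}  MRR={mrr:.3f}  Hits@1={hits_at(1):.2f}  "
      f"Hits@3={hits_at(3):.2f}  Hits@10={hits_at(10):.2f}")
```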
