Aligning Instruction Tasks Unlocks Large Language Models as Zero-Shot Relation Extractors

Kai Zhang, Bernal Jiménez Gutiérrez, Yu Su

Abstract

Recent work has shown that fine-tuning large language models (LLMs) on large-scale instruction-following datasets substantially improves their performance on a wide range of NLP tasks, especially in the zero-shot setting. However, even advanced instruction-tuned LLMs still fail to outperform small LMs on relation extraction (RE), a fundamental information extraction task. We hypothesize that instruction-tuning has been unable to elicit strong RE capabilities in LLMs due to RE's low incidence in instruction-tuning datasets, making up less than 1% of all tasks (Wang et al., 2022). To address this limitation, we propose QA4RE, a framework that aligns RE with question answering (QA), a predominant task in instruction-tuning datasets. Comprehensive zero-shot RE experiments over four datasets with two series of instruction-tuned LLMs (six LLMs in total) demonstrate that our QA4RE framework consistently improves LLM performance, strongly verifying our hypothesis and enabling LLMs to outperform strong zero-shot baselines by a large margin. Additionally, we provide thorough experiments and discussions to show the robustness, few-shot effectiveness, and strong transferability of our QA4RE framework. This work illustrates a promising way of adapting LLMs to challenging and underrepresented tasks by aligning these tasks with more common instruction-tuning tasks like QA.
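The core idea of QA4RE is to recast each relation extraction instance as a multiple-choice QA prompt: every candidate relation label is verbalized into a natural-language statement about the two entities, and the LLM simply picks an option letter. The sketch below illustrates this reformulation; the function name, template wordings, and prompt phrasing are illustrative assumptions, not the paper's exact prompts.

```python
def build_qa4re_prompt(sentence, head, tail, relation_templates):
    """Turn a relation extraction instance into a multiple-choice QA prompt.

    relation_templates maps each candidate relation label to a
    verbalization template with {head} and {tail} slots.
    """
    letters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    options = []
    letter_to_relation = {}
    # One lettered option per candidate relation, verbalized for this entity pair.
    for letter, (relation, template) in zip(letters, relation_templates.items()):
        options.append(f"{letter}. {template.format(head=head, tail=tail)}")
        letter_to_relation[letter] = relation
    prompt = (
        "Determine which option can be inferred from the given sentence.\n\n"
        f"Sentence: {sentence}\n\n"
        "Options:\n" + "\n".join(options) +
        "\n\nWhich option is correct? Answer:"
    )
    return prompt, letter_to_relation

# Example with two candidate relations (TACRED-style labels).
templates = {
    "per:city_of_birth": "{head} was born in the city {tail}.",
    "no_relation": "{head} has no known relation to {tail}.",
}
prompt, mapping = build_qa4re_prompt(
    "Obama was born in Honolulu.", "Obama", "Honolulu", templates
)
```

The model's answer letter is then mapped back to a relation label via `letter_to_relation`, which is what lets a QA-tuned LLM perform RE without ever seeing RE-style instructions.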

Code Repositories

osu-nlp-group/qa4re (Official)

Benchmarks

Benchmark | Methodology | Metric
relation-extraction-on-re-tacred | LLM-QA4RE (XXLarge) | F1: 66.5
relation-extraction-on-semeval-2010-task-8-1 | LLM-QA4RE (XXLarge) | F1: 43.5
relation-extraction-on-tacred | LLM-QA4RE (XXLarge) | F1: 52.2
relation-extraction-on-tacred-revisited | LLM-QA4RE (XXLarge) | F1: 53.4

