Improving Open Information Extraction with Large Language Models: A Study on Demonstration Uncertainty

Abstract

The Open Information Extraction (OIE) task aims to extract structured facts from unstructured text, typically in the form of (subject, relation, object) triples. Despite the potential of large language models (LLMs) such as ChatGPT as general task solvers, they lag behind state-of-the-art (supervised) methods on OIE tasks due to two key issues. First, LLMs struggle to distinguish irrelevant context from relevant relations and to generate structured output, since fine-tuning the model is restricted. Second, LLMs generate responses autoregressively based on probability, which leaves the predicted relations without confidence estimates. In this paper, we assess the capabilities of LLMs in improving the OIE task. In particular, we propose various in-context learning strategies to enhance the LLM's instruction-following ability and a demonstration uncertainty quantification module to enhance the confidence of the generated relations. Our experiments on three OIE benchmark datasets show that our approach holds its own against established supervised methods, both quantitatively and qualitatively.
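
The abstract mentions two ingredients: demonstration-based in-context prompting for triple extraction and a probability-based confidence score for the generated relations. The sketch below is a minimal illustration of those two ideas only, not the paper's actual implementation; the prompt template, the helper names (build_oie_prompt, sequence_confidence), and the dummy log-probabilities are assumptions, and the confidence shown is simply the geometric mean of the token probabilities of a generated triple.

```python
import math
from typing import List, Tuple

Triple = Tuple[str, str, str]  # (subject, relation, object)


def build_oie_prompt(demonstrations: List[Tuple[str, List[Triple]]],
                     sentence: str) -> str:
    """Assemble a few-shot OIE prompt: each demonstration pairs an input
    sentence with its gold (subject, relation, object) triples, followed
    by the target sentence whose triples the LLM should extract."""
    parts = ["Extract (subject, relation, object) triples from each sentence.\n"]
    for demo_sentence, triples in demonstrations:
        parts.append(f"Sentence: {demo_sentence}")
        for s, r, o in triples:
            parts.append(f"Triple: ({s}; {r}; {o})")
        parts.append("")
    parts.append(f"Sentence: {sentence}")
    parts.append("Triple:")
    return "\n".join(parts)


def sequence_confidence(token_logprobs: List[float]) -> float:
    """Length-normalised confidence for one generated triple:
    exp(mean log-probability), i.e. the geometric mean of the
    per-token probabilities reported by the LLM."""
    if not token_logprobs:
        return 0.0
    return math.exp(sum(token_logprobs) / len(token_logprobs))


if __name__ == "__main__":
    demos = [
        ("Barack Obama was born in Hawaii.",
         [("Barack Obama", "was born in", "Hawaii")]),
    ]
    prompt = build_oie_prompt(demos, "Marie Curie discovered polonium in 1898.")
    print(prompt)

    # Placeholder per-token log-probabilities, as an LLM API might return
    # for one generated triple (hypothetical values).
    fake_logprobs = [-0.11, -0.05, -0.32, -0.08]
    print(f"confidence = {sequence_confidence(fake_logprobs):.3f}")
```

In this reading, low-confidence triples could be filtered or re-ranked, which is one plausible way a demonstration uncertainty module might interact with the autoregressive probabilities mentioned in the abstract.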

Benchmarks

Benchmark                              | Methodology                                   | Metrics
open-information-extraction-on-carb    | LLaMA-2-13B w/ Selected Demo & Uncertainty    | F1: 36.2
open-information-extraction-on-carb    | GPT-3.5-Turbo w/ Selected Demo & Uncertainty  | F1: 52.1
open-information-extraction-on-carb    | LLaMA-2-70B w/ Selected Demo & Uncertainty    | F1: 51.5
open-information-extraction-on-oie2016 | LLaMA-2-70B w/ Selected Demo & Uncertainty    | F1: 65.8
open-information-extraction-on-oie2016 | GPT-3.5-Turbo w/ Selected Demo & Uncertainty  | F1: 65.1
open-information-extraction-on-oie2016 | LLaMA-2-13B w/ Selected Demo & Uncertainty    | F1: 36.9
