HyperAIHyperAI

Command Palette

Search for a command to run...

5 months ago

TruthfulQA: Measuring How Models Mimic Human Falsehoods

Stephanie Lin; Jacob Hilton; Owain Evans

TruthfulQA: Measuring How Models Mimic Human Falsehoods

Abstract

We propose a benchmark to measure whether a language model is truthful in generating answers to questions. The benchmark comprises 817 questions that span 38 categories, including health, law, finance and politics. We crafted questions that some humans would answer falsely due to a false belief or misconception. To perform well, models must avoid generating false answers learned from imitating human texts. We tested GPT-3, GPT-Neo/J, GPT-2 and a T5-based model. The best model was truthful on 58% of questions, while human performance was 94%. Models generated many false answers that mimic popular misconceptions and have the potential to deceive humans. The largest models were generally the least truthful. This contrasts with other NLP tasks, where performance improves with model size. However, this result is expected if false answers are learned from the training distribution. We suggest that scaling up models alone is less promising for improving truthfulness than fine-tuning using training objectives other than imitation of text from the web.

Code Repositories

lurosenb/sass
Mentioned in GitHub
yizhongw/truthfulqa_reeval
pytorch
Mentioned in GitHub
sylinrl/truthfulqa
Official
pytorch
Mentioned in GitHub

Benchmarks

BenchmarkMethodologyMetrics
question-answering-on-truthfulqaGPT-2 1.5B
% info: 89.84
% true: 29.50
% true (GPT-judge): 29.87
BLEU: -4.91
BLEURT: -0.25
MC1: 0.22
MC2: 0.39
ROUGE: -9.41
question-answering-on-truthfulqaUnifiedQA 3B
% info: 64.50
% true: 53.86
% true (GPT-judge): 53.24
BLEU: -0.16
BLEURT: 0.08
MC1: 0.19
MC2: 0.35
ROUGE: 1.76
question-answering-on-truthfulqaGPT-3 175B
% info: 97.55
% true: 20.44
% true (GPT-judge): 20.56
BLEU: -17.38
BLEURT: -0.56
MC1: 0.21
MC2: 0.33
ROUGE: -17.75
question-answering-on-truthfulqaGPT-J 6B
% info: 89.96
% true: 26.68
% true (GPT-judge): 27.17
BLEU: -7.58
BLEURT: -0.31
MC1: 0.20
MC2: 0.36
ROUGE: -11.35

Build AI with AI

From idea to launch — accelerate your AI development with free AI co-coding, out-of-the-box environment and best price of GPUs.

AI Co-coding
Ready-to-use GPUs
Best Pricing
Get Started

Hyper Newsletters

Subscribe to our latest updates
We will deliver the latest updates of the week to your inbox at nine o'clock every Monday morning
Powered by MailChimp