Measuring and Narrowing the Compositionality Gap in Language Models

Ofir Press Muru Zhang Sewon Min Ludwig Schmidt Noah A. Smith Mike Lewis

Abstract

We investigate the ability of language models to perform compositional reasoning tasks where the overall solution depends on correctly composing the answers to sub-problems. We measure how often models can correctly answer all sub-problems but not generate the overall solution, a ratio we call the compositionality gap. We evaluate this ratio by asking multi-hop questions with answers that require composing multiple facts unlikely to have been observed together during pretraining. In the GPT-3 family of models, we show that as model size increases, single-hop question answering performance improves faster than multi-hop performance does; the compositionality gap therefore does not decrease. This surprising result suggests that while more powerful models memorize and recall more factual knowledge, they show no corresponding improvement in their ability to perform this kind of compositional reasoning. We then demonstrate how elicitive prompting (such as chain of thought) narrows the compositionality gap by reasoning explicitly. We present a new method, self-ask, that further improves on chain of thought. In our method, the model explicitly asks itself (and answers) follow-up questions before answering the initial question. We finally show that self-ask's structured prompting lets us easily plug in a search engine to answer the follow-up questions, which additionally improves accuracy.
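
The compositionality gap can be computed directly from per-question correctness flags: among questions whose sub-questions were all answered correctly, it is the fraction for which the composed multi-hop question was still answered incorrectly. A minimal sketch in Python, with hypothetical field names rather than the paper's actual evaluation code:

```python
# Sketch of the compositionality-gap metric described in the abstract.
# QuestionResult and its fields are hypothetical bookkeeping, not the
# authors' implementation.

from dataclasses import dataclass
from typing import List


@dataclass
class QuestionResult:
    sub_answers_correct: List[bool]  # one flag per single-hop sub-question
    multi_hop_correct: bool          # flag for the composed multi-hop question


def compositionality_gap(results: List[QuestionResult]) -> float:
    """Fraction of questions where all sub-questions were answered
    correctly but the composed multi-hop answer was wrong."""
    composable = [r for r in results if all(r.sub_answers_correct)]
    if not composable:
        return 0.0
    failures = sum(1 for r in composable if not r.multi_hop_correct)
    return failures / len(composable)
```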

Code Repositories

ofirpress/self-ask (official)

Benchmarks

| Benchmark | Methodology | Metric |
| --- | --- | --- |
| Question Answering on Bamboogle | Self-ask (GPT-3; davinci-002) | Accuracy: 57.6 |
| Question Answering on Bamboogle | Self-ask (GPT-3; davinci-002) + Google Search | Accuracy: 60.0 |
| Question Answering on Bamboogle | Google Search | Accuracy: 0 |
| Question Answering on Bamboogle | Chain-of-Thought (GPT-3; davinci-002) | Accuracy: 46.4 |
| Question Answering on Bamboogle | Direct Prompting (GPT-3; davinci-002) | Accuracy: 17.6 |
| Question Answering on FEVER | Self-Ask | EM: 64.2 |
| Question Answering on WebQuestions | Self-Ask | EM: 31.1 |
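
The self-ask rows above come from the structured prompt the abstract describes: the model writes "Follow up:" questions and "Intermediate answer:" lines before a final "So the final answer is:" line, and the "+ Google Search" variant intercepts each follow-up and answers it with a search engine instead of trusting the model's own guess. A minimal sketch of that loop, assuming hypothetical `complete()` and `search()` stand-ins for the completion model and the search API:

```python
# Minimal sketch of the self-ask loop with a pluggable search engine.
# `complete` and `search` are hypothetical stand-ins; the prompt markers
# mirror the format described in the abstract, not the exact repo code.

FOLLOW_UP = "Follow up:"
INTERMEDIATE = "Intermediate answer:"
FINAL = "So the final answer is:"


def complete(prompt: str) -> str:
    """Stand-in for a completion call to an LM such as davinci-002."""
    raise NotImplementedError


def search(query: str) -> str:
    """Stand-in for a search engine answering a single-hop question."""
    raise NotImplementedError


def self_ask(question: str, few_shot_prefix: str, max_hops: int = 5) -> str:
    # few_shot_prefix holds worked examples written in the same format.
    prompt = (f"{few_shot_prefix}Question: {question}\n"
              "Are follow up questions needed here: Yes.\n")
    for _ in range(max_hops):
        generated = complete(prompt)
        follow_pos = generated.find(FOLLOW_UP)
        final_pos = generated.find(FINAL)
        if follow_pos != -1 and (final_pos == -1 or follow_pos < final_pos):
            # Intercept the first follow-up question and answer it with the
            # search engine rather than keeping the model's own guess.
            head = generated[:follow_pos]
            rest = generated[follow_pos + len(FOLLOW_UP):]
            follow_up_q = rest.split("\n", 1)[0].strip()
            prompt += (f"{head}{FOLLOW_UP} {follow_up_q}\n"
                       f"{INTERMEDIATE} {search(follow_up_q)}\n")
        elif final_pos != -1:
            # The model has composed its final answer.
            tail = generated[final_pos + len(FINAL):]
            return tail.split("\n", 1)[0].strip()
        else:
            break  # no recognizable marker; give up
    return ""
```

Because the search engine answers every follow-up, the model only has to decompose the question and compose the intermediate answers, which is where the "+ Google Search" accuracy gain in the table comes from.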
