HyperAI

Large Language Models are Zero-Shot Reasoners

Takeshi Kojima; Shixiang Shane Gu; Machel Reid; Yutaka Matsuo; Yusuke Iwasawa

Abstract

Pretrained large language models (LLMs) are widely used in many sub-fields of natural language processing (NLP) and are generally known as excellent few-shot learners when given task-specific exemplars. Notably, chain-of-thought (CoT) prompting, a recent technique for eliciting complex multi-step reasoning through step-by-step answer examples, has achieved state-of-the-art performance in arithmetic and symbolic reasoning, difficult system-2 tasks that do not follow the standard scaling laws for LLMs. While these successes are often attributed to LLMs' few-shot learning ability, we show that LLMs are decent zero-shot reasoners if we simply add "Let's think step by step" before each answer. Experimental results demonstrate that our Zero-shot-CoT, using the same single prompt template, significantly outperforms zero-shot LLM performance on diverse benchmark reasoning tasks including arithmetic (MultiArith, GSM8K, AQUA-RAT, SVAMP), symbolic reasoning (Last Letter, Coin Flip), and other logical reasoning tasks (Date Understanding, Tracking Shuffled Objects), without any hand-crafted few-shot examples, e.g. increasing the accuracy on MultiArith from 17.7% to 78.7% and on GSM8K from 10.4% to 40.7% with the large InstructGPT model (text-davinci-002), with improvements of similar magnitude for another off-the-shelf large model, the 540B-parameter PaLM. The versatility of this single prompt across very diverse reasoning tasks hints at untapped and understudied fundamental zero-shot capabilities of LLMs, suggesting that high-level, multi-task, broad cognitive capabilities may be extracted by simple prompting. We hope our work not only serves as the minimal strongest zero-shot baseline for challenging reasoning benchmarks, but also highlights the importance of carefully exploring and analyzing the enormous zero-shot knowledge hidden inside LLMs before crafting finetuning datasets or few-shot exemplars.
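The method the abstract describes is a two-stage prompting pipeline: first elicit step-by-step reasoning with the "Let's think step by step" trigger, then feed the generated reasoning back with an answer-extraction trigger. A minimal sketch, where `model_fn` is a hypothetical stand-in for any text-completion LLM call:

```python
# Sketch of the two-stage Zero-shot-CoT prompting pipeline.
# `model_fn` is a hypothetical placeholder for an LLM completion call
# (prompt string in, generated text out); the templates follow the
# "Let's think step by step" recipe described in the abstract.

def zero_shot_cot_prompt(question, reasoning=None):
    """Build the Zero-shot-CoT prompt for one of the two stages.

    Stage 1 (reasoning=None): elicit step-by-step reasoning.
    Stage 2: append the model's reasoning plus an answer-extraction trigger.
    """
    stage1 = f"Q: {question}\nA: Let's think step by step."
    if reasoning is None:
        return stage1
    return f"{stage1} {reasoning}\nTherefore, the answer (arabic numerals) is"

def answer_with_zero_shot_cot(question, model_fn):
    # Stage 1: reasoning extraction.
    reasoning = model_fn(zero_shot_cot_prompt(question))
    # Stage 2: answer extraction, conditioned on the generated reasoning.
    return model_fn(zero_shot_cot_prompt(question, reasoning))
```

The same single template is applied to every task, which is what makes the method zero-shot: no task-specific exemplars are ever inserted into the prompt.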

Code Repositories

kojima-takeshi188/zero_shot_cot (official, PyTorch)
zongqianwu/st-cot (PyTorch)

Benchmarks

Arithmetic Reasoning on GSM8K
- Text-davinci-002 175B (zero-plus-few-shot CoT, 8 samples): Accuracy 51.5 | Parameters 175B
- PaLM 540B (few-shot): Accuracy 17.9 | Parameters 540B
- Finetuned GPT-3 175B + verifier: Accuracy 55.0 | Parameters 175B
- Text-davinci-002 175B (0-shot, CoT): Accuracy 40.7 | Parameters 175B
- Text-davinci-002 175B (0-shot): Accuracy 10.4 | Parameters 175B
- PaLM 540B (few-shot CoT): Accuracy 58.1 | Parameters 540B
- Text-davinci-002 175B (2-shot, CoT): Accuracy 41.3 | Parameters 175B

Arithmetic Reasoning on MultiArith
- Text-davinci-002 175B (zero-shot): Accuracy 17.7
- Text-davinci-002 175B (zero-shot CoT): Accuracy 78.7

Common-Sense Reasoning on ReCoRD
- GPT-3 175B (one-shot): F1 90.2

Math Word Problem Solving on SVAMP
- PaLM (zero-shot): Execution Accuracy 58.8
- PaLM (zero-shot, CoT): Execution Accuracy 62.1
