HyperAI

Answering Questions by Meta-Reasoning over Multiple Chains of Thought

Ori Yoran Tomer Wolfson Ben Bogin Uri Katz Daniel Deutch Jonathan Berant

Abstract

Modern systems for multi-hop question answering (QA) typically break questions into a sequence of reasoning steps, termed chain-of-thought (CoT), before arriving at a final answer. Often, multiple chains are sampled and aggregated through a voting mechanism over the final answers, but the intermediate steps themselves are discarded. While such approaches improve performance, they do not consider the relations between intermediate steps across chains and do not provide a unified explanation for the predicted answer. We introduce Multi-Chain Reasoning (MCR), an approach which prompts large language models to meta-reason over multiple chains of thought, rather than aggregating their answers. MCR examines different reasoning chains, mixes information between them and selects the most relevant facts in generating an explanation and predicting the answer. MCR outperforms strong baselines on 7 multi-hop QA datasets. Moreover, our analysis reveals that MCR explanations exhibit high quality, enabling humans to verify its answers.
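The contrast the abstract draws, between answer-level voting over sampled chains and MCR-style meta-reasoning over their intermediate steps, can be illustrated with a minimal sketch. The helpers `self_consistency`, `meta_reason`, and `toy_meta_llm` below are hypothetical illustrations, not the paper's released code; a real meta-reasoner would prompt an LLM with the concatenated chains.

```python
from collections import Counter

def self_consistency(chains):
    """Answer-level voting: keep only each chain's final answer and
    take the majority; intermediate steps are discarded."""
    answers = [chain[-1] for chain in chains]
    return Counter(answers).most_common(1)[0][0]

def meta_reason(chains, meta_llm):
    """MCR-style aggregation: concatenate the full intermediate steps of
    every chain into one context, so the meta-reasoner can mix facts
    across chains and produce an answer with a unified explanation."""
    context = "\n".join(
        f"Chain {i + 1}: " + " -> ".join(chain)
        for i, chain in enumerate(chains)
    )
    return meta_llm(context)

# Toy chains for "What is the capital of the country the Eiffel Tower is in?"
chains = [
    ["Eiffel Tower is in Paris", "Paris is in France", "Paris"],
    ["Eiffel Tower is in France", "Capital of France is Paris", "Paris"],
    ["Eiffel Tower is in Lyon", "Capital is Lyon", "Lyon"],
]

print(self_consistency(chains))  # majority answer: Paris

# Trivial stand-in meta-reasoner: picks the better-supported entity.
def toy_meta_llm(context):
    return "Paris" if context.count("Paris") > context.count("Lyon") else "Lyon"

print(meta_reason(chains, toy_meta_llm))  # Paris
```

The key design difference is what crosses the aggregation boundary: self-consistency passes only final answers, while meta-reasoning passes every intermediate step, letting evidence from one chain correct or complete another.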

Code Repositories

oriyor/reasoning-on-cots (Official)

Benchmarks

Benchmark: question-answering-on-bamboogle
Methodology: MCR (code-davinci-002) + Google Search
Metrics: Accuracy: 66.5
