Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering

Pan Lu; Swaroop Mishra; Tony Xia; Liang Qiu; Kai-Wei Chang; Song-Chun Zhu; Oyvind Tafjord; Peter Clark; Ashwin Kalyan


Abstract

When answering a question, humans utilize the information available across different modalities to synthesize a consistent and complete chain of thought (CoT). This process is normally a black box in the case of deep learning models like large-scale language models. Recently, science question benchmarks have been used to diagnose the multi-hop reasoning ability and interpretability of an AI system. However, existing datasets fail to provide annotations for the answers, or are restricted to the textual-only modality, small scales, and limited domain diversity. To this end, we present Science Question Answering (ScienceQA), a new benchmark that consists of ~21k multimodal multiple choice questions with a diverse set of science topics and annotations of their answers with corresponding lectures and explanations. We further design language models to learn to generate lectures and explanations as the chain of thought (CoT) to mimic the multi-hop reasoning process when answering ScienceQA questions. ScienceQA demonstrates the utility of CoT in language models, as CoT improves the question answering performance by 1.20% in few-shot GPT-3 and 3.99% in fine-tuned UnifiedQA. We also explore the upper bound for models to leverage explanations by feeding those in the input; we observe that it improves the few-shot performance of GPT-3 by 18.96%. Our analysis further shows that language models, similar to humans, benefit from explanations to learn from fewer data and achieve the same performance with just 40% of the data. The data and code are available at https://scienceqa.github.io.
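
To make the QCM→ALE setup concrete, the sketch below shows one plausible way a few-shot chain-of-thought prompt could be assembled: each in-context example presents the question, context, and multiple-choice options (QCM) followed by the answer, lecture, and explanation (ALE), and the test question is left open for the model to complete. The field names (`question`, `choices`, `answer`, `hint`, `lecture`, `solution`) and the template wording are assumptions for illustration, not the paper's exact prompt format.

```python
# Minimal sketch of a QCM->ALE chain-of-thought prompt for few-shot prompting.
# Assumes ScienceQA-style records with "question", "choices", "answer", "hint",
# "lecture", and "solution" fields; wording and field names are illustrative.

def format_example(ex, include_output=True):
    """Render one example as Question/Context/Options (QCM), optionally followed
    by the Answer, Lecture, and Explanation (the ALE output used as CoT)."""
    options = " ".join(f"({chr(65 + i)}) {c}" for i, c in enumerate(ex["choices"]))
    prompt = (
        f"Question: {ex['question']}\n"
        f"Context: {ex.get('hint') or 'N/A'}\n"
        f"Options: {options}\n"
    )
    if include_output:
        # In-context example: reveal the answer index plus lecture and explanation.
        prompt += (
            f"Answer: The answer is ({chr(65 + ex['answer'])}). "
            f"BECAUSE: {ex['lecture']} {ex['solution']}\n"
        )
    else:
        # Test example: leave the output for the model to complete.
        prompt += "Answer:"
    return prompt


def build_cot_prompt(train_examples, test_example, n_shots=2):
    """Concatenate n in-context examples with full ALE outputs, then the test
    question with the output left blank."""
    shots = [format_example(ex) for ex in train_examples[:n_shots]]
    return "\n\n".join(shots + [format_example(test_example, include_output=False)])
```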

Code Repositories

lupantech/ScienceQA (official, PyTorch)

Benchmarks

Science Question Answering on ScienceQA (accuracy, %)

| Metric | GPT-3 (QCM→A, 2-shot) | GPT-3 CoT (QCM→AE, 2-shot) | GPT-3 CoT (QCM→ALE, 2-shot) | UnifiedQA-BASE CoT (QCM→ALE) |
|---|---|---|---|---|
| Avg. Accuracy | 73.97 | 74.61 | 75.17 | 74.11 |
| Grades 1-6 | 76.80 | 78.49 | 78.23 | 77.06 |
| Grades 7-12 | 68.89 | 67.63 | 69.68 | 68.82 |
| Image Context | 67.28 | 66.09 | 67.43 | 66.53 |
| Language Science | 76.00 | 77.55 | 78.09 | 78.91 |
| Natural Science | 74.64 | 76.60 | 75.44 | 71.00 |
| No Context | 77.42 | 79.58 | 79.93 | 81.81 |
| Social Science | 69.74 | 65.92 | 70.87 | 76.04 |
| Text Context | 74.44 | 75.51 | 74.68 | 66.42 |
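
The per-category rows above (grade band, subject, and context type) suggest a simple evaluation breakdown. The sketch below is a hypothetical way to compute such a report, assuming each record carries `answer`, `subject`, `grade`, `image`, and `hint` fields; it is not the benchmark's official evaluation script.

```python
# Hypothetical per-category accuracy report for ScienceQA-style predictions.
# Assumes metadata fields "answer", "subject", "grade", "image", and "hint";
# grouping keys are illustrative, not the official evaluation code.
from collections import defaultdict

def accuracy_report(examples, predictions):
    """Return overall and per-group accuracy (%), grouped by subject,
    grade band, and context type (image / text / none)."""
    groups = defaultdict(list)
    for ex, pred in zip(examples, predictions):
        correct = int(pred == ex["answer"])
        groups["Avg. Accuracy"].append(correct)
        groups[ex["subject"].title()].append(correct)       # e.g. "natural science"
        grade = int(ex["grade"].replace("grade", ""))        # e.g. "grade5" -> 5
        groups["Grades 1-6" if grade <= 6 else "Grades 7-12"].append(correct)
        if ex.get("image"):
            groups["Image Context"].append(correct)
        if ex.get("hint"):
            groups["Text Context"].append(correct)
        if not ex.get("image") and not ex.get("hint"):
            groups["No Context"].append(correct)
    return {name: 100.0 * sum(hits) / len(hits) for name, hits in groups.items()}
```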
