Multimodal Chain-of-Thought Reasoning in Language Models
Zhuosheng Zhang; Aston Zhang; Mu Li; Hai Zhao; George Karypis; Alex Smola

Abstract
Large language models (LLMs) have shown impressive performance on complex reasoning by leveraging chain-of-thought (CoT) prompting to generate intermediate reasoning chains as the rationale to infer the answer. However, existing CoT studies have primarily focused on the language modality. We propose Multimodal-CoT that incorporates language (text) and vision (images) modalities into a two-stage framework that separates rationale generation and answer inference. In this way, answer inference can leverage better generated rationales that are based on multimodal information. Experimental results on ScienceQA and A-OKVQA benchmark datasets show the effectiveness of our proposed approach. With Multimodal-CoT, our model under 1 billion parameters achieves state-of-the-art performance on the ScienceQA benchmark. Our analysis indicates that Multimodal-CoT offers the advantages of mitigating hallucination and enhancing convergence speed. Code is publicly available at https://github.com/amazon-science/mm-cot.
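The abstract describes a two-stage framework that separates rationale generation from answer inference, with both stages conditioned on text and vision inputs. Below is a minimal Python sketch of that control flow only; `MultimodalExample`, `RationaleGenerator`, and `AnswerInferrer` are hypothetical stand-ins for illustration, not the authors' released code (see the linked repository for the actual implementation).

```python
# Sketch of the two-stage Multimodal-CoT inference flow described in the abstract.
# All classes here are hypothetical placeholders, not the released mm-cot code.

from dataclasses import dataclass
from typing import List


@dataclass
class MultimodalExample:
    question: str
    context: str          # textual context; may be empty
    options: List[str]    # answer choices
    image_features: list  # pre-extracted vision features (e.g. from a frozen encoder)


class RationaleGenerator:
    """Stage 1 (hypothetical): fuse text and vision inputs, generate a rationale."""

    def generate(self, ex: MultimodalExample) -> str:
        prompt = f"Question: {ex.question}\nContext: {ex.context}\nOptions: {ex.options}"
        # A real implementation would condition a vision-language seq2seq model
        # on `prompt` plus `ex.image_features`; here we return a placeholder.
        return "<generated rationale conditioned on text and image>"


class AnswerInferrer:
    """Stage 2 (hypothetical): infer the answer from the input plus the rationale."""

    def infer(self, ex: MultimodalExample, rationale: str) -> str:
        prompt = (
            f"Question: {ex.question}\nContext: {ex.context}\n"
            f"Options: {ex.options}\nRationale: {rationale}\nAnswer:"
        )
        # A real implementation would decode the answer with the same kind of
        # multimodal model; here we return a placeholder choice.
        return ex.options[0]


def multimodal_cot(ex: MultimodalExample) -> str:
    """Two-stage pipeline: rationale generation, then answer inference."""
    rationale = RationaleGenerator().generate(ex)  # stage 1
    return AnswerInferrer().infer(ex, rationale)   # stage 2


if __name__ == "__main__":
    example = MultimodalExample(
        question="Which property do these objects have in common?",
        context="",
        options=["hard", "soft"],
        image_features=[],  # would come from an image encoder in practice
    )
    print(multimodal_cot(example))
```

The point of the sketch is the separation of stages: answer inference consumes a rationale that was itself generated from multimodal input, which is what the paper argues improves the quality of the final answer.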
Code Repositories

- https://github.com/amazon-science/mm-cot
Benchmarks
| Benchmark | Methodology | Metric | Score |
|---|---|---|---|
| science-question-answering-on-scienceqa | Multimodal-CoT | Avg. Accuracy | 91.68 |
| | | Grades 1-6 | 92.44 |
| | | Grades 7-12 | 90.31 |
| | | Image Context | 88.80 |
| | | Language Science | 90.82 |
| | | Natural Science | 95.91 |
| | | No Context | 92.89 |
| | | Social Science | 82.00 |
| | | Text Context | 95.26 |