HyperAI

Boosting the Power of Small Multimodal Reasoning Models to Match Larger Models with Self-Consistency Training

Cheng Tan; Jingxuan Wei; Zhangyang Gao; Linzhuang Sun; Siyuan Li; Ruifeng Guo; Bihui Yu; Stan Z. Li

Abstract

Multimodal reasoning is a challenging task that requires models to reason across multiple modalities to answer questions. Existing approaches have made progress by incorporating language and visual modalities into a two-stage reasoning framework, separating rationale generation from answer inference. However, these approaches often fall short due to the inadequate quality of the generated rationales. In this work, we delve into the importance of rationales in model reasoning. We observe that when rationales are completely accurate, the model's accuracy significantly improves, highlighting the need for high-quality rationale generation. Motivated by this, we propose MC-CoT, a self-consistency training strategy that generates multiple rationales and answers, subsequently selecting the most accurate through a voting process. This approach not only enhances the quality of generated rationales but also leads to more accurate and robust answers. Through extensive experiments, we demonstrate that our approach significantly improves model performance across various benchmarks. Remarkably, we show that even smaller base models, when equipped with our proposed approach, can achieve results comparable to those of larger models, illustrating the potential of our approach in harnessing the power of rationales for improved multimodal reasoning. The code is available at https://github.com/chengtan9907/mc-cot.
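The core of the self-consistency strategy described above is a voting step: the model samples several (rationale, answer) pairs and keeps the answer that occurs most often, along with a rationale that supports it. A minimal sketch of such a voting step is shown below; the function name and the toy samples are illustrative assumptions, not the authors' implementation (see the linked repository for the actual MC-CoT training code).

```python
from collections import Counter

def self_consistency_vote(samples):
    """Majority-vote over (rationale, answer) samples.

    Each sample is assumed to be a (rationale, answer) pair produced by
    an independent stochastic decoding pass of the model. The most
    frequent answer wins, and a rationale from the winning cluster is
    kept as the selected explanation.
    """
    answers = [answer for _, answer in samples]
    winner, _ = Counter(answers).most_common(1)[0]
    # Keep the first rationale that led to the winning answer.
    rationale = next(r for r, a in samples if a == winner)
    return rationale, winner

# Toy example with three sampled rationale/answer pairs.
samples = [
    ("The object reflects light, so it is shiny.", "shiny"),
    ("Its surface scatters light diffusely.", "matte"),
    ("A reflective surface indicates shininess.", "shiny"),
]
print(self_consistency_vote(samples))
# -> ('The object reflects light, so it is shiny.', 'shiny')
```

Aggregating over multiple samples in this way filters out low-quality rationales that lead to minority answers, which is the mechanism the paper credits for the improved rationale quality.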

Code Repositories

chengtan9907/mc-cot (official, PyTorch)

Benchmarks

Science Question Answering on ScienceQA — MC-CoT F-Large
  Avg. Accuracy: 94.88
  Grades 1-6: 95.3
  Grades 7-12: 94.13
  Image Context: 93.75
  Language Science: 93.18
  Natural Science: 97.47
  No Context: 94.49
  Social Science: 90.44
  Text Context: 96.97

Visual Question Answering on A-OKVQA — MC-CoT
  MC Accuracy: 71
