

Open Vision Reasoner: Transferring Linguistic Cognitive Behavior for Visual Reasoning


Abstract

The remarkable reasoning capability of large language models (LLMs) stems from cognitive behaviors that emerge through reinforcement with verifiable rewards. This work investigates how to transfer this principle to Multimodal LLMs (MLLMs) to unlock advanced visual reasoning. We introduce a two-stage paradigm built on Qwen2.5-VL-7B: a massive linguistic cold-start fine-tuning, followed by multimodal reinforcement learning (RL) spanning nearly 1,000 steps, surpassing all previous open-source efforts in scale. This pioneering work reveals three fundamental insights: 1) Behavior transfer emerges surprisingly early in cold start due to linguistic mental imagery. 2) Cold start broadly memorizes visual behaviors, while RL critically discerns and scales up effective patterns. 3) Transfer strategically favors high-utility behaviors such as visual reflection. Our resulting model, Open-Vision-Reasoner (OVR), achieves state-of-the-art performance on a suite of reasoning benchmarks, including 95.3% on MATH500, 51.8% on MathVision and 54.6% on MathVerse. We release our model, data, and training dynamics to catalyze the development of more capable, behavior-aligned multimodal reasoners.
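To make the two-stage paradigm concrete, below is a minimal, hypothetical sketch in Python of how a linguistic cold-start fine-tuning phase followed by multimodal RL with a rule-based verifiable reward might be organized. The function and method names (sft_step, generate, policy_gradient_step) are illustrative assumptions for the sketch, not the authors' released training code.

```python
# Hypothetical sketch of the two-stage paradigm described in the abstract.
# All names below are placeholders, not the actual OVR implementation.

def verifiable_reward(response: str, gold_answer: str) -> float:
    """Rule-based reward: 1.0 if the final boxed answer matches the gold answer."""
    predicted = response.split("\\boxed{")[-1].rstrip("}").strip()
    return 1.0 if predicted == gold_answer.strip() else 0.0

def train_ovr(model, text_reasoning_data, multimodal_data, rl_steps=1000):
    # Stage 1: massive linguistic cold-start fine-tuning on text-only
    # chain-of-thought traces, so cognitive behaviors (e.g. reflection,
    # subgoal setting) are broadly memorized.
    for batch in text_reasoning_data:
        model.sft_step(batch)  # hypothetical supervised update

    # Stage 2: multimodal RL with verifiable rewards (~1,000 steps),
    # which discerns and scales up the behaviors that actually help.
    for _ in range(rl_steps):
        batch = multimodal_data.sample()
        rollouts = [model.generate(ex.prompt, ex.image) for ex in batch]
        rewards = [verifiable_reward(r, ex.answer) for r, ex in zip(rollouts, batch)]
        model.policy_gradient_step(rollouts, rewards)  # e.g. a PPO/GRPO-style update

    return model
```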
