
Abstract
Large language models (LLMs) exhibit emergent abilities on mathematical reasoning tasks, and there is growing interest in enhancing the math reasoning ability of open-source LLMs through supervised fine-tuning (SFT). In this paper, we explore a general supervised-data construction strategy for optimizing and expanding a model's math reasoning ability. First, we determine the ability boundary of reasoning-path augmentation by identifying a minimal optimal set of reasoning paths. Second, we validate that different abilities of the model can be cumulatively enhanced by a Mix of Minimal Optimal Sets (MMOS) of the corresponding data types; under this strategy, our MMOS models achieve state-of-the-art (SOTA) performance across a series of base models at a much lower data construction cost. In addition, we show that GSM-HARD is not actually hard and that today's LLMs no longer lack numerical robustness. We also provide an automatic problem generator for robustness testing and educational applications. Our code and data are publicly available at https://github.com/cyzhh/MMOS.
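To make the two key ideas in the abstract concrete, here are two informal sketches. The first illustrates the "minimal optimal set" notion: among many sampled reasoning paths per question, keep only a few correct, deduplicated ones. The field names (`question_id`, `reasoning`, `answer`, `gold`), the whitespace-based deduplication, and the cap `k_max` are assumptions made for this example, not the authors' actual schema or selection rule.

```python
from collections import defaultdict

def minimal_optimal_set(samples, k_max=4):
    """Keep a small, deduplicated set of correct reasoning paths per question.

    `samples` is a list of dicts with hypothetical keys
    {"question_id", "reasoning", "answer", "gold"}.
    """
    kept = defaultdict(list)
    for s in samples:
        # discard paths whose final answer does not match the gold label
        if s["answer"] != s["gold"]:
            continue
        # crude dedup: skip paths whose whitespace-normalized text already appears
        normalized = " ".join(s["reasoning"].split())
        seen = {" ".join(p.split()) for p in kept[s["question_id"]]}
        if normalized in seen or len(kept[s["question_id"]]) >= k_max:
            continue
        kept[s["question_id"]].append(s["reasoning"])
    return dict(kept)
```

The second sketch is a toy version of an automatic problem generator for numerical-robustness testing: it perturbs the numbers in an existing word problem. A real generator would also recompute the gold answer (e.g., by executing a program-of-thought solution on the new numbers), which is omitted here.

```python
import random
import re

def perturb_numbers(problem: str, seed: int = 0) -> str:
    """Replace each integer in a word problem with a random value of similar magnitude."""
    rng = random.Random(seed)

    def swap(match):
        value = int(match.group())
        lo, hi = max(2, value // 2), max(3, value * 2)
        return str(rng.randint(lo, hi))

    return re.sub(r"\d+", swap, problem)

# Example: produces a numerically perturbed variant of the same problem.
print(perturb_numbers("Tom has 12 apples and buys 5 more. How many apples does he have now?"))
```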
Code Repository
cyzhh/MMOS
Official
Mentioned in GitHub
Benchmarks
| Benchmark | Method | Metrics |
|---|---|---|
| arithmetic-reasoning-on-gsm8k | MMOS-CODE-7B (0-shot) | Accuracy: 73.9; Parameters (Billions): 7 |
| arithmetic-reasoning-on-gsm8k | MMOS-DeepSeekMath-7B (0-shot) | Accuracy: 80.5; Parameters (Billions): 7 |
| arithmetic-reasoning-on-gsm8k | MMOS-CODE-34B (0-shot) | Accuracy: 80.4; Parameters (Billions): 34 |
| arithmetic-reasoning-on-gsm8k | MMOS-DeepSeekMath-7B (0-shot, k=50) | Accuracy: 87.2; Parameters (Billions): 7 |
| automated-theorem-proving-on-minif2f-test | MMOS-DeepSeekMath-7B | ITP: Lean; Pass@1: 28.3; Cumulative: 28.3 |
| math-word-problem-solving-on-asdiv-a | MMOS-CODE-34B (0-shot) | Execution Accuracy: 85.1 |
| math-word-problem-solving-on-asdiv-a | MMOS-CODE-7B (0-shot) | Execution Accuracy: 78.6 |
| math-word-problem-solving-on-asdiv-a | MMOS-DeepSeekMath-7B (0-shot) | Execution Accuracy: 87.6 |
| math-word-problem-solving-on-math | MMOS-DeepSeekMath-7B (0-shot) | Accuracy: 55.0; Parameters (Billions): 7 |
| math-word-problem-solving-on-math | MMOS-CODE-7B (0-shot) | Accuracy: 44.3; Parameters (Billions): 7 |
| math-word-problem-solving-on-math | MMOS-CODE-34B (0-shot) | Accuracy: 49.5; Parameters (Billions): 34 |
| math-word-problem-solving-on-math | MMOS-DeepSeekMath-7B (0-shot, k=50) | Accuracy: 63.7; Parameters (Billions): 7 |
| math-word-problem-solving-on-svamp | MMOS-CODE-7B (0-shot) | Execution Accuracy: 76.4 |
| math-word-problem-solving-on-svamp | MMOS-DeepSeekMath-7B (0-shot) | Execution Accuracy: 79.3 |
| math-word-problem-solving-on-svamp | MMOS-CODE-34B (0-shot) | Execution Accuracy: 80.6 |