DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving

Yuxuan Tong, Xiwen Zhang, Rui Wang, Ruidong Wu, Junxian He

Abstract

Solving mathematical problems requires advanced reasoning abilities and presents notable challenges for large language models. Prior work typically synthesizes data from proprietary models to augment existing datasets, then applies instruction tuning to achieve top-tier results. However, our analysis of these datasets reveals severe biases toward easy queries, with frequent failures to generate any correct response for the most challenging queries. Hypothesizing that difficult queries are crucial for learning complex reasoning, we propose Difficulty-Aware Rejection Tuning (DART), a method that allocates more sampling trials to difficult queries during the synthesis phase, enabling more extensive training on difficult samples. Using DART, we have created new datasets for mathematical problem-solving that focus more on difficult queries and are substantially smaller than previous ones. Remarkably, our synthesis process relies solely on a 7B-sized open-weight model, without the commonly used proprietary GPT-4. We fine-tune various base models, ranging from 7B to 70B in size, on our datasets, resulting in a series of strong models called DART-Math. In comprehensive in-domain and out-of-domain evaluation on six mathematical benchmarks, DART-Math significantly outperforms vanilla rejection tuning and is superior or comparable to prior state-of-the-art methods, despite using much smaller datasets and no proprietary models. Furthermore, these results position our synthetic datasets as the most effective and cost-efficient publicly available resources for advancing mathematical problem-solving.
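The core idea of difficulty-aware rejection sampling can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration of the two strategies reflected in the benchmark names (Uniform: keep sampling until every query has the same number of correct responses; Prop2Diff: allot sampling targets in proportion to query difficulty), not the paper's exact algorithm. `sample_response` and `is_correct` are stand-in placeholders for the open-weight synthesis model and the answer checker.

```python
import random

def is_correct(response, answer):
    # Hypothetical checker: compare final-answer strings.
    return response == answer

def sample_response(query):
    # Stand-in for querying the 7B open-weight synthesis model.
    # Simulated here: harder queries succeed less often.
    return "42" if random.random() > query["difficulty"] else "wrong"

def dars_uniform(queries, k=4, max_trials=100):
    """DARS-Uniform sketch: sample until every query has k correct
    responses (capped at max_trials), so hard queries get more
    trials instead of being under-represented or dropped."""
    dataset = []
    for q in queries:
        correct, trials = [], 0
        while len(correct) < k and trials < max_trials:
            trials += 1
            r = sample_response(q)
            if is_correct(r, q["answer"]):
                correct.append((q["question"], r))
        dataset.extend(correct)
    return dataset

def dars_prop2diff(queries, total_k=8, max_trials=100):
    """DARS-Prop2Diff sketch: per-query targets proportional to a
    difficulty score in [0, 1] (harder -> more samples), at least
    one per query. A simplification of the paper's scheme."""
    weight = sum(q["difficulty"] for q in queries) or 1.0
    dataset = []
    for q in queries:
        k_q = max(1, round(total_k * q["difficulty"] / weight))
        dataset.extend(dars_uniform([q], k=k_q, max_trials=max_trials))
    return dataset
```

The contrast with vanilla rejection tuning is that a fixed trial budget per query tends to yield many correct responses for easy queries and none for hard ones; fixing the number of *correct* responses (or tying it to difficulty) inverts where the sampling effort goes.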

Code Repositories

hkust-nlp/dart-math (official implementation, PyTorch)

Benchmarks

All results below are 0-shot CoT, without code execution.

GSM8K (arithmetic reasoning)

Method                            Accuracy   Parameters (B)
DART-Math-Llama3-8B-Uniform       82.5       8
DART-Math-Mistral-7B-Uniform      82.6       7
DART-Math-Llama3-70B-Uniform      90.4       70
DART-Math-DSMath-7B-Uniform       88.2       7
DART-Math-Mistral-7B-Prop2Diff    81.1       7
DART-Math-Llama3-8B-Prop2Diff     81.1       8
DART-Math-Llama3-70B-Prop2Diff    89.6       70
DART-Math-DSMath-7B-Prop2Diff     86.8       7

MATH (math word problem solving)

Method                            Accuracy   Parameters (B)
DART-Math-Mistral-7B-Prop2Diff    45.5       7
DART-Math-Llama3-8B-Uniform       45.3       8
DART-Math-Mistral-7B-Uniform      43.5       7
DART-Math-Llama3-70B-Uniform      54.9       70
DART-Math-Llama3-70B-Prop2Diff    56.1       70
DART-Math-DSMath-7B-Prop2Diff     53.6       7
DART-Math-Llama3-8B-Prop2Diff     46.6       8
DART-Math-DSMath-7B-Uniform       52.9       7

TheoremQA

Method                            Accuracy
DART-Math-Llama3-8B-Uniform       15.4
DART-Math-Llama3-70B-Uniform      27.4
DART-Math-Mistral-7B-Uniform      16.4
DART-Math-Llama3-70B-Prop2Diff    28.2
DART-Math-DSMath-7B-Prop2Diff     32.2
DART-Math-Llama3-8B-Prop2Diff     19.4
DART-Math-Mistral-7B-Prop2Diff    17.0
DART-Math-DSMath-7B-Uniform       32.5
