
Abstract
Designing natural language processing (NLP) solvers for math word problems (MWPs) has been an active area of research, with steady gains in test accuracy in recent years. Because existing solvers already perform well on elementary-school benchmark datasets of one-unknown arithmetic word problems, such problems are often considered "solved," and research attention has shifted to more complex MWPs. This paper focuses on English MWPs taught in grade four and below. We provide strong evidence that existing MWP solvers achieve their high benchmark accuracy largely by relying on shallow heuristics rather than on genuine understanding of the problem text. In particular, we find that solvers can correctly answer a large fraction of problems even without access to the question being asked; likewise, models that treat the MWP as a bag of words also achieve surprisingly high accuracy. In addition, we introduce a challenge dataset, SVAMP, created by applying carefully chosen semantic and syntactic variations to examples sampled from existing datasets. State-of-the-art models perform substantially worse on SVAMP, showing that much remains to be done even for the simplest of MWPs.
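To make the "bag-of-words" finding concrete, the sketch below shows one way such a probe could be set up; it is not the paper's implementation (see arkilpatel/SVAMP for the official code). Word order is discarded by sum-pooling token embeddings, and a classifier predicts an equation template. The class name, vocabulary size, template inventory, and hyperparameters are all hypothetical.

```python
# Minimal sketch, assuming the MWP is reduced to an unordered multiset of tokens
# and the target is one of a small set of equation templates (e.g. "n0 + n1", "n0 - n1").
import torch
import torch.nn as nn

class BagOfWordsMWPClassifier(nn.Module):
    def __init__(self, vocab_size: int, num_templates: int, embed_dim: int = 128):
        super().__init__()
        # EmbeddingBag with mode="sum" pools embeddings order-insensitively.
        self.embedding = nn.EmbeddingBag(vocab_size, embed_dim, mode="sum")
        self.classifier = nn.Sequential(
            nn.Linear(embed_dim, embed_dim),
            nn.ReLU(),
            nn.Linear(embed_dim, num_templates),
        )

    def forward(self, token_ids: torch.Tensor, offsets: torch.Tensor) -> torch.Tensor:
        # token_ids: flat 1-D tensor of token indices for a batch of problems
        # offsets:   start index of each problem within token_ids
        pooled = self.embedding(token_ids, offsets)  # word order is lost here
        return self.classifier(pooled)               # logits over equation templates

# Toy usage with two problems and two hypothetical templates.
model = BagOfWordsMWPClassifier(vocab_size=5000, num_templates=2)
token_ids = torch.tensor([11, 42, 7, 42, 99, 3, 11, 8])  # illustrative token indices
offsets = torch.tensor([0, 5])                            # problem boundaries
logits = model(token_ids, offsets)
print(logits.shape)  # torch.Size([2, 2])
```

If an order-insensitive model of this kind still answers a large fraction of benchmark problems correctly, the benchmark rather than the solver is being measured, which is the motivation for the SVAMP variations.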
Code Repositories
- arkilpatel/SVAMP (Official, PyTorch, mentioned in GitHub)
- vedantgaur/symbolic-mwp-reasoning (mentioned in GitHub)
- debjitpaul/refiner (PyTorch, mentioned in GitHub)
Benchmarks
| Benchmark | Method | Metrics |
|---|---|---|
| math-word-problem-solving-on-asdiv-a | LSTM Seq2Seq with RoBERTa | Execution Accuracy: 76.9 |
| math-word-problem-solving-on-asdiv-a | Graph2Tree with RoBERTa | Execution Accuracy: 82.2 |
| math-word-problem-solving-on-asdiv-a | GTS with RoBERTa | Execution Accuracy: 81.2 |
| math-word-problem-solving-on-mawps | GTS with RoBERTa | Accuracy (%): 88.5 |
| math-word-problem-solving-on-mawps | Graph2Tree with RoBERTa | Accuracy (%): 88.7 |
| math-word-problem-solving-on-svamp | GTS with RoBERTa | Accuracy: 41.0; Execution Accuracy: 41.0 |
| math-word-problem-solving-on-svamp | LSTM Seq2Seq with RoBERTa | Accuracy: 40.3; Execution Accuracy: 40.3 |
| math-word-problem-solving-on-svamp | Graph2Tree with RoBERTa | Accuracy: 43.8; Execution Accuracy: 43.8 |
| math-word-problem-solving-on-svamp | Transformer with RoBERTa | Accuracy: 38.9; Execution Accuracy: 38.9 |