
Abstract
This paper introduces an advanced Russian general language understanding evaluation benchmark, RussianGLUE. Recent advances in universal language models and transformers call for a methodology to broadly diagnose and test these models for general intellectual skills, including detection of natural language inference, commonsense reasoning, and the ability to perform simple logical operations regardless of text subject or lexicon. For the first time, a benchmark of nine tasks, collected and organized analogously to the SuperGLUE methodology, has been developed from scratch for the Russian language. We provide baseline results, human-level evaluation, an open-source framework for evaluating models (https://github.com/RussianNLP/RussianSuperGLUE), and an overall leaderboard of transformer models for the Russian language. In addition, we present first results of comparing multilingual models on the adapted diagnostic test set, and offer first steps toward further extending the benchmark or evaluating state-of-the-art models independently of language.
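The leaderboard below includes a "Baseline TF-IDF1.1" entry for every task. As a rough illustration of what such a bag-of-words baseline can look like, here is a minimal sketch of a TF-IDF plus logistic-regression classifier for a TERRa-style textual-entailment task. The file paths, JSON field names (`premise`, `hypothesis`, `label`), and model settings are illustrative assumptions, not the authors' exact pipeline.

```python
# Minimal TF-IDF baseline sketch for a sentence-pair classification task.
# Assumptions (not from the paper): JSON Lines files with "premise",
# "hypothesis", and "label" fields, and hypothetical train/val paths.
import json

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline


def load_jsonl(path):
    """Read one example per line, e.g. {"premise": ..., "hypothesis": ..., "label": ...}."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f]


def to_text(example):
    # Concatenate both sentences into one string for the bag-of-words model.
    return example["premise"] + " " + example["hypothesis"]


train = load_jsonl("train.jsonl")  # hypothetical paths
val = load_jsonl("val.jsonl")

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), max_features=50_000),
    LogisticRegression(max_iter=1000),
)
model.fit([to_text(ex) for ex in train], [ex["label"] for ex in train])

preds = model.predict([to_text(ex) for ex in val])
print("accuracy:", accuracy_score([ex["label"] for ex in val], preds))
```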
Code Repositories
- RussianNLP/MOROCCO (pytorch): mentioned in GitHub
- RussianNLP/RussianSuperGLUE (pytorch): official
Benchmarks
| Benchmark | Method | Metrics |
|---|---|---|
| common-sense-reasoning-on-parus | Baseline TF-IDF1.1 | Accuracy: 0.486 |
| common-sense-reasoning-on-parus | Human Benchmark | Accuracy: 0.982 |
| common-sense-reasoning-on-rucos | Human Benchmark | Average F1: 0.93, EM: 0.89 |
| common-sense-reasoning-on-rucos | Baseline TF-IDF1.1 | Average F1: 0.26, EM: 0.252 |
| common-sense-reasoning-on-rwsd | Baseline TF-IDF1.1 | Accuracy: 0.662 |
| common-sense-reasoning-on-rwsd | Human Benchmark | Accuracy: 0.84 |
| natural-language-inference-on-lidirus | Human Benchmark | MCC: 0.626 |
| natural-language-inference-on-lidirus | Baseline TF-IDF1.1 | MCC: 0.06 |
| natural-language-inference-on-rcb | Human Benchmark | Accuracy: 0.702, Average F1: 0.68 |
| natural-language-inference-on-rcb | Baseline TF-IDF1.1 | Accuracy: 0.441, Average F1: 0.301 |
| natural-language-inference-on-terra | Human Benchmark | Accuracy: 0.92 |
| natural-language-inference-on-terra | Baseline TF-IDF1.1 | Accuracy: 0.471 |
| question-answering-on-danetqa | Human Benchmark | Accuracy: 0.915 |
| question-answering-on-danetqa | Baseline TF-IDF1.1 | Accuracy: 0.621 |
| reading-comprehension-on-muserc | Baseline TF-IDF1.1 | Average F1: 0.587, EM: 0.242 |
| reading-comprehension-on-muserc | Human Benchmark | Average F1: 0.806, EM: 0.42 |
| word-sense-disambiguation-on-russe | Baseline TF-IDF1.1 | Accuracy: 0.57 |
| word-sense-disambiguation-on-russe | Human Benchmark | Accuracy: 0.805 |
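The metric columns above mix four measures: Accuracy, Average F1, exact match (EM), and Matthews correlation (MCC). Below is a minimal sketch of how such scores can be computed with scikit-learn. Note that the official benchmark averages F1 per question or passage for MuSeRC and RuCoS, so the corpus-level calls here are a simplification, and `exact_match` is a hypothetical helper for illustration.

```python
# Toy computation of the leaderboard metrics; labels are made-up examples.
from sklearn.metrics import accuracy_score, f1_score, matthews_corrcoef

y_true = [1, 0, 1, 1, 0]  # toy gold labels
y_pred = [1, 0, 0, 1, 0]  # toy predictions

print("Accuracy:", accuracy_score(y_true, y_pred))   # e.g. TERRa, DaNetQA, RUSSE
print("F1:", f1_score(y_true, y_pred))               # F1 on the positive class
print("MCC:", matthews_corrcoef(y_true, y_pred))     # LiDiRus diagnostic metric


def exact_match(pred_answers, gold_answers):
    """EM for answer-set predictions: 1.0 only if prediction equals gold exactly."""
    return float(set(pred_answers) == set(gold_answers))


print("EM:", exact_match(["Москва"], ["Москва"]))
```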