Grammatical Error Correction on UA-GEC
Evaluation metric
F0.5
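For context, F0.5 weights precision twice as heavily as recall, the usual choice in grammatical error correction since proposing a wrong correction is considered more harmful than missing one. A general definition, assuming precision P and recall R are computed over proposed edits (the exact edit-matching procedure depends on the scorer used by the benchmark):

```latex
F_{0.5} = \frac{(1 + 0.5^2)\, P \cdot R}{0.5^2 \cdot P + R}
```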
Results
Performance of the models on this benchmark
| Model | F0.5 | Paper Title | Repository |
|---|---|---|---|
| Llama + 1M BT + gold | 74.09 | To Err Is Human, but Llamas Can Learn It Too | |
| mBART-based model with synthetic data | 68.17 | Comparative study of models trained on synthetic data for Ukrainian grammatical error correction | - |
| mT5 large + 10M synth | 68.09 | A Low-Resource Approach to the Grammatical Error Correction of Ukrainian | - |
| RedPenNet | 67.71 | RedPenNet for Grammatical Error Correction: Outputs to Tokens, Attentions to Spans | |
| ChatGPT (zero-shot) | 27.4 | GPT-3.5 for Grammatical Error Correction | - |