Handwritten Text Recognition on Belfort
Evaluation Metrics
- CER (%) — Character Error Rate
- WER (%) — Word Error Rate
Evaluation Results
Performance of each model on this benchmark:
| Model | CER (%) | WER (%) | Paper Title | Repository |
|---|---|---|---|---|
| PyLaia (human transcriptions + random split) | 10.54 | 28.11 | Handwritten Text Recognition from Crowdsourced Annotations | - |
| PyLaia (human transcriptions + agreement-based split) | 5.57 | 19.12 | Handwritten Text Recognition from Crowdsourced Annotations | - |
| PyLaia (rover consensus + agreement-based split) | 4.95 | 17.08 | Handwritten Text Recognition from Crowdsourced Annotations | - |
| PyLaia (all transcriptions + agreement-based split) | 4.34 | 15.14 | Handwritten Text Recognition from Crowdsourced Annotations | - |
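Both metrics reported above are normalized edit distances: CER counts character-level insertions, deletions, and substitutions against the reference transcription, while WER does the same at the word level. A minimal sketch of how these are typically computed (the function names here are illustrative, not from the benchmark's tooling):

```python
def levenshtein(ref, hyp):
    """Edit distance (insertions, deletions, substitutions) between two sequences."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (r != h)))  # substitution
        prev = curr
    return prev[-1]

def cer(ref, hyp):
    """Character Error Rate (%): char-level edit distance over reference length."""
    return 100 * levenshtein(ref, hyp) / len(ref)

def wer(ref, hyp):
    """Word Error Rate (%): word-level edit distance over reference word count."""
    return 100 * levenshtein(ref.split(), hyp.split()) / len(ref.split())
```

For example, `cer("hello", "hxllo")` is 20.0 (one substitution over five reference characters). Note that WER is usually well above CER, as in the table: a single wrong character makes the whole word count as an error.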