Handwritten Text Recognition on Belfort
Metrics
- CER (%)
- WER (%)
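Both metrics are normalized edit distances: CER counts character-level edits and WER word-level edits, each divided by the reference length and reported as a percentage. A minimal sketch (function names are illustrative, not from any particular library):

```python
def levenshtein(ref, hyp):
    """Edit distance between two sequences via dynamic programming."""
    prev = list(range(len(hyp) + 1))
    for i in range(1, len(ref) + 1):
        curr = [i] + [0] * len(hyp)
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            curr[j] = min(prev[j] + 1,        # deletion
                          curr[j - 1] + 1,    # insertion
                          prev[j - 1] + cost) # substitution
        prev = curr
    return prev[-1]

def cer(ref, hyp):
    """Character error rate: character edits / reference characters, in %."""
    return 100 * levenshtein(list(ref), list(hyp)) / len(ref)

def wer(ref, hyp):
    """Word error rate: word edits / reference words, in %."""
    return 100 * levenshtein(ref.split(), hyp.split()) / len(ref.split())
```

For example, `cer("abcd", "abed")` is 25.0 (one substitution over four reference characters). Because insertions can outnumber reference tokens, both rates can exceed 100%.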
Results
Performance of various models on this benchmark
| Model | CER (%) | WER (%) | Paper | Repository |
|---|---|---|---|---|
| PyLaia (human transcriptions + random split) | 10.54 | 28.11 | Handwritten Text Recognition from Crowdsourced Annotations | - |
| PyLaia (human transcriptions + agreement-based split) | 5.57 | 19.12 | Handwritten Text Recognition from Crowdsourced Annotations | - |
| PyLaia (rover consensus + agreement-based split) | 4.95 | 17.08 | Handwritten Text Recognition from Crowdsourced Annotations | - |
| PyLaia (all transcriptions + agreement-based split) | 4.34 | 15.14 | Handwritten Text Recognition from Crowdsourced Annotations | - |