Cross-Lingual Question Answering on MLQA
Metrics
- EM (Exact Match)
- F1 (token-level overlap)
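For context, EM and F1 for extractive QA are conventionally computed SQuAD-style: answers are normalized (lowercased, punctuation and English articles stripped), EM checks string equality after normalization, and F1 measures token overlap between the predicted and reference answers. The sketch below follows that English convention; MLQA's official evaluation adapts normalization per language, so the details here are an illustrative assumption, not taken from this page.

```python
import re
import string
from collections import Counter


def normalize(text: str) -> str:
    """Lowercase, strip punctuation and English articles, collapse whitespace (SQuAD-style)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())


def exact_match(prediction: str, reference: str) -> float:
    """EM: 1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(reference))


def f1_score(prediction: str, reference: str) -> float:
    """Token-level F1 between the normalized prediction and reference."""
    pred_tokens = normalize(prediction).split()
    ref_tokens = normalize(reference).split()
    if not pred_tokens or not ref_tokens:
        return float(pred_tokens == ref_tokens)
    common = Counter(pred_tokens) & Counter(ref_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)


# Example: articles are ignored by EM; a partially correct answer gets EM = 0 but nonzero F1.
print(exact_match("the Eiffel Tower", "Eiffel Tower"))    # 1.0 after normalization
print(f1_score("Eiffel Tower in Paris", "Eiffel Tower"))  # ~0.67
```

Corpus-level scores are the averages of these per-question values over the evaluation set (taking the maximum over multiple references when several gold answers are provided).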
Results
Performance results of various models on this benchmark
| Model | EM | F1 | Paper Title | Repository |
|---|---|---|---|---|
| ByT5 XXL | 54.9 | 71.6 | ByT5: Towards a token-free future with pre-trained byte-to-byte models | |
| Decoupled | - | 53.1 | Rethinking embedding coupling in pre-trained language models | |
| Coupled | 37.3 | 53.1 | Rethinking embedding coupling in pre-trained language models | |