Word Sense Disambiguation on RUSSE
Evaluation Metric
Accuracy
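The scores below are plain accuracy: the fraction of test examples where the model's predicted label matches the gold label. A minimal sketch of this metric (function and variable names are illustrative, not taken from the benchmark's evaluation code):

```python
def accuracy(gold, predicted):
    """Return the share of positions where predicted == gold."""
    if len(gold) != len(predicted):
        raise ValueError("gold and predicted must be the same length")
    correct = sum(g == p for g, p in zip(gold, predicted))
    return correct / len(gold)
```

For example, with three correct predictions out of four, `accuracy` returns 0.75, which is how a leaderboard entry such as 0.735 should be read: 73.5% of test examples labeled correctly.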
Evaluation Results
Performance of each model on this benchmark:
| Model | Accuracy | Paper Title | Repository |
|---|---|---|---|
| Human Benchmark | 0.805 | RussianSuperGLUE: A Russian Language Understanding Evaluation Benchmark | - |
| ruT5-large-finetune | 0.735 | - | - |
| RuBERT conversational | 0.729 | - | - |
| RuBERT plain | 0.726 | - | - |
| ruRoberta-large finetune | 0.715 | - | - |
| ruBert-base finetune | 0.706 | - | - |
| Multilingual Bert | 0.69 | - | - |
| ruBert-large finetune | 0.682 | - | - |
| ruT5-base-finetune | 0.682 | - | - |
| SBERT_Large_mt_ru_finetuning | 0.657 | - | - |
| SBERT_Large | 0.654 | - | - |
| RuGPT3Large | 0.647 | - | - |
| RuGPT3Medium | 0.642 | - | - |
| MT5 Large | 0.633 | - | - |
| heuristic majority | 0.595 | Unreasonable Effectiveness of Rule-Based Heuristics in Solving Russian SuperGLUE Tasks | - |
| majority_class | 0.587 | Unreasonable Effectiveness of Rule-Based Heuristics in Solving Russian SuperGLUE Tasks | - |
| YaLM 1.0B few-shot | 0.587 | - | - |
| Golden Transformer | 0.587 | - | - |
| Baseline TF-IDF1.1 | 0.57 | RussianSuperGLUE: A Russian Language Understanding Evaluation Benchmark | - |
| RuGPT3Small | 0.57 | - | - |