Linguistic Acceptability on CoLA Dev
Evaluation Metric
Accuracy
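As a minimal sketch of the metric, assuming the leaderboard reports plain fraction-correct over binary acceptability labels on the CoLA dev set (the function name and example values below are illustrative, not from the benchmark):

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match the gold labels."""
    if len(predictions) != len(labels):
        raise ValueError("predictions and labels must have the same length")
    correct = sum(p == g for p, g in zip(predictions, labels))
    return correct / len(labels)

# Example: binary acceptability judgements (1 = acceptable, 0 = unacceptable)
preds = [1, 0, 1, 1]
gold = [1, 0, 0, 1]
print(accuracy(preds, gold))  # → 0.75
```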
Results
Performance of each model on this benchmark
| Model | Accuracy | Paper Title | Repository |
|---|---|---|---|
| En-BERT + TDA | 88.6 | Acceptability Judgements via Examining the Topology of Attention Maps | |
| XLM-R (pre-trained) + TDA | 73 | Acceptability Judgements via Examining the Topology of Attention Maps | |
| DeBERTa (large) | 69.5 | DeBERTa: Decoding-enhanced BERT with Disentangled Attention | |
| TinyBERT-6 67M | 54 | TinyBERT: Distilling BERT for Natural Language Understanding | |
| Synthesizer (R+V) | 53.3 | Synthesizer: Rethinking Self-Attention in Transformer Models | |
| En-BERT (pre-trained) + TDA | - | Acceptability Judgements via Examining the Topology of Attention Maps |