Linguistic Acceptability on CoLA
Evaluation Metrics
Accuracy, MCC
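For reference, below is a minimal sketch of how these two leaderboard metrics are typically computed for CoLA-style binary acceptability predictions, using scikit-learn's `accuracy_score` and `matthews_corrcoef`. The label arrays are illustrative assumptions, not data from this leaderboard.

```python
# Minimal sketch (illustrative): computing the two CoLA leaderboard metrics
# from binary acceptability labels (1 = acceptable, 0 = unacceptable).
# The arrays below are made-up examples, not leaderboard data.
from sklearn.metrics import accuracy_score, matthews_corrcoef

y_true = [1, 1, 0, 1, 0, 0, 1, 1]   # gold acceptability judgements
y_pred = [1, 0, 0, 1, 0, 1, 1, 1]   # model predictions

# Accuracy: fraction of sentences judged correctly.
print("Accuracy:", accuracy_score(y_true, y_pred))

# MCC (Matthews correlation coefficient): the official CoLA metric,
# more robust than accuracy on CoLA's imbalanced label distribution.
print("MCC:", matthews_corrcoef(y_true, y_pred))
```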
Evaluation Results
Performance of each model on this benchmark.
| Model | Accuracy | MCC | Paper Title |
|---|---|---|---|
| En-BERT + TDA + PCA | 88.6% | - | Acceptability Judgements via Examining the Topology of Attention Maps |
| BERT+TDA | 88.2% | 0.726 | Can BERT eat RuCoLA? Topological Data Analysis to Explain |
| RoBERTa+TDA | 87.3% | 0.695 | Can BERT eat RuCoLA? Topological Data Analysis to Explain |
| deberta-v3-base+tasksource | 87.15% | - | tasksource: A Dataset Harmonization Framework for Streamlined NLP Multi-Task Learning and Evaluation |
| RoBERTa-large 355M + Entailment as Few-shot Learner | 86.4% | - | Entailment as Few-Shot Learner |
| LTG-BERT-base 98M | 82.7 | - | Not all layers are equally as important: Every Layer Counts BERT |
| ELC-BERT-base 98M | 82.6 | - | Not all layers are equally as important: Every Layer Counts BERT |
| En-BERT + TDA | 82.1% | 0.565 | Acceptability Judgements via Examining the Topology of Attention Maps |
| FNet-Large | 78% | - | FNet: Mixing Tokens with Fourier Transforms |
| LTG-BERT-small 24M | 77.6 | - | Not all layers are equally as important: Every Layer Counts BERT |
| ELC-BERT-small 24M | 76.1 | - | Not all layers are equally as important: Every Layer Counts BERT |
| T5-11B | 70.8% | - | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
| StructBERTRoBERTa ensemble | 69.2% | - | StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding |
| ALBERT | 69.1% | - | ALBERT: A Lite BERT for Self-supervised Learning of Language Representations |
| FLOATER-large | 69% | - | Learning to Encode Position for Transformer with Continuous Dynamical Model |
| XLNet (single model) | 69% | - | XLNet: Generalized Autoregressive Pretraining for Language Understanding |
| RoBERTa-large 355M (MLP quantized vector-wise, fine-tuned) | 68.6% | - | LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale |
| MT-DNN | 68.4% | - | Multi-Task Deep Neural Networks for Natural Language Understanding |
| ELECTRA | 68.2% | - | - |
| RoBERTa (ensemble) | 67.8% | - | RoBERTa: A Robustly Optimized BERT Pretraining Approach |