Natural Language Inference on MultiNLI

Evaluation Metrics

Matched — accuracy on test examples drawn from the same genres seen during training
Mismatched — accuracy on test examples drawn from genres not seen during training
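Both metrics are plain classification accuracy over the three NLI labels; they differ only in which evaluation split they are computed on. A minimal sketch (the label lists below are hypothetical, for illustration only):

```python
def accuracy(predictions, gold):
    """Fraction of examples where the predicted label equals the gold label."""
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)

# Hypothetical gold labels and model predictions for each split.
matched_gold = ["entailment", "neutral", "contradiction", "entailment"]
matched_pred = ["entailment", "neutral", "neutral", "entailment"]

mismatched_gold = ["neutral", "contradiction", "entailment"]
mismatched_pred = ["neutral", "contradiction", "contradiction"]

print(f"Matched accuracy:    {accuracy(matched_pred, matched_gold):.1%}")     # 75.0%
print(f"Mismatched accuracy: {accuracy(mismatched_pred, mismatched_gold):.1%}")  # 66.7%
```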

Benchmark Results

Performance of each model on this benchmark

| Model | Matched | Mismatched | Paper Title | Repository |
| --- | --- | --- | --- | --- |
| UnitedSynT5 (3B) | 92.6 | - | First Train to Generate, then Generate to Train: UnitedSynT5 for Few-Shot NLI | - |
| Turing NLR v5 XXL 5.4B (fine-tuned) | 92.6 | 92.4 | - | - |
| T5-XXL 11B (fine-tuned) | 92.0 | - | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | - |
| T5 | 92.0 | 91.7 | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | - |
| T5-3B | 91.4 | 91.2 | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | - |
| ALBERT | 91.3 | - | ALBERT: A Lite BERT for Self-supervised Learning of Language Representations | - |
| Adv-RoBERTa ensemble | 91.1 | 90.7 | StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding | - |
| DeBERTa (large) | 91.1 | 91.1 | DeBERTa: Decoding-enhanced BERT with Disentangled Attention | - |
| RoBERTa | 90.8 | - | RoBERTa: A Robustly Optimized BERT Pretraining Approach | - |
| XLNet (single model) | 90.8 | - | XLNet: Generalized Autoregressive Pretraining for Language Understanding | - |
| RoBERTa-large 355M (MLP quantized vector-wise, fine-tuned) | 90.2 | - | LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale | - |
| T5-Large | 89.9 | - | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | - |
| PSQ (Chen et al., 2020) | 89.9 | - | A Statistical Framework for Low-bitwidth Training of Deep Neural Networks | - |
| UnitedSynT5 (335M) | 89.8 | - | First Train to Generate, then Generate to Train: UnitedSynT5 for Few-Shot NLI | - |
| ERNIE 2.0 Large | 88.7 | 88.8 | ERNIE 2.0: A Continual Pre-training Framework for Language Understanding | - |
| SpanBERT | 88.1 | - | SpanBERT: Improving Pre-training by Representing and Predicting Spans | - |
| ASA + RoBERTa | 88.0 | - | Adversarial Self-Attention for Language Understanding | - |
| BERT-Large | 88.0 | 88.0 | FNet: Mixing Tokens with Fourier Transforms | - |
| MT-DNN-ensemble | 87.9 | 87.4 | Improving Multi-Task Deep Neural Networks via Knowledge Distillation for Natural Language Understanding | - |
| Q-BERT (Shen et al., 2020) | 87.8 | - | Q-BERT: Hessian Based Ultra Low Precision Quantization of BERT | - |