Toxic Comment Classification on Civil Comments

Evaluation Metric

AUROC
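AUROC (area under the ROC curve) can be read as the probability that a randomly chosen toxic comment receives a higher score than a randomly chosen non-toxic one, with ties counted as half. As a minimal illustration (the `auroc` helper below is a hypothetical sketch of this pairwise definition, not code from any listed paper or repository):

```python
def auroc(labels, scores):
    """Pairwise AUROC: fraction of (positive, negative) pairs where the
    positive example is scored higher; ties count as 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Example: one misranked pair out of four -> AUROC 0.75
print(auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

This O(P·N) formulation is fine for illustration; production libraries compute the same quantity from the ROC curve in O(n log n).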

Evaluation Results

Performance of each model on this benchmark.

| Model | AUROC | Paper Title |
| --- | --- | --- |
| RoBERTa Focal Loss | 0.9818 | A benchmark for toxic comment classification on Civil Comments dataset |
| RoBERTa BCE | 0.9813 | A benchmark for toxic comment classification on Civil Comments dataset |
| DistilBERT | 0.9804 | A benchmark for toxic comment classification on Civil Comments dataset |
| HateBERT | 0.9791 | A benchmark for toxic comment classification on Civil Comments dataset |
| BERTweet | 0.979 | A benchmark for toxic comment classification on Civil Comments dataset |
| AlBERT | 0.979 | A benchmark for toxic comment classification on Civil Comments dataset |
| ResNet + RoBERTa finetune | 0.97 | PyTorch Frame: A Modular Framework for Multi-Modal Tabular Learning |
| Unfreeze Glove ResNet 44 | 0.966 | A benchmark for toxic comment classification on Civil Comments dataset |
| Unfreeze Glove ResNet 56 | 0.9639 | A benchmark for toxic comment classification on Civil Comments dataset |
| Compact Convolutional Transformer (CCT) | 0.9526 | A benchmark for toxic comment classification on Civil Comments dataset |
| Trompt + OpenAI embedding | 0.947 | PyTorch Frame: A Modular Framework for Multi-Modal Tabular Learning |
| ResNet + OpenAI embedding | 0.945 | PyTorch Frame: A Modular Framework for Multi-Modal Tabular Learning |
| Trompt + RoBERTa embedding | 0.885 | PyTorch Frame: A Modular Framework for Multi-Modal Tabular Learning |
| ResNet + RoBERTa embedding | 0.882 | PyTorch Frame: A Modular Framework for Multi-Modal Tabular Learning |
| LightGBM + RoBERTa embedding | 0.865 | PyTorch Frame: A Modular Framework for Multi-Modal Tabular Learning |
| PaLM 2 (few-shot, k=10) | 0.8535 | PaLM 2 Technical Report |
| PaLM 2 (zero-shot) | 0.7596 | PaLM 2 Technical Report |
| BiLSTM | - | A benchmark for toxic comment classification on Civil Comments dataset |
| BiGRU | - | A benchmark for toxic comment classification on Civil Comments dataset |
| Freeze Glove ResNet 44 | - | A benchmark for toxic comment classification on Civil Comments dataset |
Source: Toxic Comment Classification on Civil Comments SOTA leaderboard, HyperAI超神经.