TweetEval: Unified Benchmark and Comparative Evaluation for Tweet Classification
Francesco Barbieri Jose Camacho-Collados Leonardo Neves Luis Espinosa-Anke

Abstract
The experimental landscape in natural language processing for social media is too fragmented. Each year, new shared tasks and datasets are proposed, ranging from classics like sentiment analysis to irony detection or emoji prediction. Therefore, it is unclear what the current state of the art is, as there is no standardized evaluation protocol, nor a strong set of baselines trained on such domain-specific data. In this paper, we propose a new evaluation framework (TweetEval) consisting of seven heterogeneous Twitter-specific classification tasks. We also provide a strong set of baselines as a starting point, and compare different language modeling pre-training strategies. Our initial experiments show the effectiveness of starting from existing pre-trained generic language models and continuing to train them on Twitter corpora.
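Tweet classification tasks like those in this benchmark are commonly scored with macro-averaged F1 (averaging per-class F1 so minority classes count equally). As an illustration only, and not the benchmark's official evaluation script, a minimal stdlib-only sketch:

```python
# Macro-averaged F1: compute F1 per class, then take the unweighted mean.
# A sketch for illustration, not the official TweetEval scorer.
def macro_f1(y_true, y_pred):
    """Average per-class F1 over all classes in the gold labels or predictions."""
    labels = sorted(set(y_true) | set(y_pred))
    per_class = []
    for label in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        per_class.append(2 * precision * recall / (precision + recall)
                         if precision + recall else 0.0)
    return sum(per_class) / len(per_class)

# Toy binary example (e.g. 0 = not ironic, 1 = ironic).
gold = [0, 0, 1, 1]
pred = [0, 1, 1, 1]
print(round(macro_f1(gold, pred), 4))  # prints 0.7333
```

Because each class contributes equally to the average, a classifier that ignores a rare class is penalized more heavily than under accuracy or micro-F1.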
Benchmarks
Benchmark: sentiment-analysis-on-tweeteval

| Model | ALL | Emoji | Emotion | Hate | Irony | Offensive | Sentiment | Stance |
|---|---|---|---|---|---|---|---|---|
| FastText | 58.1 | 25.8 | 65.2 | 50.6 | 63.1 | 73.4 | 62.9 | 65.4 |
| RoBERTa-Base | 61.3 | 30.9 | 76.1 | 46.6 | 59.7 | 79.5 | 71.3 | 68.0 |
| SVM | 53.5 | 29.3 | 64.7 | 36.7 | 61.7 | 52.3 | 62.9 | 67.3 |
| RoBERTa-Twitter | 61.0 | 29.3 | 72.0 | 49.9 | 65.4 | 77.1 | 69.1 | 66.7 |
| LSTM | 56.5 | 24.7 | 66.0 | 52.6 | 62.8 | 71.7 | 58.3 | 59.4 |
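The ALL column appears to track the unweighted mean of the seven per-task scores (an assumption about how it is derived; small discrepancies on some rows are consistent with the published figures being rounded). A quick check against two rows of the table:

```python
# Recompute the aggregate "ALL" score as the unweighted mean of the seven
# per-task scores (Emoji..Stance). This is an assumed derivation; minor
# mismatches with the table are consistent with rounding.
def overall_score(task_scores):
    return sum(task_scores) / len(task_scores)

lstm = [24.7, 66.0, 52.6, 62.8, 71.7, 58.3, 59.4]
fasttext = [25.8, 65.2, 50.6, 63.1, 73.4, 62.9, 65.4]

print(round(overall_score(lstm), 1))      # prints 56.5, matching the table
print(round(overall_score(fasttext), 1))  # prints 58.1, matching the table
```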