Exploiting BERT for End-to-End Aspect-based Sentiment Analysis

Xin Li, Lidong Bing, Wenxuan Zhang, Wai Lam

Abstract
In this paper, we investigate the modeling power of contextualized embeddings from pre-trained language models, e.g., BERT, on the E2E-ABSA task. Specifically, we build a series of simple yet insightful neural baselines for E2E-ABSA. The experimental results show that, even with a simple linear classification layer, our BERT-based architecture outperforms state-of-the-art works. In addition, we standardize the comparative study by consistently using a hold-out validation set for model selection, a practice largely ignored in previous work. Our work can therefore serve as a BERT-based benchmark for E2E-ABSA.
Code Repositories

BERT-E2E-ABSA (official implementation): https://github.com/lixin4ever/BERT-E2E-ABSA
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| aspect-based-sentiment-analysis-on-semeval-5 | BERT-E2E-ABSA | F1: 61.12 |
| aspect-based-sentiment-analysis-on-semeval-6 | BERT-E2E-ABSA | F1: 61.12 |
| sentiment-analysis-on-semeval-2014-task-4 | BERT-E2E-ABSA | F1: 61.12 |