Why Attention? Analyze BiLSTM Deficiency and Its Remedies in the Case of NER
Peng-Hsuan Li; Tsu-Jui Fu; Wei-Yun Ma

Abstract
BiLSTM has been prevalently used as a core module for NER in a sequence-labeling setup. State-of-the-art approaches use BiLSTM with additional resources such as gazetteers, language-modeling, or multi-task supervision to further improve NER. This paper instead takes a step back and focuses on analyzing problems of BiLSTM itself and how exactly self-attention can bring improvements. We formally show the limitation of (CRF-)BiLSTM in modeling cross-context patterns for each word -- the XOR limitation. Then, we show that two types of simple cross-structures -- self-attention and Cross-BiLSTM -- can effectively remedy the problem. We test the practical impacts of the deficiency on real-world NER datasets, OntoNotes 5.0 and WNUT 2017, with clear and consistent improvements over the baseline, up to 8.7% on some of the multi-token entity mentions. We give in-depth analyses of the improvements across several aspects of NER, especially the identification of multi-token mentions. This study should lay a sound foundation for future improvements on sequence-labeling NER. (Source code: https://github.com/jacobvsdanniel/cross-ner)
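For concreteness, below is a minimal PyTorch sketch of the two cross-structures the abstract refers to: a self-attention layer over BiLSTM states, and a two-layer Cross-BiLSTM in which the second layer's directions each read the full output of the first. This is an illustration of the idea only, not the paper's Att-BiLSTM-CNN or Cross-BiLSTM-CNN models (which also use CNN character features and, in the baseline, a CRF); the class names, dimensions, and head count are assumptions.

```python
# Illustrative sketch (not the authors' exact architecture): two simple
# "cross-structures" that let each token's representation depend jointly
# on its left and right contexts, addressing the XOR limitation of a
# single per-token combination of forward and backward BiLSTM states.
import torch
import torch.nn as nn


class AttBiLSTMTagger(nn.Module):
    """BiLSTM token encoder followed by multi-head self-attention."""

    def __init__(self, vocab_size, num_tags, emb_dim=100, hidden_dim=200, heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Forward and backward hidden states are concatenated per token.
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        # Self-attention lets every token attend over the whole sequence,
        # mixing left and right context non-linearly for each word.
        self.attn = nn.MultiheadAttention(2 * hidden_dim, heads, batch_first=True)
        self.out = nn.Linear(4 * hidden_dim, num_tags)  # BiLSTM states + attention output

    def forward(self, token_ids):
        x = self.embed(token_ids)                    # (batch, seq, emb)
        h, _ = self.bilstm(x)                        # (batch, seq, 2*hidden)
        a, _ = self.attn(h, h, h)                    # (batch, seq, 2*hidden)
        return self.out(torch.cat([h, a], dim=-1))   # per-token tag scores


class CrossBiLSTMTagger(nn.Module):
    """Two stacked bidirectional layers; the second layer reads the
    concatenated forward/backward states of the first, so each of its
    directions already sees cross-context information."""

    def __init__(self, vocab_size, num_tags, emb_dim=100, hidden_dim=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.layer1 = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.layer2 = nn.LSTM(2 * hidden_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, num_tags)

    def forward(self, token_ids):
        h1, _ = self.layer1(self.embed(token_ids))
        h2, _ = self.layer2(h1)
        return self.out(h2)


# Toy usage: score tags for a batch of two 5-token sentences.
tokens = torch.randint(0, 1000, (2, 5))
print(AttBiLSTMTagger(vocab_size=1000, num_tags=9)(tokens).shape)    # (2, 5, 9)
print(CrossBiLSTMTagger(vocab_size=1000, num_tags=9)(tokens).shape)  # (2, 5, 9)
```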
Benchmarks
| Benchmark | Model | F1 | Precision | Recall |
|---|---|---|---|---|
| Named Entity Recognition (NER) on OntoNotes v5 | Att-BiLSTM-CNN | 88.4 | 88.71 | 88.11 |
| Named Entity Recognition on WNUT 2017 | Cross-BiLSTM-CNN | 42.85 | 58.28 | 33.92 |