Enwei Zhu; Jinpeng Li

Abstract
Neural named entity recognition (NER) models can easily become over-confident, which degrades both performance and calibration. Inspired by label smoothing, and motivated by the ambiguity of boundary annotation in NER practice, we propose boundary smoothing as a regularization technique for span-based neural NER models. It re-assigns a portion of the entity probability from each annotated span to the surrounding spans. Built on a simple but strong baseline, our model achieves results better than or competitive with previous state-of-the-art systems on eight well-known NER benchmarks. Further empirical analysis suggests that boundary smoothing effectively mitigates over-confidence, improves model calibration, and leads to flatter neural minima and smoother loss landscapes.
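The re-assignment described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the choice of shifting one boundary at a time by up to `D` positions, and the equal split of the smoothed mass are assumptions made for clarity.

```python
import numpy as np

def boundary_smoothed_targets(seq_len, gold_spans, eps=0.1, D=1):
    """Build soft span-level targets for one sentence (illustrative sketch).

    targets[s, e] is the target probability that the span covering tokens
    s..e (inclusive) is an entity. Each gold span keeps 1 - eps of its
    probability mass; the remaining eps is shared equally among the valid
    surrounding spans whose start or end boundary is shifted by at most D.
    """
    targets = np.zeros((seq_len, seq_len))
    for (s, e) in gold_spans:
        # Collect surrounding spans: move one boundary by 1..D tokens,
        # keeping only spans that stay inside the sentence.
        neighbors = []
        for d in range(1, D + 1):
            for (ns, ne) in [(s - d, e), (s + d, e), (s, e - d), (s, e + d)]:
                if 0 <= ns <= ne < seq_len:
                    neighbors.append((ns, ne))
        targets[s, e] += 1.0 - eps          # annotated span keeps most mass
        if neighbors:
            share = eps / len(neighbors)    # smoothed mass spread evenly
            for (ns, ne) in neighbors:
                targets[ns, ne] += share
    return targets
```

Training then minimizes a cross-entropy-style loss between the model's span scores and these soft targets instead of hard 0/1 labels, which is what discourages the model from assigning near-certain probability to a single boundary choice.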
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| Chinese NER on MSRA | Baseline + BS | F1: 96.26 |
| Chinese NER on OntoNotes | Baseline + BS | F1: 82.83 |
| Chinese NER on Resume | Baseline + BS | F1: 96.66 |
| Chinese NER on Weibo NER | Baseline + BS | F1: 72.66 |
| NER on CoNLL 2003 | Baseline + BS | F1: 93.65 |
| NER on OntoNotes v5 | Baseline + BS | F1: 91.74 |
| Nested NER on ACE 2004 | Baseline + BS | F1: 87.98 |
| Nested NER on ACE 2005 | Baseline + BS | F1: 87.15 |