Hongliang Fei, Xu Li, Dingcheng Li, Ping Li

Abstract
Recent neural network models have significantly advanced the task of coreference resolution. However, current neural coreference models are usually trained with heuristic loss functions that are computed over a sequence of local decisions. In this paper, we introduce an end-to-end reinforcement-learning-based coreference resolution model to directly optimize coreference evaluation metrics. Specifically, we modify the state-of-the-art higher-order mention ranking approach of Lee et al. (2018) into a reinforced policy gradient model by incorporating the reward associated with a sequence of coreference linking actions. Furthermore, we introduce maximum entropy regularization for adequate exploration to prevent the model from prematurely converging to a bad local optimum. Our proposed model achieves new state-of-the-art performance on the English OntoNotes v5.0 benchmark.
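Since only the abstract is available here, the following is a minimal, hedged sketch of the core training idea it describes: a REINFORCE-style policy-gradient loss over a sequence of antecedent-linking actions, combined with a maximum-entropy regularizer to encourage exploration. The function name, tensor shapes, and the `entropy_coef` value are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch (not the authors' code): an entropy-regularized
# policy-gradient loss for a sequence of coreference linking actions.
import torch
import torch.nn.functional as F

def reinforce_coref_loss(action_logits, sampled_actions, reward, entropy_coef=0.01):
    """Entropy-regularized REINFORCE loss (assumed shapes, for illustration).

    action_logits:   (num_mentions, num_candidates) antecedent scores per mention
    sampled_actions: (num_mentions,) index of the sampled antecedent per mention
    reward:          scalar sequence-level reward, e.g. from a coreference metric
    """
    log_probs = F.log_softmax(action_logits, dim=-1)
    probs = log_probs.exp()

    # Log-probability of the sampled linking action for each mention.
    chosen_log_probs = log_probs.gather(1, sampled_actions.unsqueeze(1)).squeeze(1)

    # Maximum-entropy regularizer: average entropy of the action distributions.
    entropy = -(probs * log_probs).sum(dim=-1).mean()

    # REINFORCE: scale the negative log-likelihood of the action sequence by the reward,
    # and subtract the entropy bonus to discourage premature convergence.
    return -(reward * chosen_log_probs.sum()) - entropy_coef * entropy

# Usage example with random scores for 5 mentions and 4 candidate antecedents each.
logits = torch.randn(5, 4, requires_grad=True)
actions = torch.multinomial(F.softmax(logits, dim=-1), 1).squeeze(1)
loss = reinforce_coref_loss(logits, actions, reward=0.738)
loss.backward()
```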
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| coreference-resolution-on-conll-2012 | Reinforced model + ELMo | Avg F1: 73.8 |
| coreference-resolution-on-ontonotes | Reinforced model + ELMo | F1: 73.8 |