End-to-end Deep Reinforcement Learning Based Coreference Resolution

Hongliang Fei, Xu Li, Dingcheng Li, Ping Li

Abstract

Recent neural network models have significantly advanced the task of coreference resolution. However, current neural coreference models are usually trained with heuristic loss functions that are computed over a sequence of local decisions. In this paper, we introduce an end-to-end reinforcement learning based coreference resolution model to directly optimize coreference evaluation metrics. Specifically, we modify the state-of-the-art higher-order mention ranking approach in Lee et al. (2018) to a reinforced policy gradient model by incorporating the reward associated with a sequence of coreference linking actions. Furthermore, we introduce maximum entropy regularization for adequate exploration to prevent the model from prematurely converging to a bad local optimum. Our proposed model achieves new state-of-the-art performance on the English OntoNotes v5.0 benchmark.
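
To make the training objective concrete, below is a minimal PyTorch sketch of a reward-weighted policy-gradient loss with maximum-entropy regularization of the kind the abstract describes. It is an illustration, not the authors' implementation: the function names (`sample_actions`, `reinforce_loss`), the score-matrix shape, and the dummy scalar reward are all assumptions made for this example. In the paper, the trajectory reward is derived from coreference evaluation metrics on the clusters induced by the sampled linking actions.

```python
# Illustrative sketch only; names, shapes, and the dummy reward are assumptions.
import torch
import torch.nn.functional as F


def sample_actions(antecedent_scores: torch.Tensor) -> torch.Tensor:
    """Sample one linking action per mention from the current policy.

    antecedent_scores: [num_mentions, num_candidates] unnormalized scores,
    where each row scores the candidate antecedents of one mention
    (including a dummy "no antecedent" option).
    """
    probs = F.softmax(antecedent_scores, dim=-1)
    return torch.multinomial(probs, num_samples=1).squeeze(-1)


def reinforce_loss(antecedent_scores: torch.Tensor,
                   actions: torch.Tensor,
                   reward: float,
                   entropy_weight: float = 1e-4) -> torch.Tensor:
    """REINFORCE-style loss with a maximum-entropy regularizer.

    reward: a scalar for the whole trajectory of linking actions; in the
    paper it comes from coreference metrics computed on the clusters that
    the sampled actions induce.
    """
    log_probs = F.log_softmax(antecedent_scores, dim=-1)
    chosen = log_probs.gather(-1, actions.unsqueeze(-1)).squeeze(-1)

    # Policy gradient: weight the log-likelihood of the sampled trajectory
    # by its reward (a variance-reducing baseline would normally be
    # subtracted; omitted here for brevity).
    pg_loss = -reward * chosen.sum()

    # Entropy bonus: penalize overconfident antecedent distributions so
    # the policy keeps exploring instead of prematurely converging to a
    # bad local optimum.
    entropy = -(log_probs.exp() * log_probs).sum()
    return pg_loss - entropy_weight * entropy


if __name__ == "__main__":
    torch.manual_seed(0)
    scores = torch.randn(5, 4, requires_grad=True)  # 5 mentions, 4 candidates
    acts = sample_actions(scores)
    # Dummy scalar reward standing in for, e.g., the averaged coreference F1.
    loss = reinforce_loss(scores, acts, reward=0.5)
    loss.backward()
```

The sketch only shows the shape of the objective; a full training loop would batch over documents and compute the reward from the predicted clusters rather than passing a fixed scalar.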

Benchmarks

| Benchmark | Methodology | Metrics |
| --- | --- | --- |
| coreference-resolution-on-conll-2012 | Reinforced model + ELMo | Avg F1: 73.8 |
| coreference-resolution-on-ontonotes | Reinforced + ELMo | F1: 73.8 |
