Efficient and Interpretable Grammatical Error Correction with Mixture of Experts

Muhammad Reza Qorib, Alham Fikri Aji, Hwee Tou Ng

Abstract

Error type information has been widely used to improve the performance of grammatical error correction (GEC) models, whether for generating corrections, re-ranking them, or combining GEC models. Combining GEC models that have complementary strengths in correcting different error types is very effective in producing better corrections. However, system combination incurs a high computational cost due to the need to run inference on the base systems before running the combination method itself. Therefore, it would be more efficient to have a single model with multiple sub-networks that specialize in correcting different error types. In this paper, we propose a mixture-of-experts model, MoECE, for grammatical error correction. Our model successfully achieves the performance of T5-XL with three times fewer effective parameters. Additionally, our model produces interpretable corrections by also identifying the error type during inference.
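To make the idea concrete, below is a minimal sketch of a mixture-of-experts feed-forward layer with top-1 routing, the general mechanism the abstract describes: each token is routed to one specialized expert, so only a fraction of the model's parameters is active per input ("effective parameters"), and the chosen expert index doubles as an interpretable label. All names, shapes, and the top-1 gating choice here are illustrative assumptions, not the paper's actual MoECE implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class MoELayer:
    """Toy mixture-of-experts layer with top-1 gating (illustrative only).

    Each expert is a small linear map; a learned router scores the
    experts per token and the top-scoring expert alone processes that
    token, so compute scales with one expert, not all of them.
    """
    def __init__(self, d_model, n_experts):
        self.router = rng.standard_normal((d_model, n_experts)) * 0.02
        self.experts = [rng.standard_normal((d_model, d_model)) * 0.02
                        for _ in range(n_experts)]

    def forward(self, x):
        # x: (tokens, d_model) -> route each token to its top-1 expert
        gate = softmax(x @ self.router)   # (tokens, n_experts) routing probs
        choice = gate.argmax(axis=-1)     # chosen expert index per token
        out = np.empty_like(x)
        for i, e in enumerate(choice):
            # only one expert's weights are used for this token
            out[i] = gate[i, e] * (x[i] @ self.experts[e])
        return out, choice                # choice acts as an interpretable tag

layer = MoELayer(d_model=8, n_experts=4)
y, expert_ids = layer.forward(rng.standard_normal((5, 8)))
print(y.shape, expert_ids.shape)  # (5, 8) (5,)
```

In MoECE the analogous routing decision is tied to grammatical error types, which is what makes the corrections interpretable: the expert that fires indicates which kind of error the model believes it is fixing.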

Code Repositories

nusnlp/moece (official, PyTorch)

Benchmarks

| Benchmark | Methodology | Metrics |
| --- | --- | --- |
| grammatical-error-correction-on-bea-2019-test | MoECE | F0.5: 74.07 |
| grammatical-error-correction-on-conll-2014 | MoECE | F0.5: 67.79, Precision: 74.29, Recall: 50.21 |
