Attending to Characters in Neural Sequence Labeling Models
Marek Rei, Gamal K.O. Crichton, Sampo Pyysalo

Abstract
Sequence labeling architectures use word embeddings for capturing similarity, but suffer when handling previously unseen or rare words. We investigate character-level extensions to such models and propose a novel architecture for combining alternative word representations. By using an attention mechanism, the model is able to dynamically decide how much information to use from the word-level or character-level component. We evaluate different architectures on a range of sequence labeling datasets, and character-level extensions are found to improve performance on every benchmark. In addition, the proposed attention-based architecture delivers the best results even with a smaller number of trainable parameters.
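The combination mechanism described in the abstract can be sketched as a learned gate that interpolates between a word embedding and a character-composed embedding. The sketch below is a minimal illustration of that idea, not the authors' exact architecture: the layer sizes, the character composition via a bidirectional LSTM, and all names (`GatedCharWordEmbedding`, `gate_x`, `gate_m`, `gate_out`) are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class GatedCharWordEmbedding(nn.Module):
    """Combine a word-level embedding x with a character-derived
    embedding m through a learned per-dimension gate z, so the model
    can lean on characters for rare or unseen words:
        x_tilde = z * x + (1 - z) * m
    """

    def __init__(self, vocab_size, char_vocab_size, dim):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, dim)
        self.char_emb = nn.Embedding(char_vocab_size, dim)
        # Bidirectional LSTM over the characters of one word; the two
        # final hidden states are projected back down to `dim`.
        self.char_lstm = nn.LSTM(dim, dim, bidirectional=True, batch_first=True)
        self.char_proj = nn.Linear(2 * dim, dim)
        # Small gating network producing z in (0, 1) for each dimension.
        self.gate_x = nn.Linear(dim, dim, bias=False)
        self.gate_m = nn.Linear(dim, dim, bias=False)
        self.gate_out = nn.Linear(dim, dim, bias=False)

    def forward(self, word_ids, char_ids):
        # word_ids: (batch,) word indices; char_ids: (batch, word_len)
        x = self.word_emb(word_ids)  # (batch, dim)
        _, (h_n, _) = self.char_lstm(self.char_emb(char_ids))
        # h_n: (2, batch, dim) -> concatenate forward/backward final states.
        m = torch.tanh(self.char_proj(torch.cat([h_n[0], h_n[1]], dim=-1)))
        # Gate decides, per dimension, how much to trust the word embedding.
        z = torch.sigmoid(self.gate_out(torch.tanh(self.gate_x(x) + self.gate_m(m))))
        return z * x + (1.0 - z) * m  # (batch, dim)
```

A quick shape check with arbitrary sizes:

```python
model = GatedCharWordEmbedding(vocab_size=10000, char_vocab_size=100, dim=64)
emb = model(torch.tensor([5, 42]), torch.randint(0, 100, (2, 8)))
print(emb.shape)  # torch.Size([2, 64])
```

The per-dimension sigmoid gate is what lets the model fall back on character evidence for out-of-vocabulary words while still exploiting well-trained word embeddings for frequent ones.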
Benchmarks
| Benchmark | Model | Metric | Score |
|---|---|---|---|
| Grammatical Error Detection on FCE | Bi-LSTM + charattn | F0.5 | 41.88 |
| Part-of-Speech Tagging on Penn Treebank | Bi-LSTM + charattn | Accuracy | 97.27 |
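For reference, the F0.5 score used on the FCE benchmark is the F-measure with beta = 0.5, which puts twice as much weight on precision as on recall; this choice is common in error detection, where falsely flagging correct text is typically considered more costly than missing an error. With precision $P$ and recall $R$:

$$F_{0.5} = \frac{(1 + 0.5^2)\, P \cdot R}{0.5^2 \cdot P + R}$$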