More Embeddings, Better Sequence Labelers?

Xinyu Wang Yong Jiang Nguyen Bach Tao Wang Zhongqiang Huang Fei Huang Kewei Tu

Abstract

Recent work proposes a family of contextual embeddings that significantly improves the accuracy of sequence labelers over non-contextual embeddings. However, there is no definite conclusion on whether we can build better sequence labelers by combining different kinds of embeddings in various settings. In this paper, we conduct extensive experiments on 3 tasks over 18 datasets and 8 languages to study the accuracy of sequence labeling with various embedding concatenations and make three observations: (1) concatenating more embedding variants leads to better accuracy in rich-resource and cross-domain settings and in some conditions of low-resource settings; (2) concatenating additional contextual sub-word embeddings with contextual character embeddings hurts accuracy in extremely low-resource settings; (3) as a complement to (1), concatenating additional similar contextual embeddings does not lead to further improvements. We hope these conclusions help practitioners build stronger sequence labelers in various settings.
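The core operation studied in the abstract is feature-wise concatenation of several per-token embedding sequences (e.g. contextual character embeddings and contextual sub-word embeddings). A minimal sketch in plain Python, assuming each embedding type produces one vector per token (function name and toy dimensions are illustrative, not from the authors' implementation):

```python
def concat_embeddings(*embedding_seqs):
    """Concatenate several per-token embedding sequences feature-wise.

    Each argument is a list of per-token vectors (one list per embedding
    type); all sequences must align on the same token sequence.
    """
    n_tokens = len(embedding_seqs[0])
    assert all(len(seq) == n_tokens for seq in embedding_seqs), \
        "all embedding types must cover the same tokens"
    # For each token, append the vectors from every embedding type end to end.
    return [
        [x for seq in embedding_seqs for x in seq[i]]
        for i in range(n_tokens)
    ]

# Two toy embedding types for a 2-token sentence:
char_emb = [[0.1, 0.2], [0.3, 0.4]]   # 2-dim "character" embeddings
subword_emb = [[1.0], [2.0]]          # 1-dim "sub-word" embeddings

combined = concat_embeddings(char_emb, subword_emb)
# Each token now carries a single 3-dim vector: [0.1, 0.2, 1.0], etc.
```

In a typical sequence-labeling pipeline, the concatenated vectors would then be fed into a labeler (e.g. a BiLSTM-CRF), which is what makes the choice and number of concatenated embedding types matter for accuracy.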

Benchmarks

Benchmark                        Methodology         Metric
chunking-on-conll-2003-english   Wang et al., 2020   F1: 92.0
chunking-on-conll-2003-german    Wang et al., 2020   F1: 94.4
