Sequence Tagging with Contextual and Non-Contextual Subword Representations: A Multilingual Evaluation

Benjamin Heinzerling; Michael Strube

Abstract

Pretrained contextual and non-contextual subword embeddings have become available in over 250 languages, allowing massively multilingual NLP. However, while there is no dearth of pretrained embeddings, the distinct lack of systematic evaluations makes it difficult for practitioners to choose between them. In this work, we conduct an extensive evaluation comparing non-contextual subword embeddings, namely FastText and BPEmb, and a contextual representation method, namely BERT, on multilingual named entity recognition and part-of-speech tagging. We find that overall, a combination of BERT, BPEmb, and character representations works best across languages and tasks. A more detailed analysis reveals different strengths and weaknesses: Multilingual BERT performs well in medium- to high-resource languages, but is outperformed by non-contextual subword embeddings in a low-resource setting.
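The combination the abstract describes can be pictured as a thin word-level tagger over concatenated features: contextual vectors from multilingual BERT, non-contextual subword vectors from BPEmb, and a character-level encoding. Below is a minimal PyTorch sketch of that idea, assuming precomputed, word-aligned BERT vectors and BPEmb subword vectors already mean-pooled per word; the module names, dimensions, and the random inputs in the smoke test are illustrative assumptions, not the paper's exact architecture.

```python
# A minimal sketch of a tagger over concatenated representations:
# (i) contextual vectors (stand-in for multilingual BERT),
# (ii) non-contextual subword vectors (stand-in for BPEmb), and
# (iii) a character BiLSTM, concatenated per word and fed to a BiLSTM tagger.
import torch
import torch.nn as nn

class CharEncoder(nn.Module):
    """Encodes each word's character sequence into a fixed-size vector."""
    def __init__(self, n_chars=100, char_dim=25, hidden=25):
        super().__init__()
        self.emb = nn.Embedding(n_chars, char_dim, padding_idx=0)
        self.rnn = nn.LSTM(char_dim, hidden, bidirectional=True, batch_first=True)

    def forward(self, char_ids):                 # (words, max_chars)
        _, (h, _) = self.rnn(self.emb(char_ids))
        return torch.cat([h[0], h[1]], dim=-1)   # (words, 2 * hidden)

class ConcatTagger(nn.Module):
    """Concatenates contextual, subword, and character features per word."""
    def __init__(self, bert_dim=768, bpemb_dim=300, char_dim=50,
                 hidden=256, n_tags=17):         # e.g. the 17 UD POS tags
        super().__init__()
        self.char_enc = CharEncoder(hidden=char_dim // 2)
        self.rnn = nn.LSTM(bert_dim + bpemb_dim + char_dim, hidden,
                           bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_tags)

    def forward(self, bert_vecs, bpemb_vecs, char_ids):
        # bert_vecs:  (batch, words, bert_dim)   precomputed, word-aligned
        # bpemb_vecs: (batch, words, bpemb_dim)  subwords mean-pooled per word
        # char_ids:   (batch, words, max_chars)
        b, w, c = char_ids.shape
        char_vecs = self.char_enc(char_ids.view(b * w, c)).view(b, w, -1)
        feats = torch.cat([bert_vecs, bpemb_vecs, char_vecs], dim=-1)
        h, _ = self.rnn(feats)
        return self.out(h)                       # (batch, words, n_tags)

# Smoke test with random features standing in for real embeddings.
tagger = ConcatTagger()
scores = tagger(torch.randn(2, 7, 768), torch.randn(2, 7, 300),
                torch.randint(0, 100, (2, 7, 12)))
print(scores.shape)  # torch.Size([2, 7, 17])
```

One motivation for concatenating rather than relying on BERT alone follows directly from the abstract's finding: the tagger can fall back on character and subword evidence in low-resource languages, where the non-contextual representations outperform multilingual BERT.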

Benchmarks

Benchmark: part-of-speech-tagging-on-ud
Methodology: MultiBPEmb
Metrics: Avg accuracy: 96.62
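MultiBPEmb, the methodology listed above, is the multilingual BPEmb model. A quick way to try it is via the bpemb Python package, which downloads the model files on first use; the vocabulary size and dimension below are one of the published configurations, shown here as an assumption rather than the exact setup behind this benchmark number.

```python
from bpemb import BPEmb

# Load the multilingual BPEmb model ("MultiBPEmb").
multibpemb = BPEmb(lang="multi", vs=1000000, dim=300)

tokens = multibpemb.encode("Sequence tagging works in many languages")
vecs = multibpemb.embed("Sequence tagging works in many languages")
print(tokens)      # BPE subword tokens, e.g. ['▁sequence', '▁tag', ...]
print(vecs.shape)  # (num_subwords, 300)
```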
