Using Similarity Measures to Select Pretraining Data for NER

Xiang Dai, Sarvnaz Karimi, Ben Hachey, Cécile Paris

Abstract

Word vectors and Language Models (LMs) pretrained on a large amount of unlabelled data can dramatically improve performance on various Natural Language Processing (NLP) tasks. However, how to measure the similarity between pretraining data and target task data, and how that similarity affects results, is usually left to intuition. We propose three cost-effective measures that quantify different aspects of similarity between source pretraining data and target task data. We demonstrate that these measures are good predictors of the usefulness of pretrained models for Named Entity Recognition (NER) across 30 data pairs. Results also suggest that pretrained LMs are more effective and more predictable than pretrained word vectors, but that pretrained word vectors are better when the pretraining data is dissimilar to the target data.
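The abstract does not spell out the three measures, but a minimal sketch can illustrate the general idea of a cheap source-target similarity measure. The sketch below computes how much of the target task's vocabulary is covered by the source pretraining corpus; the function name `target_vocab_covered` and the `top_k` frequency cutoff are hypothetical choices for this example, not necessarily the paper's exact formulation.

```python
from collections import Counter


def target_vocab_covered(source_tokens, target_tokens, top_k=10000):
    """Fraction of the target task vocabulary that also appears in the
    source pretraining corpus -- one simple, inexpensive notion of
    lexical similarity between the two datasets."""
    source_vocab = set(source_tokens)
    # Restrict to the most frequent target words so that rare noise
    # does not dominate the estimate.
    target_vocab = {w for w, _ in Counter(target_tokens).most_common(top_k)}
    if not target_vocab:
        return 0.0
    return len(target_vocab & source_vocab) / len(target_vocab)


# Toy usage: a source corpus sharing most of the target's vocabulary
# scores close to 1.0; a dissimilar one scores lower.
source = "the cell lysate was centrifuged and the supernatant removed".split()
target = "centrifuge the lysate then discard the supernatant".split()
print(target_vocab_covered(source, target))
```

A measure like this is cost-effective in the sense the abstract describes: it needs only token counts over the two corpora, with no model training, so it can be computed before committing to an expensive pretraining run.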

Benchmarks

Benchmark | Methodology | Metrics
named-entity-recognition-ner-on-wetlab | BiLSTM-CRF with ELMo | F1: 79.62
