Text Classification in the Wild: a Large-scale Long-tailed Name Normalization Dataset

Jiexing Qi; Shuhao Li; Zhixin Guo; Yusheng Huang; Chenghu Zhou; Weinan Zhang; Xinbing Wang; Zhouhan Lin

Abstract

Real-world data usually exhibits a long-tailed distribution, with a few frequent labels and many few-shot labels. The study of institution name normalization is a perfect application case showing this phenomenon: there are many institutions worldwide, with enormous variations of their names in the publicly available literature. In this work, we first collect a large-scale institution name normalization dataset, LoT-insts, which contains over 25k classes that exhibit a naturally long-tailed distribution. To isolate the few-shot and zero-shot learning scenarios from the massive many-shot classes, we construct our test set from four different subsets: many-, medium-, and few-shot sets, as well as a zero-shot open set. We also replicate several important baseline methods on our data, covering a wide range from search-based methods to neural network methods that use a pretrained BERT model. Further, we propose a specially pretrained, BERT-based model that shows better out-of-distribution generalization on the few-shot and zero-shot test sets. Compared to other datasets focusing on the long-tailed phenomenon, our dataset has one order of magnitude more training data than the largest existing long-tailed datasets and is naturally long-tailed rather than manually synthesized; we believe it provides an important and different scenario in which to study this problem. To the best of our knowledge, this is the first natural language dataset that focuses on long-tailed and open-set classification problems.
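
The shot-based evaluation protocol described in the abstract can be illustrated with a short sketch. The snippet below partitions test examples by how often their institution label appears in the training set; the function name and the threshold values (`many_min`, `few_max`) are hypothetical placeholders for illustration, not the cut-offs used in the paper.

```python
from collections import Counter

def split_by_frequency(train_labels, test_labels, many_min=100, few_max=5):
    """Partition test examples into many-, medium-, and few-shot subsets,
    plus a zero-shot open set, by training-set class frequency.
    Thresholds are illustrative, not the paper's actual cut-offs."""
    freq = Counter(train_labels)
    subsets = {"many": [], "medium": [], "few": [], "zero": []}
    for i, label in enumerate(test_labels):
        n = freq.get(label, 0)
        if n == 0:
            subsets["zero"].append(i)   # open set: class unseen in training
        elif n <= few_max:
            subsets["few"].append(i)
        elif n < many_min:
            subsets["medium"].append(i)
        else:
            subsets["many"].append(i)
    return subsets
```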

Code Repositories

lumia-group/lot-insts (official implementation, PyTorch)

Benchmarks

Benchmark | Methodology | Metrics
long-tail-learning-on-lot-insts | Character-BERT+RS | Macro-F1: 65.90
text-classification-on-lot-insts | Naive Bayes | Accuracy: 72.2, Macro-F1: 50.2
text-classification-on-lot-insts | FastText | Accuracy: 74.93, Macro-F1: 44.38
text-classification-on-lot-insts | CD-V1 | Accuracy: 79.97, Macro-F1: 59.64
text-classification-on-lot-insts | sCool | Accuracy: 76.72, Macro-F1: 52.41
text-classification-on-lot-insts | Character-BERT+RS | Accuracy: 83.73, Macro-F1: 65.90
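
For reference, both metrics reported above can be computed from model predictions with standard scikit-learn calls. This is a generic sketch assuming label arrays `y_true` and `y_pred`, not the evaluation script from the official repository.

```python
from sklearn.metrics import accuracy_score, f1_score

def evaluate(y_true, y_pred):
    """Compute the two benchmark metrics: overall accuracy and Macro-F1
    (the unweighted mean of per-class F1, which weights rare institution
    classes equally with frequent ones)."""
    return {
        "Accuracy": 100 * accuracy_score(y_true, y_pred),
        "Macro-F1": 100 * f1_score(y_true, y_pred, average="macro"),
    }
```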
