mGPT: Few-Shot Learners Go Multilingual

Oleh Shliazhko; Alena Fenogenova; Maria Tikhonova; Vladislav Mikhailov; Anastasia Kozlova; Tatiana Shavrina

Abstract

Recent studies report that autoregressive language models can successfully solve many NLP tasks via zero- and few-shot learning paradigms, which opens up new possibilities for using pre-trained language models. This paper introduces two autoregressive GPT-like models with 1.3 billion and 13 billion parameters trained on 60 languages from 25 language families using Wikipedia and the Colossal Clean Crawled Corpus. We reproduce the GPT-3 architecture using GPT-2 sources and the sparse attention mechanism; the DeepSpeed and Megatron frameworks allow us to parallelize the training and inference steps effectively. The resulting models show performance on par with the recently released XGLM models by Facebook, while covering more languages and enhancing NLP possibilities for low-resource languages of the CIS countries and the small nations of Russia. We detail the motivation for the architecture design choices, thoroughly describe the data preparation pipeline, and train five small versions of the model to choose the optimal multilingual tokenization strategy. We measure the model perplexity in all covered languages and evaluate it on a wide spectrum of multilingual tasks, including classification, generation, sequence labeling, and knowledge probing. The models were evaluated with zero-shot and few-shot methods. Furthermore, we compared the classification results with those of the state-of-the-art multilingual model XGLM. The source code and the mGPT XL model are publicly released.
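The zero- and few-shot paradigm described above works by prepending solved task demonstrations to the model's input and letting it continue the pattern. A minimal prompt-builder sketch is below; the template and examples are illustrative assumptions, not the actual prompts used in the paper's evaluation.

```python
def build_few_shot_prompt(demonstrations, query, template="{text} => {label}"):
    """Concatenate labeled demonstrations with an unlabeled query.

    In the few-shot setting the model sees k solved examples and is
    expected to continue the pattern; zero-shot is the k = 0 case.
    """
    lines = [template.format(text=t, label=l) for t, l in demonstrations]
    # The query uses the same template but leaves the label blank for
    # the model to generate as a continuation.
    lines.append(template.format(text=query, label="").rstrip())
    return "\n".join(lines)


# Zero-shot: no demonstrations, only the query.
zero_shot = build_few_shot_prompt([], "Das Wetter ist schön.")

# Few-shot: two sentiment demonstrations followed by the query.
few_shot = build_few_shot_prompt(
    [("I love this film.", "positive"), ("Terrible service.", "negative")],
    "The food was great.",
)
print(few_shot)
```

The resulting string is fed to the autoregressive model, and the tokens it generates after the final `=>` are read off as the predicted label.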

Code Repositories

ai-forever/mgpt (official, PyTorch)

Benchmarks

Benchmark                                       Methodology   Metric
cross-lingual-natural-language-inference-on-4   mGPT          Accuracy: 40.6
cross-lingual-transfer-on-xcopa                 mGPT          Accuracy: 55.5
few-shot-ner-on-xglue                           mGPT          Avg F1: 0.85
natural-language-inference-on-xwino             mGPT          Accuracy: 0.562
part-of-speech-tagging-on-xglue                 mGPT          Avg. F1: 0.56
