mT5: A massively multilingual pre-trained text-to-text transformer

Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel


Abstract

The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We detail the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual benchmarks. We also describe a simple technique to prevent "accidental translation" in the zero-shot setting, where a generative model chooses to (partially) translate its prediction into the wrong language. All of the code and model checkpoints used in this work are publicly available.

Code Repositories

KoshiroSato/Flask_NLP_App
MorenoLaQuatra/bart-it (PyTorch)
manshri/tesum (PyTorch)
google-research/multilingual-t5 (Official, TensorFlow)
huggingface/transformers (PyTorch)
google-research/byt5 (TensorFlow)

Benchmarks

Benchmark | Methodology | Metrics
common-sense-reasoning-on-parus | mT5 Large | Accuracy: 0.504
common-sense-reasoning-on-rucos | mT5 Large | Average F1: 0.57, EM: 0.562
common-sense-reasoning-on-rwsd | mT5 Large | Accuracy: 0.669
natural-language-inference-on-lidirus | mT5 Large | MCC: 0.061
natural-language-inference-on-rcb | mT5 Large | Accuracy: 0.454, Average F1: 0.366
natural-language-inference-on-terra | mT5 Large | Accuracy: 0.561
question-answering-on-danetqa | mT5 Large | Accuracy: 0.657
reading-comprehension-on-muserc | mT5 Large | Average F1: 0.844, EM: 0.543
zero-shot-cross-lingual-transfer-on-xtreme | mT5 | Avg: 40.9, Question Answering: 73.6, Sentence Retrieval: NA, Sentence-pair Classification: 89.8, Structured Prediction: NA
