
ByT5: Towards a token-free future with pre-trained byte-to-byte models

Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel


Abstract

Most widely-used pre-trained language models operate on sequences of tokens corresponding to word or subword units. By comparison, token-free models that operate directly on raw text (bytes or characters) have many benefits: they can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by removing complex and error-prone text preprocessing pipelines. Since byte or character sequences are longer than token sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of operating directly on raw text. In this paper, we show that a standard Transformer architecture can be used with minimal modifications to process byte sequences. We characterize the trade-offs in terms of parameter count, training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on tasks that are sensitive to spelling and pronunciation. As part of our contribution, we release a new set of pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our experiments.
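Because ByT5 operates directly on UTF-8 bytes, its "vocabulary" is fixed and tiny, with no learned subword merges. The sketch below is an illustration rather than the authors' code: it mimics the byte-to-ID mapping used by the released models, assuming the common convention of reserving the first three IDs for special tokens (pad, EOS, UNK), so each byte maps to its value plus 3.

```python
# Minimal sketch of byte-level "tokenization" for ByT5-style models.
# Assumption: IDs 0-2 are reserved for <pad>, </s>, <unk>, so every UTF-8
# byte b maps to token ID b + 3 -- a 259-entry "vocabulary" with no
# out-of-vocabulary tokens in any language.

def byte_encode(text: str) -> list[int]:
    """Map a string to token IDs: one ID per UTF-8 byte, offset by 3."""
    return [b + 3 for b in text.encode("utf-8")]

def byte_decode(ids: list[int]) -> str:
    """Invert byte_encode, skipping the reserved special-token IDs."""
    return bytes(i - 3 for i in ids if i >= 3).decode("utf-8", errors="replace")

print(byte_encode("naïve"))                # 'ï' costs two byte IDs
print(byte_decode(byte_encode("naïve")))   # round-trips losslessly: 'naïve'
```

This also illustrates the trade-off the abstract describes: byte sequences are longer than token sequences (a non-ASCII character costs two to four IDs), which is why the paper measures parameter count, training FLOPs, and inference speed alongside task quality.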

Benchmarks

| Benchmark | Methodology | Metrics |
| --- | --- | --- |
| cross-lingual-natural-language-inference-on-4 | ByT5 Small | Accuracy: 69.1 |
| cross-lingual-natural-language-inference-on-4 | ByT5 XXL | Accuracy: 83.7 |
| cross-lingual-ner-on-wikiann-ner | ByT5 XXL | F1: 67.7 |
| cross-lingual-question-answering-on-mlqa | ByT5 XXL | EM: 54.9, F1: 71.6 |
| cross-lingual-question-answering-on-tydiqa | ByT5 XXL | EM: 60.0, F1: 75.3 |
| cross-lingual-question-answering-on-tydiqa | ByT5 (fine-tuned) | EM: 81.9 |
| cross-lingual-question-answering-on-xquad | ByT5 XXL | EM: 63.6, F1: 79.7 |
| extreme-summarization-on-gem-xsum | ByT5 | BLEU score: 15.3 |
| extreme-summarization-on-gem-xsum | mT5 | BLEU score: 14.3 |
| question-answering-on-tweetqa | ByT5 | ROUGE-L: 75.7 |
| question-answering-on-tweetqa | ByT5 (small) | BLEU-1: 72.0 |
| question-answering-on-tweetqa | mT5 | BLEU-1: 70.8, ROUGE-L: 74.3 |
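The released checkpoints can be loaded through the Hugging Face transformers library. The snippet below is a minimal, unofficial quick-start, assuming the publicly released Hub checkpoints (e.g. google/byt5-small) and the standard T5ForConditionalGeneration interface; note that the raw pre-trained model is trained only on span corruption and needs task-specific fine-tuning before its generations are useful.

```python
# Unofficial quick-start sketch: load a released ByT5 checkpoint.
# Assumes the Hugging Face `transformers` library and the public
# "google/byt5-small" checkpoint from the Hub.
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/byt5-small")
model = T5ForConditionalGeneration.from_pretrained("google/byt5-small")

# Inputs are encoded byte by byte; no language-specific preprocessing needed.
inputs = tokenizer("Life is like a box of chocolates.", return_tensors="pt")

# Demonstrates the generation API only; without fine-tuning, the
# pre-trained model's output reflects its denoising objective.
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```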
