ByT5: Towards a token-free future with pre-trained byte-to-byte models
Linting Xue; Aditya Barua; Noah Constant; Rami Al-Rfou; Sharan Narang; Mihir Kale; Adam Roberts; Colin Raffel

Abstract
Most widely-used pre-trained language models operate on sequences of tokens corresponding to word or subword units. By comparison, token-free models that operate directly on raw text (bytes or characters) have many benefits: they can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by removing complex and error-prone text preprocessing pipelines. Since byte or character sequences are longer than token sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of operating directly on raw text. In this paper, we show that a standard Transformer architecture can be used with minimal modifications to process byte sequences. We characterize the trade-offs in terms of parameter count, training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on tasks that are sensitive to spelling and pronunciation. As part of our contribution, we release a new set of pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our experiments.
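Because the model consumes raw UTF-8 bytes, no learned vocabulary or subword tokenizer is required. The sketch below is a minimal illustration (not the released code) of this byte-level preprocessing: text is mapped to its UTF-8 byte values, offset by 3 following the ByT5 convention of reserving ids 0, 1, and 2 for padding, EOS, and UNK. The function names are hypothetical.

```python
# Minimal sketch of byte-level "tokenization" in the style of ByT5 (illustrative only).
# Each UTF-8 byte (0-255) is shifted by +3 to leave room for the reserved
# special ids: pad=0, eos=1, unk=2 (the ByT5 convention; verify against the release).

def text_to_byte_ids(text: str, add_eos: bool = True) -> list[int]:
    """Map a string to byte-level input ids (UTF-8 bytes + 3 offset)."""
    ids = [b + 3 for b in text.encode("utf-8")]
    if add_eos:
        ids.append(1)  # append the EOS id
    return ids

def byte_ids_to_text(ids: list[int]) -> str:
    """Invert the mapping, skipping the reserved special ids (< 3)."""
    return bytes(i - 3 for i in ids if i >= 3).decode("utf-8", errors="ignore")

if __name__ == "__main__":
    ids = text_to_byte_ids("Byte-level models need no vocabulary.")
    print(ids[:10])               # first few byte ids
    print(byte_ids_to_text(ids))  # round-trips to the original string
```

Note how text in any language (or arbitrary binary-safe noise) maps to the same fixed 256-value byte alphabet, which is what lets a byte-level model process unseen scripts out of the box.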
Benchmarks
| Benchmark | Model | Metrics |
|---|---|---|
| cross-lingual-natural-language-inference-on-4 | ByT5 Small | Accuracy: 69.1 |
| cross-lingual-natural-language-inference-on-4 | ByT5 XXL | Accuracy: 83.7 |
| cross-lingual-ner-on-wikiann-ner | ByT5 XXL | F1: 67.7 |
| cross-lingual-question-answering-on-mlqa | ByT5 XXL | EM: 54.9, F1: 71.6 |
| cross-lingual-question-answering-on-tydiqa | ByT5 XXL | EM: 60.0, F1: 75.3 |
| cross-lingual-question-answering-on-tydiqa | ByT5 (fine-tuned) | EM: 81.9 |
| cross-lingual-question-answering-on-xquad | ByT5 XXL | EM: 63.6, F1: 79.7 |
| extreme-summarization-on-gem-xsum | ByT5 | BLEU: 15.3 |
| extreme-summarization-on-gem-xsum | mT5 | BLEU: 14.3 |
| question-answering-on-tweetqa | ByT5 | ROUGE-L: 75.7 |
| question-answering-on-tweetqa | ByT5 (small) | BLEU-1: 72.0 |
| question-answering-on-tweetqa | mT5 | BLEU-1: 70.8, ROUGE-L: 74.3 |