Transcending Scaling Laws with 0.1% Extra Compute

Yi Tay; Jason Wei; Hyung Won Chung; Vinh Q. Tran; David R. So; Siamak Shakeri; Xavier Garcia; Huaixiu Steven Zheng; Jinfeng Rao; Aakanksha Chowdhery; Denny Zhou; Donald Metzler; Slav Petrov; Neil Houlsby; Quoc V. Le; Mostafa Dehghani

Abstract

Scaling language models improves performance but comes with significant computational costs. This paper proposes UL2R, a method that substantially improves existing language models and their scaling curves with a relatively tiny amount of extra compute. The key idea is to continue training a state-of-the-art large language model (e.g., PaLM) for a few more steps with UL2's mixture-of-denoisers objective. We show that, with almost negligible extra computational costs and no new sources of data, we are able to substantially improve the scaling properties of large language models on downstream metrics. In this paper, we continue training PaLM with UL2R, introducing a new set of models at 8B, 62B, and 540B scale which we call U-PaLM. Impressively, at 540B scale, we show an approximately 2x computational savings rate: U-PaLM achieves the same performance as the final PaLM 540B model at around half its computational budget (i.e., saving roughly 4.4 million TPUv4 hours). We further show that this improved scaling curve leads to "emergent abilities" on challenging BIG-Bench tasks; for instance, U-PaLM does much better than PaLM on some tasks, or demonstrates better quality at much smaller scale (62B as opposed to 540B). Overall, we show that U-PaLM outperforms PaLM on many few-shot setups, including English NLP tasks (e.g., commonsense reasoning, question answering), reasoning tasks with chain-of-thought (e.g., GSM8K), multilingual tasks (MGSM, TydiQA), MMLU, and challenging BIG-Bench tasks. Finally, we provide qualitative examples showing the new capabilities of U-PaLM for single- and multi-span infilling.
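
As a rough illustration of what UL2R's continued training involves, the sketch below builds mixture-of-denoisers training examples from raw token sequences. The specific denoiser settings, mode tokens, and sentinel names are illustrative assumptions for this sketch (loosely following UL2's R/S/X denoisers), not the exact configuration used to produce U-PaLM.

```python
import random

# Minimal sketch of UL2-style mixture-of-denoisers example construction.
# The settings below are illustrative assumptions, not the UL2R hyperparameters;
# mode tokens ("[R]", "[S]", "[X]") and sentinel names are placeholders.

DENOISERS = [
    # (mode token, mean span length, corruption rate)
    ("[R]", 3, 0.15),    # R-denoising: short spans, low corruption
    ("[X]", 32, 0.50),   # X-denoising: long spans / heavy corruption
    ("[S]", None, None), # S-denoising: prefix-LM style continuation
]

def make_example(tokens):
    """Turn a token list (length >= 2) into one (input, target) denoising pair."""
    mode, mean_span, rate = random.choice(DENOISERS)

    if mode == "[S]":
        # Prefix LM: predict the suffix given a random-length prefix.
        split = random.randint(1, len(tokens) - 1)
        return [mode] + tokens[:split], tokens[split:]

    # Span corruption: replace sampled spans with sentinels in the input and
    # emit the removed spans (each preceded by its sentinel) as the target.
    inputs, targets = [mode], []
    i, sentinel_id = 0, 0
    while i < len(tokens):
        if random.random() < rate / mean_span:
            span_len = max(1, round(random.expovariate(1 / mean_span)))
            sentinel = f"<extra_id_{sentinel_id}>"
            inputs.append(sentinel)
            targets += [sentinel] + tokens[i:i + span_len]
            i += span_len
            sentinel_id += 1
        else:
            inputs.append(tokens[i])
            i += 1
    return inputs, targets

# Example: one corrupted training pair from a toy "document".
print(make_example("scaling language models improves downstream performance".split()))
```

Per the paper, continuing a pretrained decoder such as PaLM on examples of this kind for a small number of extra steps (on the order of 0.1% of the original pretraining compute) is what turns PaLM into U-PaLM; the span-corruption format also explains the single- and multi-span infilling capability noted in the abstract.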

Benchmarks

Benchmark | Methodology | Metrics
arithmetic-reasoning-on-gsm8k | U-PaLM | Accuracy: 58.5, Parameters (Billion): 540
cross-lingual-question-answering-on-tydiqa | U-PaLM 540B (CoT) | EM: 54.6
cross-lingual-question-answering-on-tydiqa | U-PaLM 62B (fine-tuned) | EM: 78.4, F1: 88.5
multi-task-language-understanding-on-mgsm | U-PaLM 540B (CoT) | Average (%): 49.9
question-answering-on-strategyqa | PaLM 540B | Accuracy: 76.4
question-answering-on-strategyqa | U-PaLM 540B | Accuracy: 76.6
question-answering-on-strategyqa | Minerva 540B | Accuracy: 61.9
