GLaM: Efficient Scaling of Language Models with Mixture-of-Experts

Abstract

Scaling language models with more data, compute, and parameters has driven significant progress in natural language processing. For example, thanks to scaling, GPT-3 was able to achieve strong results on in-context learning tasks. However, training these large dense models requires significant amounts of computing resources. In this paper, we propose and develop a family of language models named GLaM (Generalist Language Model), which uses a sparsely activated mixture-of-experts architecture to scale model capacity while incurring substantially less training cost than dense variants. The largest GLaM has 1.2 trillion parameters, approximately 7x larger than GPT-3. It consumes only 1/3 of the energy used to train GPT-3 and requires half the computation FLOPs for inference, while still achieving better overall zero-shot and one-shot performance across 29 NLP tasks.
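
The mechanism behind this efficiency is sparse activation: in each mixture-of-experts layer, a gating network routes every token to only its top-2 experts (out of 64 in the largest GLaM), so most parameters stay idle for any given token. Below is a minimal NumPy sketch of top-2 gated MoE routing to make the idea concrete; the tiny dimensions, random weights, and ReLU feed-forward experts are illustrative assumptions, not GLaM's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, d_ff, n_experts, top_k = 8, 32, 4, 2  # toy sizes, not GLaM's

# Per-expert feed-forward weights (a tiny stand-in for GLaM's expert FFNs).
w_in = rng.normal(scale=0.02, size=(n_experts, d_model, d_ff))
w_out = rng.normal(scale=0.02, size=(n_experts, d_ff, d_model))
# Gating network: one logit per expert for each token.
w_gate = rng.normal(scale=0.02, size=(d_model, n_experts))

def moe_layer(x):
    """Route each token to its top-2 experts and mix their outputs.

    x: (n_tokens, d_model) -> (n_tokens, d_model)
    """
    logits = x @ w_gate                             # (n_tokens, n_experts)
    top = np.argsort(logits, axis=-1)[:, -top_k:]   # indices of the top-2 experts
    # Softmax over only the selected experts' logits to get mixing weights.
    sel = np.take_along_axis(logits, top, axis=-1)
    gates = np.exp(sel - sel.max(-1, keepdims=True))
    gates /= gates.sum(-1, keepdims=True)

    out = np.zeros_like(x)
    for k in range(top_k):
        for e in range(n_experts):
            mask = top[:, k] == e                   # tokens whose k-th pick is expert e
            if mask.any():
                h = np.maximum(x[mask] @ w_in[e], 0.0)  # ReLU FFN (illustrative)
                out[mask] += gates[mask, k:k+1] * (h @ w_out[e])
    return out

tokens = rng.normal(size=(5, d_model))
print(moe_layer(tokens).shape)  # (5, 8): output shape is unchanged, but each
                                # token touched only 2 of the 4 experts' weights
```

Because each token activates only 2 of the experts, the FLOPs per token are roughly those of a dense model with 2 experts' worth of FFN parameters, even though total capacity grows with the number of experts.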

Benchmarks

| Benchmark | Model | Metric | Score |
|---|---|---|---|
| Common-Sense Reasoning on ARC-Challenge | GLaM 64B/64E (0-shot) | Accuracy | 50.3 |
| Common-Sense Reasoning on ARC-Challenge | GLaM 64B/64E (1-shot) | Accuracy | 48.2 |
| Common-Sense Reasoning on ARC-Easy | GLaM 64B/64E (0-shot) | Accuracy | 68.0 |
| Common-Sense Reasoning on ARC-Easy | GLaM 64B/64E (5-shot) | Accuracy | 74.8 |
| Language Modelling on LAMBADA | GLaM 62B/64E (1-shot) | Accuracy | 80.9 |
| Question Answering on Natural Questions | GLaM 62B/64E (0-shot) | EM | 24.7 |
| Question Answering on Natural Questions | GLaM 62B/64E (1-shot) | EM | 26.3 |
| Question Answering on Natural Questions | GLaM 62B/64E (few-shot) | EM | 32.5 |
| Question Answering on TriviaQA | GLaM 62B/64E (0-shot) | EM | 71.3 |
| Question Answering on TriviaQA | GLaM 62B/64E (1-shot) | EM | 75.8 |
| Question Answering on TriviaQA | GLaM 62B/64E (few-shot) | EM | 75.8 |
| Question Answering on WebQuestions | GLaM 62B/64E (0-shot) | EM | 15.5 |
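
For the question-answering rows, EM (exact match) counts a prediction as correct only if it string-matches a reference answer after normalisation, so an EM of 75.8 means 75.8% of questions were answered verbatim-correctly. The sketch below assumes the common SQuAD-style normalisation convention (lowercasing, stripping articles and punctuation); the page does not specify which normalisation these scores used.

```python
import re
import string

def normalize(text: str) -> str:
    """Lowercase, drop punctuation and articles, collapse whitespace (SQuAD-style)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, references: list[str]) -> float:
    """Return 1.0 if the prediction matches any reference after normalisation."""
    return float(any(normalize(prediction) == normalize(r) for r in references))

print(exact_match("The Eiffel Tower", ["eiffel tower"]))  # 1.0
# Corpus-level EM is the mean of this 0/1 score over all questions.
```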
