Exploring the Benefits of Training Expert Language Models over Instruction Tuning

Joel Jang, Seungone Kim, Seonghyeon Ye, Doyoung Kim, Lajanugen Logeswaran, Moontae Lee, Kyungjae Lee, Minjoon Seo


Abstract

Recently, Language Models (LMs) instruction-tuned on multiple tasks, an approach also known as multitask-prompted fine-tuning (MT), have shown the capability to generalize to unseen tasks. Previous work has shown that scaling the number of training tasks is the key component in making stronger MT LMs. In this work, we report an unexpected finding: an expert LM fine-tuned on just a single task can outperform an MT LM trained on 300+ different tasks, by a mean accuracy of 3.20% on 11 different unseen datasets and 1.29% on 13 datasets of the BIG-bench benchmark. This finding casts doubt on the previously held belief that simply scaling the number of tasks makes stronger MT LMs. Leveraging this finding, we further show that this distributed approach of training a separate expert LM per training task, instead of a single MT LM for zero-shot inference, has many benefits, including (1) avoiding the negative task transfer that often occurs during instruction tuning, (2) being able to continually learn new tasks without re-training on previous tasks to avoid catastrophic forgetting, and (3) showing compositional capabilities when merging individual experts together. The code is available at https://github.com/joeljang/ELM.
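The abstract names two mechanisms that are easy to miss in prose: routing an unseen instance to the single most relevant expert, and composing capabilities by merging experts. The minimal PyTorch sketch below illustrates one plausible reading of each, assuming embedding-similarity routing (consistent with the "RoE" method in the benchmark table) and uniform parameter averaging for merging. All class and function names here (`TinyExpert`, `retrieve_expert`, `merge_experts`) are hypothetical stand-ins, not the authors' released API; see the linked repository for the actual implementation.

```python
# Hypothetical sketch of (1) retrieval-of-experts routing and
# (2) merging experts by parameter averaging. Not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyExpert(nn.Module):
    """Stand-in for a task-specific expert LM (e.g., a fine-tuned 3B model)."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(x)

def retrieve_expert(query_emb: torch.Tensor, task_embs: torch.Tensor) -> int:
    """Route to the expert whose task embedding is most similar (cosine) to the query."""
    sims = F.cosine_similarity(query_emb.unsqueeze(0), task_embs, dim=-1)
    return int(sims.argmax())

def merge_experts(experts: list[TinyExpert]) -> TinyExpert:
    """Compose experts by uniform averaging of their parameters (one simple merging scheme)."""
    merged = TinyExpert()
    avg_state = {
        key: torch.stack([e.state_dict()[key] for e in experts]).mean(dim=0)
        for key in merged.state_dict()
    }
    merged.load_state_dict(avg_state)
    return merged

if __name__ == "__main__":
    torch.manual_seed(0)
    experts = [TinyExpert() for _ in range(3)]   # one expert per training task
    task_embs = torch.randn(3, 16)               # embedding of each expert's training task
    query_emb = torch.randn(16)                  # embedding of an unseen instance
    print("routing to expert", retrieve_expert(query_emb, task_embs))
    merged = merge_experts(experts)              # "compositional" expert
    print(merged(query_emb).shape)
```

In this reading, routing keeps inference cost at a single expert's forward pass, while merging trades some per-task fidelity for a single model that combines several tasks' skills.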

Code Repositories

joeljang/elm (official, PyTorch), mentioned on GitHub
joeljang/rlphf (PyTorch), mentioned on GitHub

Benchmarks

Benchmark                                       Methodology   Metrics
common-sense-reasoning-on-winogrande            RoE-3B        Accuracy: 61.60
coreference-resolution-on-winograd-schema       RoE-3B        Accuracy: 62.21
natural-language-inference-on-anli-test         RoE-3B        A1: 35.49, A2: 34.64, A3: 31.22
natural-language-inference-on-rte               RoE-3B        Accuracy: 64.01
question-answering-on-copa                      RoE-3B        Accuracy: 79.25
question-answering-on-storycloze                RoE-3B        Accuracy: 86.33
word-sense-disambiguation-on-words-in-context   RoE-3B        Accuracy: 52.97
