Parameter-Efficient Transfer Learning for NLP

Neil Houlsby; Andrei Giurgiu; Stanislaw Jastrzebski; Bruna Morrone; Quentin de Laroussilhe; Andrea Gesmundo; Mona Attariyan; Sylvain Gelly

Abstract

Fine-tuning large pre-trained models is an effective transfer mechanism in NLP. However, in the presence of many downstream tasks, fine-tuning is parameter inefficient: an entire new model is required for every task. As an alternative, we propose transfer with adapter modules. Adapter modules yield a compact and extensible model; they add only a few trainable parameters per task, and new tasks can be added without revisiting previous ones. The parameters of the original network remain fixed, yielding a high degree of parameter sharing. To demonstrate the adapters' effectiveness, we transfer the recently proposed BERT Transformer model to 26 diverse text classification tasks, including the GLUE benchmark. Adapters attain near state-of-the-art performance, whilst adding only a few parameters per task. On GLUE, we attain within 0.4% of the performance of full fine-tuning, adding only 3.6% parameters per task. By contrast, fine-tuning trains 100% of the parameters per task.
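
The adapter described in the abstract is a small bottleneck network inserted into each Transformer layer: it projects the hidden state of size d down to a small bottleneck size m, applies a nonlinearity, projects back up to d, and adds a skip connection, so near-zero initialization makes the module start close to an identity function. Below is a minimal PyTorch sketch of this idea; the class name, the GELU nonlinearity, the bottleneck size of 64, and the freezing helper are illustrative assumptions, not the official google-research/adapter-bert implementation.

```python
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, plus skip.

    With hidden_dim = d and bottleneck_dim = m, the module adds roughly
    2 * d * m parameters; for d = 768 and m = 64 that is about 100K
    parameters, a small fraction of one Transformer layer.
    """

    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.act = nn.GELU()  # assumed nonlinearity; the paper only requires some nonlinearity here
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        # Near-zero initialization keeps the adapter close to the identity
        # at the start of training, so the frozen pre-trained network's
        # behavior is preserved initially.
        nn.init.normal_(self.down.weight, std=1e-3)
        nn.init.zeros_(self.down.bias)
        nn.init.normal_(self.up.weight, std=1e-3)
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Skip connection: output = input + bottleneck transformation.
        return x + self.up(self.act(self.down(x)))


def freeze_except_adapters(model: nn.Module) -> None:
    """Train only adapter parameters; all pre-trained weights stay fixed."""
    for name, param in model.named_parameters():
        param.requires_grad = "adapter" in name.lower()


# Shape check: the adapter maps (batch, seq_len, hidden_dim) to the same
# shape, so it can be dropped in after any sub-layer without changing the
# surrounding architecture.
if __name__ == "__main__":
    adapter = Adapter(hidden_dim=768, bottleneck_dim=64)
    hidden = torch.randn(8, 128, 768)
    assert adapter(hidden).shape == hidden.shape
```

Because only the adapter weights (plus, in the paper, layer-norm parameters and the task head) are trained, each new task adds only a few percent of the original parameter count, which is where the reported 3.6% per-task figure on GLUE comes from.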

Code Repositories

hmohebbi/TF-Adapter-BERT (tf)
kpe/bert-for-tf2 (tf)
ZhangYuanhan-AI/NOAH (tf)
cs-mshah/Adapter-Bert (pytorch)
TATlong/keras_bert (tf)
prrao87/fine-grained-sentiment (pytorch)
google-research/adapter-bert (tf, official)
AsaCooperStickland/Bert-n-Pals (pytorch)
CyberZHG/keras-bert (tf)
heekhero/DTL (pytorch)
Davidzhangyuanhan/NOAH (tf)
osu-mlb/vit_peft_vision (pytorch)
osu-mlb/petl_vision (pytorch)
Adapter-Hub/adapter-transformers (pytorch)
zphang/bert_on_stilts (pytorch)

Benchmarks

Benchmark | Methodology | Metrics
image-classification-on-omnibenchmark | Adapter-ViTB/16 | Average Top-1 Accuracy: 44.5
