HyperAI

LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention

Renrui Zhang; Jiaming Han; Chris Liu; Peng Gao; Aojun Zhou; Xiangfei Hu; Shilin Yan; Pan Lu; Hongsheng Li; Yu Qiao

Abstract

We present LLaMA-Adapter, a lightweight adaption method that efficiently fine-tunes LLaMA into an instruction-following model. Using 52K self-instruct demonstrations, LLaMA-Adapter introduces only 1.2M learnable parameters on top of the frozen LLaMA 7B model and takes less than one hour to fine-tune on 8 A100 GPUs. Specifically, we adopt a set of learnable adaption prompts and prepend them to the word tokens at the higher transformer layers. We then propose a zero-initialized attention mechanism with zero gating, which adaptively injects the new instructional cues into LLaMA while effectively preserving its pre-trained knowledge. With our efficient training, LLaMA-Adapter generates high-quality responses comparable to Alpaca with its fully fine-tuned 7B parameters. Beyond language commands, our approach extends readily to multi-modal instructions for learning an image-conditioned LLaMA model, which achieves superior reasoning performance on the ScienceQA and COCO Caption benchmarks. Furthermore, we also evaluate the zero-initialized attention mechanism for fine-tuning other pre-trained models (ViT, RoBERTa) on traditional vision and language tasks, demonstrating the strong generalization capacity of our approach. Code is released at https://github.com/OpenGVLab/LLaMA-Adapter.
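The zero-initialized attention described in the abstract can be sketched as follows. This is a minimal, dependency-free, single-query illustration, not the official implementation: the real model operates on batched tensors inside each transformer layer (the official code additionally passes the gate through tanh), and the function and variable names here are our own.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def zero_init_attention(q, word_kv, prompt_kv, gate):
    """Single-query sketch of zero-initialized gated attention.

    word_kv / prompt_kv: lists of (key, value) pairs for the original word
    tokens and the learnable adaption prompts; `gate` is the learnable
    gating scalar, initialized to zero at the start of training.
    Softmax is applied to the two groups of scores independently, and the
    prompt branch is scaled by the gate, so at gate = 0 the layer reduces
    to ordinary attention over the word tokens alone.
    """
    d = len(q)
    w_scores = [dot(q, k) / math.sqrt(d) for k, _ in word_kv]
    p_scores = [dot(q, k) / math.sqrt(d) for k, _ in prompt_kv]
    w_attn = softmax(w_scores)
    p_attn = [gate * a for a in softmax(p_scores)]

    out = [0.0] * len(word_kv[0][1])
    for a, (_, v) in zip(w_attn, word_kv):
        out = [o + a * vi for o, vi in zip(out, v)]
    for a, (_, v) in zip(p_attn, prompt_kv):
        out = [o + a * vi for o, vi in zip(out, v)]
    return out
```

Because the gate starts at zero, the adaption prompts contribute nothing at initialization, so early training cannot disturb the frozen model's pre-trained behavior; the prompts' influence is injected gradually as the gate is learned.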

Code Repositories

- zrrskywalker/point-bind (PyTorch)
- ziyuguo99/point-bind_point-llm (PyTorch)
- opengvlab/llama-adapter (Official, PyTorch)
- zrrskywalker/llama-adapter (Official)
- alpha-vllm/llama2-accessory (PyTorch)
- zihanzhaosjtu/librisqa
- Lightning-AI/lit-llama (PyTorch)

Benchmarks

Benchmark: Music Question Answering on MusicQA
Methodology: LLaMA-Adapter
Metrics:
  BERT Score: 0.895
  BLEU: 0.273
  METEOR: 0.334
  ROUGE: 0.413
