Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback

Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, et al.

Abstract

We apply preference modeling and reinforcement learning from human feedback (RLHF) to finetune language models to act as helpful and harmless assistants. We find this alignment training improves performance on almost all NLP evaluations, and is fully compatible with training for specialized skills such as python coding and summarization. We explore an iterated online mode of training, where preference models and RL policies are updated on a weekly cadence with fresh human feedback data, efficiently improving our datasets and models. Finally, we investigate the robustness of RLHF training, and identify a roughly linear relation between the RL reward and the square root of the KL divergence between the policy and its initialization. Alongside our main results, we perform peripheral analyses on calibration, competing objectives, and the use of OOD detection, compare our models with human writers, and provide samples from our models using prompts appearing in recent related work.
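For context on the two core quantities in the abstract, the snippet below is a minimal sketch, assuming the standard Bradley-Terry pairwise formulation for the preference model and treating the reward-versus-sqrt(KL) relation as a fitted empirical curve; the function names, toy scores, and constants a and b are illustrative, not values from the paper.

import torch
import torch.nn.functional as F

def preference_loss(score_chosen: torch.Tensor, score_rejected: torch.Tensor) -> torch.Tensor:
    # Pairwise (Bradley-Terry style) loss: push the preference model to score
    # the human-preferred response above the rejected one.
    return -F.logsigmoid(score_chosen - score_rejected).mean()

def fitted_rl_reward(kl_divergence: torch.Tensor, a: float, b: float) -> torch.Tensor:
    # Empirical relation described in the abstract: the RL reward grows roughly
    # linearly in sqrt(KL(policy || initialization)); a and b are fit constants,
    # chosen here only for illustration.
    return a + b * torch.sqrt(kl_divergence)

# Toy comparison batch with hypothetical preference-model scores.
scores_chosen = torch.tensor([1.2, 0.3, 0.8, 2.0])
scores_rejected = torch.tensor([0.5, 0.1, 1.0, 0.4])
print(preference_loss(scores_chosen, scores_rejected))

The loss is smallest when the margin between the chosen and rejected scores is large, which is what lets the trained preference model serve as the reward signal for the subsequent RL stage.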

Code Repositories

miaoyuchun/inform (PyTorch, mentioned in GitHub)
ganjinzero/rrhf (PyTorch, mentioned in GitHub)
ethz-spylab/rlhf_trojan_competition (PyTorch, mentioned in GitHub)
anthropics/hh-rlhf (Official)
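
The anthropics/hh-rlhf repository above is the official release of the human preference data used in the paper. As a hedged illustration, the snippet below loads the companion Hugging Face mirror (Anthropic/hh-rlhf); field names follow the released comparison format, in which each record pairs a chosen and a rejected dialogue.

from datasets import load_dataset

# Load the helpful/harmless comparison data released alongside the paper.
# Each example contains a "chosen" and a "rejected" conversation transcript.
hh = load_dataset("Anthropic/hh-rlhf", split="train")

example = hh[0]
print(example["chosen"][:300])    # the human-preferred conversation
print(example["rejected"][:300])  # the alternative the labeler rejected

A held-out split is also provided in the same format, which is convenient for evaluating preference-model calibration.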
