Group Sequence Policy Optimization

Abstract

This paper introduces Group Sequence Policy Optimization (GSPO), our stable, efficient, and performant reinforcement learning algorithm for training large language models. Unlike previous algorithms that adopt token-level importance ratios, GSPO defines the importance ratio based on sequence likelihood and performs sequence-level clipping, rewarding, and optimization. We demonstrate that GSPO achieves superior training efficiency and performance compared to the GRPO algorithm, notably stabilizes Mixture-of-Experts (MoE) RL training, and has the potential for simplifying the design of RL infrastructure. These merits of GSPO have contributed to the remarkable improvements in the latest Qwen3 models.
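The abstract does not give the objective in closed form, but its description (one importance ratio per response, derived from sequence likelihood, with clipping applied at the sequence level) suggests a PPO-style clipped surrogate. The sketch below is an illustration under assumed names and shapes, with a length-normalized ratio as an assumption, not the paper's exact formulation.

```python
import torch

def gspo_style_loss(logp_new, logp_old, advantages, mask, clip_eps=0.2):
    """Minimal sketch of a sequence-level clipped policy-gradient loss.

    logp_new, logp_old: (batch, seq_len) per-token log-probs under the current
        and behavior policies (logp_old detached / precomputed).
    advantages: (batch,) one scalar advantage per sampled response
        (e.g. group-normalized reward).
    mask: (batch, seq_len) 1 for response tokens, 0 for padding.
    """
    lengths = mask.float().sum(dim=-1).clamp(min=1.0)

    # Sequence-level log importance ratio, length-normalized so its scale is
    # comparable across responses of different lengths (an assumption; the
    # abstract does not specify the normalization).
    log_ratio = ((logp_new - logp_old) * mask).sum(dim=-1) / lengths
    ratio = log_ratio.exp()  # one ratio per sequence, not per token

    # PPO-style clipping applied to the whole sequence.
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()
```

The contrast with a GRPO-style objective is where the clipping acts: there, per-token ratios are clipped individually, whereas here each response is rewarded, clipped, and optimized as a single unit.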
