
Abstract
Reward Models (RMs) are critical for improving generation models via Reinforcement Learning (RL), yet the RM scaling paradigm in visual generation remains largely unexplored. This is primarily due to fundamental limitations in existing approaches: CLIP-based RMs suffer from architectural and input-modality constraints, while prevalent Bradley-Terry losses are fundamentally misaligned with the next-token prediction mechanism of Vision-Language Models (VLMs), hindering effective scaling. More critically, the RLHF optimization process is plagued by reward hacking, where models exploit flaws in the reward signal without improving true quality. To address these challenges, we introduce RewardDance, a scalable reward modeling framework that overcomes these barriers through a novel generative reward paradigm. By reformulating the reward score as the model's probability of predicting a "yes" token, indicating that the generated image outperforms a reference image according to specific criteria, RewardDance intrinsically aligns reward objectives with VLM architectures. This alignment unlocks scaling along two dimensions: (1) Model Scaling: systematic scaling of RMs up to 26 billion parameters; (2) Context Scaling: integration of task-specific instructions, reference examples, and chain-of-thought (CoT) reasoning. Extensive experiments demonstrate that RewardDance significantly surpasses state-of-the-art methods in text-to-image, text-to-video, and image-to-video generation. Crucially, we address the persistent challenge of reward hacking: our large-scale RMs exhibit and maintain high reward variance during RL fine-tuning, demonstrating their resistance to hacking and their ability to produce diverse, high-quality outputs, which greatly alleviates the mode-collapse problem that plagues smaller models.
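To make the generative reward formulation concrete, the sketch below illustrates how a reward could be read off a VLM's next-token distribution as the probability of a "yes" token. It is a minimal illustration under stated assumptions, not the authors' implementation: `generative_reward`, the placeholder logits, and the `yes_token_id` value are hypothetical, and in practice the logits would come from a VLM conditioned on the task instruction, the reference image, and the candidate image.

```python
import torch
import torch.nn.functional as F


def generative_reward(logits_last: torch.Tensor, yes_token_id: int) -> torch.Tensor:
    """Reward = probability mass the VLM assigns to the "yes" token.

    logits_last: [batch, vocab_size] next-token logits at the position where the
                 model answers whether the candidate outperforms the reference.
    yes_token_id: vocabulary id of the "yes" token (tokenizer-specific).
    """
    probs = F.softmax(logits_last, dim=-1)  # distribution over the next token
    return probs[:, yes_token_id]           # one reward in [0, 1] per sample


# Hypothetical usage with placeholder logits (vocab size 32k, token id arbitrary).
logits = torch.randn(2, 32000)
reward = generative_reward(logits, yes_token_id=9454)
print(reward.shape)  # torch.Size([2])
```

Because the reward is just the softmax probability of one vocabulary token, it stays aligned with the VLM's native next-token prediction objective, which is the property the abstract credits for enabling model and context scaling.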