MoFusion: A Framework for Denoising-Diffusion-based Motion Synthesis

Rishabh Dabral Muhammad Hamza Mughal Vladislav Golyanik Christian Theobalt

Abstract

Conventional methods for human motion synthesis are either deterministic or struggle with the trade-off between motion diversity and motion quality. In response to these limitations, we introduce MoFusion, a new denoising-diffusion-based framework for high-quality conditional human motion synthesis that can generate long, temporally plausible, and semantically accurate motions based on a range of conditioning contexts (such as music and text). We also present ways to introduce well-known kinematic losses for motion plausibility within the motion diffusion framework through our scheduled weighting strategy. The learned latent space can be used for several interactive motion editing applications -- like inbetweening, seed conditioning, and text-based editing -- thus providing crucial abilities for virtual character animation and robotics. Through comprehensive quantitative evaluations and a perceptual user study, we demonstrate the effectiveness of MoFusion compared to the state of the art on established benchmarks in the literature. We urge the reader to watch our supplementary video and visit https://vcai.mpi-inf.mpg.de/projects/MoFusion.
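
To make the idea of a scheduled weighting for kinematic losses concrete, the sketch below shows how one conditional diffusion training step could be wired up. This is a minimal illustration assuming PyTorch; the denoiser interface, the particular weighting schedule (the cumulative noise-schedule product), and the bone-length and velocity losses are assumptions chosen for illustration, not the authors' released implementation.

```python
# Hypothetical sketch: one conditional motion-diffusion training step with a
# time-scheduled kinematic loss weight. Names and schedule are illustrative.
import torch
import torch.nn.functional as F

def training_step(denoiser, x0, cond, alphas_cumprod, lambda_max=1.0):
    """
    denoiser:        network predicting the clean motion from (x_t, t, cond)
    x0:              ground-truth motion, shape (B, T, J, 3) joint positions
    cond:            conditioning embedding (e.g. music or text features)
    alphas_cumprod:  (num_steps,) cumulative noise-schedule products
    """
    B = x0.shape[0]
    num_steps = alphas_cumprod.shape[0]

    # Sample a diffusion timestep and corrupt the motion with Gaussian noise.
    t = torch.randint(0, num_steps, (B,), device=x0.device)
    a_bar = alphas_cumprod[t].view(B, 1, 1, 1)
    noise = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise

    # Predict the clean motion and apply the standard reconstruction loss.
    x0_pred = denoiser(x_t, t, cond)
    loss_rec = F.mse_loss(x0_pred, x0)

    # Kinematic terms (bone-length consistency, joint velocity) are only
    # meaningful when the prediction is close to a valid pose, so they are
    # weighted by a schedule that grows as t -> 0 (illustrative choice).
    w_t = a_bar.view(B) * lambda_max
    loss_skel = bone_length_loss(x0_pred, x0)   # per-sample, shape (B,)
    loss_vel = velocity_loss(x0_pred, x0)       # per-sample, shape (B,)

    return loss_rec + (w_t * (loss_skel + loss_vel)).mean()

def bone_length_loss(pred, gt):
    # Compare consecutive-joint distances as a crude bone-length proxy.
    def lengths(x):
        return (x[:, :, 1:] - x[:, :, :-1]).norm(dim=-1)
    return (lengths(pred) - lengths(gt)).abs().mean(dim=(1, 2))

def velocity_loss(pred, gt):
    # Penalise mismatched frame-to-frame joint velocities.
    vel = lambda x: x[:, 1:] - x[:, :-1]
    return (vel(pred) - vel(gt)).pow(2).mean(dim=(1, 2, 3))
```

The design intuition behind such a schedule is that at very noisy timesteps the weight is near zero, so the network is trained mainly on the denoising objective, while at low-noise timesteps the kinematic penalties push the prediction toward physically plausible skeletons and smooth trajectories.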

Benchmarks

Benchmark                  Methodology   Metrics
motion-synthesis-on-aist   MoFusion      Beat alignment score: 0.253; FID: 50.31
