HyperAI

Executing your Commands via Motion Diffusion in Latent Space

Xin Chen Biao Jiang Wen Liu Zilong Huang Bin Fu Tao Chen Jingyi Yu Gang Yu

Abstract

We study a challenging task, conditional human motion generation, which produces plausible human motion sequences according to various conditional inputs, such as action classes or textual descriptors. Since human motions are highly diverse and follow a distribution quite different from that of the conditional modalities, such as textual descriptors in natural language, it is hard to learn a probabilistic mapping from the desired conditional modality to human motion sequences. Moreover, raw motion data from motion capture systems can be redundant across frames and contain noise; directly modeling the joint distribution over raw motion sequences and conditional modalities would incur heavy computational overhead and might introduce artifacts from the captured noise. To learn a better representation of the various human motion sequences, we first design a powerful Variational AutoEncoder (VAE) to obtain a representative, low-dimensional latent code for each human motion sequence. Then, instead of using a diffusion model to connect raw motion sequences with the conditional inputs, we perform the diffusion process in the motion latent space. Our proposed Motion Latent-based Diffusion model (MLD) produces vivid motion sequences conforming to the given conditional inputs while substantially reducing computational overhead in both training and inference. Extensive experiments on various human motion generation tasks demonstrate that MLD achieves significant improvements over state-of-the-art methods, while being two orders of magnitude faster than previous diffusion models operating on raw motion sequences.
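The abstract's two-stage idea can be sketched in a few lines: a VAE first compresses a long raw motion sequence into a small latent code, and the diffusion process then operates on that latent instead of on raw frames. The sketch below is a toy illustration of that pipeline, not the authors' implementation: the "encoder" is a fixed random linear projection standing in for the trained motion VAE, the shapes (196 frames, 22 joints, a 256-dim latent) are assumptions, and only the standard forward-noising step of a DDPM-style schedule is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1 (toy): compress a raw motion sequence into a low-dim latent code.
# A real system would use the trained motion VAE encoder here.
frames, feat, latent_dim = 196, 22 * 3, 256          # assumed shapes
motion = rng.standard_normal((frames, feat))          # raw motion sequence
W_enc = rng.standard_normal((frames * feat, latent_dim)) / np.sqrt(frames * feat)
z0 = motion.reshape(-1) @ W_enc                       # latent code z_0, shape (256,)

# Stage 2: forward diffusion on the latent (standard DDPM noising schedule).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)                   # cumulative noise schedule

def q_sample(z0, t, eps):
    """Sample z_t ~ q(z_t | z_0) = sqrt(abar_t) * z_0 + sqrt(1 - abar_t) * eps."""
    return np.sqrt(alpha_bar[t]) * z0 + np.sqrt(1.0 - alpha_bar[t]) * eps

eps = rng.standard_normal(latent_dim)
z_t = q_sample(z0, t=500, eps=eps)                    # noised latent, shape (256,)

# A denoiser (omitted) would be trained to predict eps from (z_t, t, condition);
# at inference, iterative denoising yields a clean latent that the VAE decoder
# maps back to a motion sequence. Diffusing a 256-dim latent instead of a
# 196 x 66 raw sequence is the source of the reported speedup.
print(z_t.shape)  # (256,)
```

The key design point is that the denoising network never touches raw frames, so both its input size and the cost of each of the hundreds of denoising steps shrink dramatically.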

Code Repositories

chenfengye/motion-latent-diffusion (Official, PyTorch)

Benchmarks

Benchmark / Methodology / Metrics

motion-synthesis-on-humanact12 (MLD)
  Accuracy: 0.964
  FID: 0.077
  Multimodality: 2.824

motion-synthesis-on-humanml3d (MLD)
  Diversity: 9.724
  FID: 0.473
  Multimodality: 2.413
  R Precision Top3: 0.772

motion-synthesis-on-kit-motion-language (TEMOS)
  Diversity: 10.84
  FID: 3.717
  Multimodality: 0.532
  R Precision Top3: 0.687

motion-synthesis-on-kit-motion-language (MLD)
  Diversity: 10.80
  FID: 0.404
  Multimodality: 2.192
  R Precision Top3: 0.734

motion-synthesis-on-motion-x (MLD)
  Diversity: 10.420
  FID: 3.407
  MModality: 2.448
  TMR-Matching Score: 0.883
  TMR-R-Precision Top3: 0.683
