Align your Latents: High-Resolution Video Synthesis with Latent Diffusion Models

Andreas Blattmann, Robin Rombach, Huan Ling, Tim Dockhorn, Seung Wook Kim, Sanja Fidler, Karsten Kreis

Abstract

Latent Diffusion Models (LDMs) enable high-quality image synthesis while avoiding excessive compute demands by training a diffusion model in a compressed lower-dimensional latent space. Here, we apply the LDM paradigm to high-resolution video generation, a particularly resource-intensive task. We first pre-train an LDM on images only; then, we turn the image generator into a video generator by introducing a temporal dimension to the latent space diffusion model and fine-tuning on encoded image sequences, i.e., videos. Similarly, we temporally align diffusion model upsamplers, turning them into temporally consistent video super resolution models. We focus on two relevant real-world applications: Simulation of in-the-wild driving data and creative content creation with text-to-video modeling. In particular, we validate our Video LDM on real driving videos of resolution 512 x 1024, achieving state-of-the-art performance. Furthermore, our approach can easily leverage off-the-shelf pre-trained image LDMs, as we only need to train a temporal alignment model in that case. Doing so, we turn the publicly available, state-of-the-art text-to-image LDM Stable Diffusion into an efficient and expressive text-to-video model with resolution up to 1280 x 2048. We show that the temporal layers trained in this way generalize to different fine-tuned text-to-image LDMs. Utilizing this property, we show the first results for personalized text-to-video generation, opening exciting directions for future content creation. Project page: https://research.nvidia.com/labs/toronto-ai/VideoLDM/
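The abstract's central mechanism is to turn a frozen, pre-trained image LDM into a video generator by interleaving trainable temporal layers between its spatial layers and fine-tuning only those temporal layers on encoded frame sequences. The PyTorch sketch below is a minimal illustration of that structure under simplifying assumptions; the module and parameter names (TemporalMixBlock, alpha) are invented for this example and are not the paper's or Stable Diffusion's actual identifiers.

```python
# Minimal sketch: frozen spatial layers from a pretrained image LDM,
# interleaved with new trainable temporal layers that mix information
# across frames. Illustrative only; names are assumptions.
import torch
import torch.nn as nn


class TemporalMixBlock(nn.Module):
    """Temporal self-attention over the frame axis, applied per spatial location."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        # Learnable gate: starts at 0 so the video model initially behaves
        # exactly like the pretrained image model (identity on each frame).
        self.alpha = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor, num_frames: int) -> torch.Tensor:
        # x: (batch * frames, channels, height, width) latent feature map
        bt, c, h, w = x.shape
        b = bt // num_frames
        # Expose the time axis: (batch * H * W, frames, channels)
        seq = (x.view(b, num_frames, c, h, w)
                .permute(0, 3, 4, 1, 2)
                .reshape(b * h * w, num_frames, c))
        attn_out, _ = self.attn(self.norm(seq), self.norm(seq), self.norm(seq))
        seq = seq + self.alpha * attn_out
        # Back to the image-model layout
        return (seq.view(b, h, w, num_frames, c)
                   .permute(0, 3, 4, 1, 2)
                   .reshape(bt, c, h, w))


# Usage: wrap each frozen spatial block of the image LDM's U-Net with a
# temporal block; only the temporal parameters are optimized on videos.
if __name__ == "__main__":
    spatial = nn.Conv2d(64, 64, 3, padding=1)          # stand-in for a pretrained spatial layer
    for p in spatial.parameters():
        p.requires_grad_(False)                          # spatial weights stay frozen
    temporal = TemporalMixBlock(64)

    frames, batch = 8, 2
    latents = torch.randn(batch * frames, 64, 32, 32)   # encoded video frames
    out = temporal(spatial(latents), num_frames=frames)
    print(out.shape)  # torch.Size([16, 64, 32, 32])
```

Because the gate alpha is initialized to zero, each frame initially passes through unchanged and the model reproduces the pre-trained image generator; fine-tuning then learns the cross-frame mixing. The abstract applies the same alignment idea to the diffusion upsampler to obtain temporally consistent super resolution.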

Code Repositories

ai-forever/kandinskyvideo (PyTorch; mentioned in GitHub)
gongzix/neuroclips (mentioned in GitHub)
srpkdyy/VideoLDM (PyTorch; mentioned in GitHub)

Benchmarks

Benchmark                            | Methodology                            | Metrics
text-to-video-generation-on-msr-vtt | Video LDM                              | CLIPSIM: 0.2929
text-to-video-generation-on-msr-vtt | CogVideo (Chinese)                     | CLIP-FID: 24.78; CLIPSIM: 0.2614
text-to-video-generation-on-ucf-101 | Video LDM (zero-shot, 320x512)         | FVD16: 550.61
video-generation-on-ucf-101         | Video LDM (320x512, text-conditional)  | FVD16: 550.61; Inception Score: 33.45
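For context on the MSR-VTT numbers above: CLIPSIM is commonly computed as the average CLIP text-to-frame cosine similarity over the frames of a generated video. The sketch below shows one way to compute such a score with the Hugging Face transformers CLIP implementation; the model variant and preprocessing are assumptions and may differ from the setup behind the reported benchmark values.

```python
# Hedged sketch of a CLIPSIM-style score: mean CLIP cosine similarity
# between the text prompt and each generated frame (PIL images).
import torch
from transformers import CLIPModel, CLIPProcessor


def clipsim(frames, prompt, model_name="openai/clip-vit-base-patch32"):
    """frames: list of PIL.Image frames from one generated video."""
    model = CLIPModel.from_pretrained(model_name).eval()
    processor = CLIPProcessor.from_pretrained(model_name)
    with torch.no_grad():
        inputs = processor(text=[prompt], images=frames,
                           return_tensors="pt", padding=True)
        img = model.get_image_features(pixel_values=inputs["pixel_values"])
        txt = model.get_text_features(input_ids=inputs["input_ids"],
                                      attention_mask=inputs["attention_mask"])
        img = img / img.norm(dim=-1, keepdim=True)
        txt = txt / txt.norm(dim=-1, keepdim=True)
        # Mean text-frame cosine similarity over all frames
        return (img @ txt.T).mean().item()
```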
