Fashion-VDM: Video Diffusion Model for Virtual Try-On

Johanna Karras, Yingwei Li, Nan Liu, Luyang Zhu, Innfarn Yoo, Andreas Lugmayr, Chris Lee, Ira Kemelmacher-Shlizerman

Abstract

We present Fashion-VDM, a video diffusion model (VDM) for generating virtual try-on videos. Given an input garment image and person video, our method aims to generate a high-quality try-on video of the person wearing the given garment, while preserving the person's identity and motion. Image-based virtual try-on has shown impressive results; however, existing video virtual try-on (VVT) methods are still lacking garment details and temporal consistency. To address these issues, we propose a diffusion-based architecture for video virtual try-on, split classifier-free guidance for increased control over the conditioning inputs, and a progressive temporal training strategy for single-pass 64-frame, 512px video generation. We also demonstrate the effectiveness of joint image-video training for video try-on, especially when video data is limited. Our qualitative and quantitative experiments show that our approach sets the new state-of-the-art for video virtual try-on. For additional results, visit our project page: https://johannakarras.github.io/Fashion-VDM.
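As a rough illustration of the split classifier-free guidance mentioned in the abstract (not the authors' exact formulation), one plausible form applies a separate guidance weight to each conditioning signal, here the garment image and the person video. The `denoiser` interface, argument names, and default weights below are assumptions made for this sketch.

```python
def split_cfg(denoiser, z_t, t, garment_cond, person_cond,
              w_garment=2.0, w_person=2.0):
    """Sketch of split classifier-free guidance with two conditioning
    signals. `denoiser(z_t, t, garment=..., person=...)` is a hypothetical
    noise-prediction network; passing None drops that condition."""
    eps_uncond = denoiser(z_t, t, garment=None, person=None)
    eps_garment = denoiser(z_t, t, garment=garment_cond, person=None)
    eps_full = denoiser(z_t, t, garment=garment_cond, person=person_cond)

    # Each conditioning signal contributes its own guidance term, so the
    # garment and person inputs can be strengthened or weakened independently.
    return (eps_uncond
            + w_garment * (eps_garment - eps_uncond)
            + w_person * (eps_full - eps_garment))
```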

Benchmarks

Benchmark: virtual-try-on-on-ubc-fashion-videos
Methodology: Fashion-VDM
Metrics: FVD = 172
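For context, FVD (Fréchet Video Distance, lower is better) compares the Gaussian statistics of features that a pretrained video network (commonly an I3D classifier) extracts from real and generated clips. The feature extractor is omitted here; the sketch below assumes feature arrays of shape (N, D) are already available.

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real, feats_gen):
    """Fréchet distance between Gaussian fits of two feature sets,
    the core computation behind FVD (feature extraction not shown)."""
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_g = np.cov(feats_gen, rowvar=False)

    # Matrix square root of the covariance product; sqrtm may return a
    # complex result with negligible imaginary parts, which we discard.
    covmean = linalg.sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):
        covmean = covmean.real

    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))
```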
