Unite and Conquer: Plug & Play Multi-Modal Synthesis using Diffusion Models
Nithin Gopalakrishnan Nair; Wele Gedara Chaminda Bandara; Vishal M. Patel

Abstract
Generating photos that satisfy multiple constraints finds broad utility in the content creation industry. A key hurdle to accomplishing this task is the need for paired data consisting of all modalities (i.e., constraints) and their corresponding output. Moreover, existing methods need retraining using paired data across all modalities to introduce a new condition. This paper proposes a solution to this problem based on denoising diffusion probabilistic models (DDPMs). Our motivation for choosing diffusion models over other generative models comes from the flexible internal structure of diffusion models. Since each sampling step in the DDPM follows a Gaussian distribution, we show that there exists a closed-form solution for generating an image given various constraints. Our method can unite multiple diffusion models trained on multiple sub-tasks and conquer the combined task through our proposed sampling strategy. We also introduce a novel reliability parameter that allows using different off-the-shelf diffusion models trained across various datasets at sampling time alone to guide the sampling toward a desired outcome satisfying multiple constraints. We perform experiments on various standard multimodal tasks to demonstrate the effectiveness of our approach. More details can be found at https://nithin-gk.github.io/projectpages/Multidiff/index.html
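
The abstract notes that, because each DDPM sampling step is Gaussian, predictions from separately trained conditional models can be combined in closed form at inference time. The sketch below illustrates what such a plug-and-play combination can look like, assuming each off-the-shelf model exposes a noise predictor callable as `model(x_t, t)` and using per-condition reliability weights; the composition rule shown here is a generic classifier-free-guidance-style mixture, not necessarily the paper's exact closed-form expression, and all names (`uncond_model`, `cond_models`, `weights`, `betas`) are illustrative placeholders.

```python
import torch


def combined_eps(x_t, t, uncond_model, cond_models, weights):
    """Combine noise predictions from several condition-specific diffusion
    models into one prediction, weighted by per-condition reliability.
    NOTE: illustrative composition rule only; the paper derives its own
    closed-form combination of the Gaussian sampling steps.
    """
    eps_u = uncond_model(x_t, t)                 # unconditional prediction
    eps = eps_u.clone()
    for model, w in zip(cond_models, weights):
        eps = eps + w * (model(x_t, t) - eps_u)  # weighted conditional shift
    return eps


@torch.no_grad()
def ddpm_sample(shape, uncond_model, cond_models, weights, betas):
    """Standard DDPM ancestral sampling loop driven by the combined prediction."""
    alphas = 1.0 - betas
    alphas_bar = torch.cumprod(alphas, dim=0)
    x_t = torch.randn(shape)                     # start from pure noise x_T
    for t in reversed(range(len(betas))):
        eps = combined_eps(x_t, t, uncond_model, cond_models, weights)
        # Mean of the Gaussian reverse step p(x_{t-1} | x_t)
        mean = (x_t - betas[t] / torch.sqrt(1.0 - alphas_bar[t]) * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
        x_t = mean + torch.sqrt(betas[t]) * noise
    return x_t
```

Here `cond_models` could hold, for example, a text-conditioned and a sketch-conditioned network trained on different datasets, with `weights` acting as reliability parameters; since the combination happens purely inside the sampling loop, no retraining on jointly paired data is required.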
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| face-sketch-synthesis-on-multi-modal-celeba | Diffusion | FID: 26.09 |
| multimodal-generation-on-multi-modal-celeba | Diffusion | FID: 26.09 |
| text-to-image-generation-on-multi-modal | Unite and Conquer | FID: 26.09, LPIPS: 0.519 |