
GLIGEN: Open-Set Grounded Text-to-Image Generation

Yuheng Li¹§, Haotian Liu¹§, Qingyang Wu², Fangzhou Mu¹, Jianwei Yang³, Jianfeng Gao³, Chunyuan Li³¶, Yong Jae Lee¹¶

Abstract

Large-scale text-to-image diffusion models have made amazing advances. However, the status quo is to use text input alone, which can impede controllability. In this work, we propose GLIGEN, Grounded-Language-to-Image Generation, a novel approach that builds upon and extends the functionality of existing pre-trained text-to-image diffusion models by enabling them to also be conditioned on grounding inputs. To preserve the vast concept knowledge of the pre-trained model, we freeze all of its weights and inject the grounding information into new trainable layers via a gated mechanism. Our model achieves open-world grounded text2img generation with caption and bounding box condition inputs, and the grounding ability generalizes well to novel spatial configurations and concepts. GLIGEN's zero-shot performance on COCO and LVIS outperforms that of existing supervised layout-to-image baselines by a large margin.
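The abstract describes the core mechanism only at a high level: the pre-trained diffusion model is frozen, and grounding information enters through new trainable layers behind a learnable gate. The sketch below illustrates one plausible form of such a gated injection layer, in PyTorch: visual tokens attend jointly to grounding tokens (e.g., embeddings of entity-phrase and bounding-box pairs), and the result is added back through a tanh gate initialized to zero so the network starts out identical to the original model. The class name, dimensions, and exact layer layout here are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class GatedGroundingAttention(nn.Module):
    """Sketch of a gated self-attention layer for grounding injection.

    Assumption: this mirrors the paper's idea of freezing the pre-trained
    weights and training only new gated layers; details are illustrative.
    """

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Learnable scalar gate; zero-init means the layer is a no-op at
        # the start of training, preserving the pre-trained model's output.
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, visual: torch.Tensor, grounding: torch.Tensor) -> torch.Tensor:
        # visual:    (B, N_v, dim) tokens from the frozen diffusion backbone
        # grounding: (B, N_g, dim) tokens encoding caption-entity / box pairs
        tokens = self.norm(torch.cat([visual, grounding], dim=1))
        attended, _ = self.attn(tokens, tokens, tokens)
        # Keep only the visual positions and add them back through the gate.
        return visual + torch.tanh(self.gamma) * attended[:, : visual.shape[1]]
```

Because only the gate and the new attention parameters receive gradients, the vast concept knowledge of the frozen backbone is preserved while the model learns to respect the spatial grounding inputs.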

