Color Me Correctly: Bridging Perceptual Color Spaces and Text Embeddings for Improved Diffusion Generation
Sung-Lin Tsai, Bo-Lun Huang, Yu Ting Shen, Cheng Yu Yeo, Chiang Tseng, Bo-Kai Ruan, Wen-Sheng Lien, Hong-Han Shuai

Abstract
Accurate color alignment in text-to-image (T2I) generation is critical for applications such as fashion, product visualization, and interior design, yet current diffusion models struggle with nuanced and compound color terms (e.g., Tiffany blue, lime green, hot pink), often producing images that are misaligned with human intent. Existing approaches rely on cross-attention manipulation, reference images, or fine-tuning, but fail to systematically resolve ambiguous color descriptions. To render colors precisely under prompt ambiguity, we propose a training-free framework that enhances color fidelity by leveraging a large language model (LLM) to disambiguate color-related prompts and by guiding color blending operations directly in the text embedding space. Our method first employs the LLM to resolve ambiguous color terms in the text prompt, and then refines the text embeddings based on the spatial relationships of the resulting color terms in the CIELAB color space. Unlike prior methods, our approach improves color accuracy without requiring additional training or external reference images. Experimental results demonstrate that our framework improves color alignment without compromising image quality, bridging the gap between text semantics and visual generation.
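The idea of weighting color-term embeddings by their proximity in CIELAB can be sketched as follows. The abstract does not specify the blending rule, so the softmax-over-ΔE weighting, the anchor colors, and the `temperature` parameter below are illustrative assumptions, not the authors' implementation; the sRGB-to-CIELAB conversion itself follows the standard D65 pipeline.

```python
import math

def srgb_to_lab(rgb):
    """Convert an sRGB triple (0-255) to CIELAB via linear RGB and XYZ (D65)."""
    def lin(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (lin(c) for c in rgb)
    # sRGB -> XYZ matrix (D65 white point)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    # Normalize by the D65 reference white
    x, y, z = x / 0.95047, y / 1.0, z / 1.08883
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x), f(y), f(z)
    return (116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz))

def blend_weights(target_rgb, anchor_rgbs, temperature=25.0):
    """Softmax over negative CIELAB (Delta-E 1976) distances:
    anchors perceptually closer to the target color get larger weight.
    The blended embedding would then be sum_i w_i * e_i over anchor
    token embeddings e_i (blending rule assumed, not from the paper)."""
    t = srgb_to_lab(target_rgb)
    dists = [math.dist(t, srgb_to_lab(a)) for a in anchor_rgbs]
    exps = [math.exp(-d / temperature) for d in dists]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical usage: resolve "Tiffany blue" against basic color anchors.
tiffany = (10, 186, 181)
anchors = {"blue": (0, 0, 255), "green": (0, 128, 0)}
w = blend_weights(tiffany, list(anchors.values()))
```

In this sketch, the weights would modulate how strongly each basic color's token embedding contributes to the refined prompt embedding; perceptual distance in CIELAB, rather than lexical similarity, decides the mix.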