CogView: Mastering Text-to-Image Generation via Transformers

Ming Ding; Zhuoyi Yang; Wenyi Hong; Wendi Zheng; Chang Zhou; Da Yin; Junyang Lin; Xu Zou; Zhou Shao; Hongxia Yang; Jie Tang

Abstract

Text-to-image generation in the general domain has long been an open problem, requiring both a powerful generative model and cross-modal understanding. We propose CogView, a 4-billion-parameter Transformer with a VQ-VAE tokenizer, to advance this problem. We also demonstrate finetuning strategies for various downstream tasks, e.g., style learning, super-resolution, text-image ranking, and fashion design, as well as methods to stabilize pretraining, e.g., eliminating NaN losses. CogView achieves the state-of-the-art FID on the blurred MS COCO dataset, outperforming previous GAN-based models and DALL-E, a recent similar work.
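The pipeline the abstract describes has two stages: a VQ-VAE first maps image patches to discrete tokens via nearest-neighbor lookup in a learned codebook, and the Transformer then models the concatenated text and image token sequence autoregressively. The following is a minimal sketch of the vector-quantization step only; the latents, codebook, and shapes here are toy values for illustration, not the paper's actual configuration.

```python
import numpy as np

def quantize(z, codebook):
    """Map each continuous latent vector to the index of its nearest
    codebook entry -- the VQ step that turns image patches into
    discrete tokens a Transformer can model."""
    # z: (n, d) latent vectors; codebook: (K, d) learned entries
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (n, K) squared distances
    return d2.argmin(axis=1)

# Toy example: a 2-entry codebook and 3 patch latents
codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
z = np.array([[0.1, -0.1], [0.9, 1.2], [0.4, 0.3]])
tokens = quantize(z, codebook)
# The resulting token ids would be appended to the text tokens,
# giving one sequence for autoregressive pretraining.
```

At generation time the process runs in reverse: the Transformer samples image token ids conditioned on the text, and the VQ-VAE decoder maps them back to pixels.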

Code Repositories

THUDM/CogView (Official, PyTorch)
thudm/cogview2 (PyTorch)
thudm/visualglm-6b (PyTorch)
JunnYu/x-transformers-paddle (JAX)

Benchmarks

Benchmark: Text-to-Image Generation on COCO
Methodology: CogView
Metrics:
  FID: 27.1
  FID-1: 19.4
  FID-2: 13.9
  FID-4: 19.4
  FID-8: 23.6
  Inception score: 18.2
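The FID-k metrics above report FID after blurring both real and generated images with a Gaussian filter of radius k (so plain FID corresponds to k = 0), which is how the "blurred MS COCO" comparison in the abstract is evaluated. A minimal sketch of such a blur on a grayscale image, using a separable convolution; the radius-to-sigma mapping here is an assumed convention, not taken from the paper.

```python
import numpy as np

def gaussian_blur(img, radius):
    """Blur a 2-D grayscale image with a Gaussian kernel of the given
    radius (radius 0 returns the image unchanged)."""
    if radius == 0:
        return img
    sigma = radius / 2.0  # assumption: one common radius-to-sigma convention
    xs = np.arange(-radius, radius + 1)
    kernel = np.exp(-xs**2 / (2 * sigma**2))
    kernel /= kernel.sum()  # normalize so brightness is preserved
    # Separable 2-D convolution: filter rows, then columns, with edge padding
    pad = np.pad(img, radius, mode="edge")
    rows = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="valid"), 1, pad)
    return np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="valid"), 0, rows)
```

In the full metric, both image sets would be blurred this way before extracting Inception features and computing the Fréchet distance.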
