Emu: Generative Pretraining in Multimodality

Quan Sun; Qiying Yu; Yufeng Cui; Fan Zhang; Xiaosong Zhang; Yueze Wang; Hongcheng Gao; Jingjing Liu; Tiejun Huang; Xinlong Wang

Abstract

We present Emu, a Transformer-based multimodal foundation model, which can seamlessly generate images and texts in multimodal context. This omnivore model can take in any single-modality or multimodal data input indiscriminately (e.g., interleaved image, text and video) through a one-model-for-all autoregressive training process. First, visual signals are encoded into embeddings, and together with text tokens form an interleaved input sequence. Emu is then end-to-end trained with a unified objective of classifying the next text token or regressing the next visual embedding in the multimodal sequence. This versatile multimodality empowers the exploration of diverse pretraining data sources at scale, such as videos with interleaved frames and text, webpages with interleaved images and text, as well as web-scale image-text pairs and video-text pairs. Emu can serve as a generalist multimodal interface for both image-to-text and text-to-image tasks, and supports in-context image and text generation. Across a broad range of zero-shot/few-shot tasks including image captioning, visual question answering, video question answering and text-to-image generation, Emu demonstrates superb performance compared to state-of-the-art large multimodal models. Extended capabilities such as multimodal assistants via instruction tuning are also demonstrated with impressive performance.
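
To make the unified objective concrete, below is a minimal PyTorch sketch of a loss that classifies the next text token and regresses the next visual embedding over an interleaved multimodal sequence. All names (unified_loss, lm_head, vis_head, lambda_vis, the boolean modality masks) and the choice of an L2 regression term are illustrative assumptions, not the paper's reference implementation.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of a unified autoregressive objective, assuming a decoder that
# emits one hidden state per position plus per-position modality masks.
# hidden_dim, vocab_size, lm_head, vis_head, and lambda_vis are hypothetical names.

def unified_loss(hidden_states, text_targets, visual_targets, is_text, is_visual,
                 lm_head, vis_head, lambda_vis=1.0):
    """Combine next-text-token classification with next-visual-embedding regression.

    hidden_states:  (batch, seq_len, hidden_dim) decoder outputs
    text_targets:   (batch, seq_len) next-token ids, used only where is_text is True
    visual_targets: (batch, seq_len, vis_dim) next visual embeddings
    is_text / is_visual: boolean masks marking which positions predict which modality
    """
    # Classification loss over text positions.
    logits = lm_head(hidden_states)                      # (B, T, vocab_size)
    text_loss = F.cross_entropy(
        logits[is_text], text_targets[is_text]
    ) if is_text.any() else hidden_states.new_zeros(())

    # Regression loss over visual positions (L2 chosen here as an assumption).
    pred_vis = vis_head(hidden_states)                   # (B, T, vis_dim)
    vis_loss = F.mse_loss(
        pred_vis[is_visual], visual_targets[is_visual]
    ) if is_visual.any() else hidden_states.new_zeros(())

    return text_loss + lambda_vis * vis_loss
```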

Code Repositories

doc-doc/NExT-OE (PyTorch)
baaivision/emu (Official; PyTorch)

Benchmarks

| Benchmark | Methodology | Metrics |
| --- | --- | --- |
| Temporal/Causal QA on NExT-QA | Emu (0-shot) | WUPS: 23.4 |
| Visual Question Answering on MM-Vet | Emu-14B | GPT-4 score: 36.3±0.3; Params: 14B |
| Visual Question Answering on MM-Vet (w/o) | Emu-14B | GPT-4 score: 36.3±0.3 |
| Visual Question Answering on VizWiz | Emu-I * | Accuracy: 38.1 |
| Visual Question Answering on VQA v2 | Emu-I * | Accuracy: 57.5 |
| Visual Question Answering (VQA) on CORE-MM | Emu | Abductive: 36.57; Analogical: 18.19; Deductive: 28.9; Overall score: 28.24; Params: 14B |
