VILA: On Pre-training for Visual Language Models

Ji Lin, Hongxu Yin, Wei Ping, Yao Lu, Pavlo Molchanov, Andrew Tao, Huizi Mao, Jan Kautz, Mohammad Shoeybi, Song Han

Abstract

Visual language models (VLMs) have progressed rapidly with the recent success of large language models. There have been growing efforts on visual instruction tuning to extend the LLM with visual inputs, but these efforts lack an in-depth study of the visual language pre-training process, where the model learns to perform joint modeling on both modalities. In this work, we examine the design options for VLM pre-training by augmenting an LLM towards a VLM through step-by-step controllable comparisons. We introduce three main findings: (1) freezing the LLM during pre-training can achieve decent zero-shot performance but lacks in-context learning capability, which requires unfreezing the LLM; (2) interleaved pre-training data is beneficial, whereas image-text pairs alone are not optimal; (3) re-blending text-only instruction data with image-text data during instruction fine-tuning not only remedies the degradation on text-only tasks but also boosts VLM task accuracy. With this enhanced pre-training recipe we build VILA, a Visual Language model family that consistently outperforms state-of-the-art models such as LLaVA-1.5 across the main benchmarks without bells and whistles. Multi-modal pre-training also unveils appealing properties of VILA, including multi-image reasoning, enhanced in-context learning, and better world knowledge.
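
To make the recipe concrete, below is a minimal PyTorch-style sketch of the three findings. All modules and dataset objects in it (the stand-in LLM, vision encoder, projector, and the dummy corpora) are illustrative placeholder assumptions, not the VILA repository's actual training code.

```python
import torch
import torch.nn as nn
from torch.utils.data import ConcatDataset, TensorDataset

# Placeholder modules standing in for the real LLM, vision tower, and projector.
llm = nn.Linear(4096, 4096)
vision_encoder = nn.Linear(1024, 1024)
projector = nn.Linear(1024, 4096)

# Finding 1: keep the LLM trainable during pre-training. Freezing it preserves
# zero-shot accuracy but hurts in-context learning.
for p in llm.parameters():
    p.requires_grad = True          # unfreeze the LLM
for p in vision_encoder.parameters():
    p.requires_grad = False         # the vision tower is commonly kept frozen
for p in projector.parameters():
    p.requires_grad = True          # the projector is always trained

# Finding 2: pre-train on interleaved image-text documents rather than
# image-text pairs alone (dummy tensors stand in for the real corpora here).
interleaved_docs = TensorDataset(torch.randn(8, 16))
image_text_pairs = TensorDataset(torch.randn(8, 16))
pretrain_data = ConcatDataset([interleaved_docs, image_text_pairs])

# Finding 3: during instruction fine-tuning, re-blend text-only instruction
# data with visual instruction data to recover text-only performance.
visual_sft = TensorDataset(torch.randn(8, 16))
text_only_sft = TensorDataset(torch.randn(8, 16))
sft_data = ConcatDataset([visual_sft, text_only_sft])
```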

Code Repositories

efficient-large-model/vila (official, PyTorch)
nvlabs/vila (official, PyTorch)
mit-han-lab/llm-awq (official, PyTorch)

Benchmarks

Benchmark | Methodology | Metric
visual-question-answering-on-mm-vet | VILA-13B | GPT-4 score: 45.7
zero-shot-video-question-answer-on-video-mme | VILA-1.5 (34B) | Accuracy (%): 61.4
zero-shot-video-question-answer-on-video-mme-1 | VILA-1.5 (34B) | Accuracy (%): 64.1
zeroshot-video-question-answer-on-msvd-qa | VILA1.5-40B | Accuracy: 80.1
