ShareGPT4V: Improving Large Multi-Modal Models with Better Captions

Lin Chen, Jinsong Li, Xiaoyi Dong, Pan Zhang, Conghui He, Jiaqi Wang, Feng Zhao, Dahua Lin


Abstract

In the realm of large multi-modal models (LMMs), efficient modality alignment is crucial yet often constrained by the scarcity of high-quality image-text data. To address this bottleneck, we introduce the ShareGPT4V dataset, a pioneering large-scale resource featuring 1.2 million highly descriptive captions, which surpasses existing datasets in diversity and information content, covering world knowledge, object properties, spatial relationships, and aesthetic evaluations. Specifically, ShareGPT4V originates from a curated set of 100K high-quality captions collected from advanced GPT4-Vision and was expanded to 1.2M with a superb caption model trained on this subset. ShareGPT4V first demonstrates its effectiveness in the Supervised Fine-Tuning (SFT) phase: by substituting an equivalent quantity of detailed captions in existing SFT datasets with a subset of our high-quality captions, it significantly enhances LMMs such as LLaVA-7B, LLaVA-1.5-13B, and Qwen-VL-Chat-7B on the MME and MMBench benchmarks, with respective gains of 222.8/22.0/22.3 and 2.7/1.3/1.5. We further incorporate ShareGPT4V data into both the pre-training and SFT phases, obtaining ShareGPT4V-7B, a superior LMM based on a simple architecture that achieves remarkable performance across a majority of multi-modal benchmarks. This project is available at https://ShareGPT4V.github.io to serve as a pivotal resource for advancing the LMMs community.
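The SFT experiment described above swaps an equal number of detailed-caption examples in an existing SFT dataset for ShareGPT4V captions. A minimal sketch of that substitution step is below; the field names (`task`, `image`, `caption`) and the function itself are illustrative assumptions, not the released dataset's actual schema or tooling.

```python
import random

def substitute_captions(sft_examples, sharegpt4v_captions, seed=0):
    """Replace detailed-caption SFT examples with an equal number of
    ShareGPT4V captions, keeping the dataset size unchanged.
    Schema and field names here are hypothetical."""
    rng = random.Random(seed)
    # Locate the detailed-caption examples eligible for replacement.
    detail_idx = [i for i, ex in enumerate(sft_examples)
                  if ex.get("task") == "detail_caption"]
    # Swap out as many as we have high-quality captions for.
    n = min(len(detail_idx), len(sharegpt4v_captions))
    chosen = rng.sample(detail_idx, n)
    replacements = rng.sample(sharegpt4v_captions, n)
    out = list(sft_examples)
    for i, cap in zip(chosen, replacements):
        out[i] = {"task": "detail_caption",
                  "image": out[i]["image"],
                  "caption": cap}
    return out
```

The key property, per the abstract, is that only caption quality changes: the number and type of training examples stay fixed, isolating the effect of better captions.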

Benchmarks

Benchmark                                     Method           Metrics
visual-instruction-following-on-llava-bench   ShareGPT4V-13B   avg score: 79.9
visual-instruction-following-on-llava-bench   ShareGPT4V-7B    avg score: 72.6
visual-question-answering-on-mm-vet           ShareGPT4V-13B   GPT-4 score: 43.1 (params: 13B)
visual-question-answering-on-mm-vet           ShareGPT4V-7B    GPT-4 score: 37.6 (params: 7B)

