Chat-UniVi: Unified Visual Representation Empowers Large Language Models with Image and Video Understanding

Peng Jin, Ryuichi Takanobu, Wancai Zhang, Xiaochun Cao, Li Yuan

Abstract

Large language models have demonstrated impressive universal capabilities across a wide range of open-ended tasks and have extended their utility to encompass multimodal conversations. However, existing methods encounter challenges in effectively handling both image and video understanding, particularly with limited visual tokens. In this work, we introduce Chat-UniVi, a Unified Vision-language model capable of comprehending and engaging in conversations involving images and videos through a unified visual representation. Specifically, we employ a set of dynamic visual tokens to uniformly represent images and videos. This representation framework empowers the model to efficiently utilize a limited number of visual tokens to simultaneously capture the spatial details necessary for images and the comprehensive temporal relationship required for videos. Moreover, we leverage a multi-scale representation, enabling the model to perceive both high-level semantic concepts and low-level visual details. Notably, Chat-UniVi is trained on a mixed dataset containing both images and videos, allowing direct application to tasks involving both mediums without requiring any modifications. Extensive experimental results demonstrate that Chat-UniVi consistently outperforms even existing methods exclusively designed for either images or videos. Code is available at https://github.com/PKU-YuanGroup/Chat-UniVi.

Code Repositories

pku-yuangroup/video-bench (Mentioned in GitHub)
pku-yuangroup/chat-univi (Official, PyTorch, Mentioned in GitHub)
skyworkai/moh (PyTorch, Mentioned in GitHub)
skyworkai/moe-plus-plus (PyTorch, Mentioned in GitHub)

Benchmarks

Benchmark (Methodology) and Metrics

science-question-answering-on-scienceqa (Chat-UniVi-13B)
  Avg. Accuracy: 90.99
  Grades 1-6: 91.19
  Grades 7-12: 90.64
  Image Context: 88.05
  Language Science: 88.91
  Natural Science: 90.41
  No Context: 90.94
  Social Science: 95.05
  Text Context: 89.64

vcgbench-diverse-on-videoinstruct (Chat-UniVi)
  Consistency: 2.36
  Contextual Understanding: 2.66
  Correctness of Information: 2.29
  Dense Captioning: 1.33
  Detail Orientation: 2.56
  Reasoning: 3.59
  Spatial Understanding: 2.36
  Temporal Understanding: 1.56
  Mean: 2.29

video-based-generative-performance (Chat-UniVi)
  Consistency: 2.81
  Contextual Understanding: 3.46
  Correctness of Information: 2.89
  Detail Orientation: 2.91
  Temporal Understanding: 2.39
  Mean: 2.99

video-based-generative-performance-1 (Chat-UniVi)
  GPT Score: 2.89

video-based-generative-performance-2 (Chat-UniVi)
  GPT Score: 2.81

video-based-generative-performance-3 (Chat-UniVi)
  GPT Score: 3.46

video-based-generative-performance-4 (Chat-UniVi)
  GPT Score: 2.91

video-based-generative-performance-5 (Chat-UniVi)
  GPT Score: 2.39

video-question-answering-on-activitynet-qa (Chat-UniVi-13B)
  Accuracy: 46.4
  Confidence Score: 3.3

zeroshot-video-question-answer-on-activitynet (Chat-UniVi)
  Accuracy: 46.1
  Confidence Score: 3.3

zeroshot-video-question-answer-on-activitynet (Chat-UniVi-13B)
  Accuracy: 46.4
  Confidence Score: 3.6

zeroshot-video-question-answer-on-msrvtt-qa (Chat-UniVi-7B)
  Accuracy: 55.0
  Confidence Score: 3.1

zeroshot-video-question-answer-on-msvd-qa (Chat-UniVi-7B)
  Accuracy: 69.3
  Confidence Score: 3.7

zeroshot-video-question-answer-on-tgif-qa (Chat-UniVi-7B)
  Accuracy: 69.0
  Confidence Score: 3.8
