Yanwei Li; Yuechen Zhang; Chengyao Wang; Zhisheng Zhong; Yixin Chen; Ruihang Chu; Shaoteng Liu; Jiaya Jia

Abstract
In this work, we introduce Mini-Gemini, a simple and effective framework for enhancing multi-modality Vision Language Models (VLMs). Although VLMs have made progress in facilitating basic visual dialog and reasoning, a performance gap remains compared with advanced models such as GPT-4 and Gemini. We attempt to narrow this gap by mining the potential of VLMs, improving both performance and any-to-any workflow from three aspects: high-resolution visual tokens, high-quality data, and VLM-guided generation. To enhance visual tokens, we propose using an additional visual encoder for high-resolution refinement without increasing the visual token count. We further construct a high-quality dataset that promotes precise image comprehension and reasoning-based generation, expanding the operational scope of current VLMs. Overall, Mini-Gemini further mines the potential of VLMs and empowers existing frameworks with image understanding, reasoning, and generation simultaneously. Mini-Gemini supports a series of dense and Mixture-of-Experts (MoE) Large Language Models (LLMs) from 2B to 34B parameters. Experiments show it achieves leading performance on several zero-shot benchmarks, even surpassing some developed private models. Code and models are available at https://github.com/dvlab-research/MiniGemini.
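The high-resolution refinement mentioned above is the most implementation-specific claim in the abstract: detail is added without increasing the number of visual tokens fed to the LLM. Below is a minimal PyTorch sketch of one way this can work, with low-resolution visual tokens attending to features from a separate high-resolution encoder via cross-attention. The module name, feature dimensions, and projection layout are illustrative assumptions, not the exact Mini-Gemini implementation.

```python
import torch
import torch.nn as nn


class PatchInfoMining(nn.Module):
    """Sketch: each low-resolution visual token (query) attends to
    high-resolution encoder features (keys/values), so the token count
    passed to the LLM stays unchanged. Dimensions are assumptions."""

    def __init__(self, dim_lr: int = 1024, dim_hr: int = 1536, dim: int = 1024):
        super().__init__()
        self.q_proj = nn.Linear(dim_lr, dim)
        self.k_proj = nn.Linear(dim_hr, dim)
        self.v_proj = nn.Linear(dim_hr, dim)
        self.out_proj = nn.Linear(dim, dim_lr)
        self.scale = dim ** -0.5

    def forward(self, lr_tokens: torch.Tensor, hr_tokens: torch.Tensor) -> torch.Tensor:
        # lr_tokens: (B, N_lr, dim_lr) from the low-resolution encoder (e.g. a ViT)
        # hr_tokens: (B, N_hr, dim_hr) from the high-resolution encoder
        q = self.q_proj(lr_tokens)                       # (B, N_lr, dim)
        k = self.k_proj(hr_tokens)                       # (B, N_hr, dim)
        v = self.v_proj(hr_tokens)                       # (B, N_hr, dim)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        mined = attn @ v                                 # (B, N_lr, dim)
        # Residual connection keeps the original low-resolution semantics.
        return lr_tokens + self.out_proj(mined)


if __name__ == "__main__":
    miner = PatchInfoMining()
    lr = torch.randn(1, 576, 1024)    # e.g. 24x24 low-resolution tokens
    hr = torch.randn(1, 2304, 1536)   # e.g. 48x48 high-resolution features
    fused = miner(lr, hr)
    print(fused.shape)                # (1, 576, 1024): token count unchanged
```

The key design point this sketch illustrates is that only the low-resolution tokens are sent to the LLM, so the language model's sequence length (and hence its compute cost) does not grow with the input image resolution.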
Code Repositories
dvlab-research/MGM (PyTorch), mentioned in GitHub

dvlab-research/minigemini (official, PyTorch), mentioned in GitHub
Benchmarks
| Benchmark | Method | Metric |
|---|---|---|
| image-classification-on-coloninst-v1-seen | MGM-2B (w/o LoRA, w/ extra data) | Accuracy: 93.24 |
| image-classification-on-coloninst-v1-seen | MGM-2B (w/o LoRA, w/o extra data) | Accuracy: 92.97 |
| image-classification-on-coloninst-v1-unseen | MGM-2B (w/o LoRA, w/ extra data) | Accuracy: 78.69 |
| image-classification-on-coloninst-v1-unseen | MGM-2B (w/o LoRA, w/o extra data) | Accuracy: 78.99 |
| referring-expression-generation-on-coloninst | MGM-2B (w/o LoRA, w/ extra data) | Accuracy: 98.75 |
| referring-expression-generation-on-coloninst | MGM-2B (w/o LoRA, w/o extra data) | Accuracy: 98.17 |
| referring-expression-generation-on-coloninst-1 | MGM-2B (w/o LoRA, w/ extra data) | Accuracy: 74.30 |
| referring-expression-generation-on-coloninst-1 | MGM-2B (w/o LoRA, w/o extra data) | Accuracy: 69.81 |
| visual-question-answering-on-mm-vet | Mini-Gemini | GPT-4 score: 53.0 |
| visual-question-answering-on-mm-vet | Mini-Gemini-HD-BS | GPT-4 score: 60.8 |
| visual-question-answering-on-mm-vet | Mini-Gemini-HD | GPT-4 score: 59.3 |