
Abstract
We present MobileVLM, an efficient multimodal vision-language model (MMVLM) designed for mobile devices. It combines a variety of mobile-oriented architectural designs and techniques, including language models at the 1.4B and 2.7B parameter scales trained from scratch, a multimodal vision model pre-trained in the CLIP fashion, and cross-modal interaction via an efficient projector. We evaluate MobileVLM on several typical vision-language benchmarks, where its performance is on par with that of some much larger models. More importantly, we measure inference speed on a Qualcomm Snapdragon 888 CPU and an NVIDIA Jetson Orin GPU, reaching state-of-the-art rates of 21.5 and 65.3 tokens per second, respectively. Our code will be made available at: https://github.com/Meituan-AutoML/MobileVLM.
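The abstract describes cross-modal interaction via an efficient projector that maps vision-encoder features into the language model's embedding space while reducing the visual token count. The sketch below illustrates that general idea in PyTorch; the class name, feature dimensions (1024 for the CLIP ViT, 2048 for the LLM), and the 4x token downsampling via a stride-2 depthwise convolution are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn


class LightweightProjector(nn.Module):
    """Hedged sketch of an efficient vision-to-LLM projector.

    Maps CLIP ViT patch features to the LLM hidden size and
    downsamples the token grid 2x per side (4x fewer tokens),
    which cuts the LLM's visual sequence length for faster
    mobile inference. Dimensions are illustrative assumptions.
    """

    def __init__(self, vision_dim: int = 1024, llm_dim: int = 2048):
        super().__init__()
        # Per-token MLP: project vision features into LLM embedding space.
        self.mlp = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )
        # Depthwise conv with stride 2: halves each spatial side,
        # so the token count drops by 4x while staying lightweight
        # (groups == channels means one filter per channel).
        self.downsample = nn.Conv2d(
            llm_dim, llm_dim, kernel_size=3, stride=2, padding=1, groups=llm_dim
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, N, vision_dim) patch tokens laid out on a square grid.
        b, n, _ = x.shape
        h = w = int(n ** 0.5)
        x = self.mlp(x)                           # (B, N, llm_dim)
        x = x.transpose(1, 2).reshape(b, -1, h, w)
        x = self.downsample(x)                    # (B, llm_dim, h/2, w/2)
        return x.flatten(2).transpose(1, 2)       # (B, N/4, llm_dim)


# Example: 576 tokens (a 24x24 patch grid) reduced to 144 tokens.
feats = torch.randn(1, 576, 1024)
out = LightweightProjector()(feats)
print(out.shape)  # torch.Size([1, 144, 2048])
```

Keeping the projector to a small MLP plus a depthwise convolution keeps its parameter count negligible next to the LLM, while the 4x token reduction directly shortens the sequence the language model must process at every decoding step.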
Code Repositories
meituan-automl/mobilevlm
Official
PyTorch
Mentioned in GitHub
Benchmarks
| Benchmark | Method | Metric |
|---|---|---|
| image-classification-on-coloninst-v1-seen | MobileVLM-1.7B (w/o LoRA, w/ extra data) | Accuracy: 93.02 |
| image-classification-on-coloninst-v1-seen | MobileVLM-1.7B (w/ LoRA, w/ extra data) | Accuracy: 93.64 |
| image-classification-on-coloninst-v1-unseen | MobileVLM-1.7B (w/o LoRA, w/ extra data) | Accuracy: 78.75 |
| image-classification-on-coloninst-v1-unseen | MobileVLM-1.7B (w/ LoRA, w/ extra data) | Accuracy: 80.44 |
| referring-expression-generation-on-coloninst | MobileVLM-1.7B (w/o LoRA, w/ extra data) | Accuracy: 97.78 |
| referring-expression-generation-on-coloninst | MobileVLM-1.7B (w/ LoRA, w/ extra data) | Accuracy: 97.87 |
| referring-expression-generation-on-coloninst-1 | MobileVLM-1.7B (w/o LoRA, w/ extra data) | Accuracy: 73.14 |
| referring-expression-generation-on-coloninst-1 | MobileVLM-1.7B (w/ LoRA, w/ extra data) | Accuracy: 78.03 |