Abstract
Building models that can be rapidly adapted to novel tasks using only a handful of annotated examples is an open challenge for multimodal machine learning research. We introduce Flamingo, a family of Visual Language Models (VLM) with this ability. We propose key architectural innovations to: (i) bridge powerful pretrained vision-only and language-only models, (ii) handle sequences of arbitrarily interleaved visual and textual data, and (iii) seamlessly ingest images or videos as inputs. Thanks to their flexibility, Flamingo models can be trained on large-scale multimodal web corpora containing arbitrarily interleaved text and images, which is key to endow them with in-context few-shot learning capabilities. We perform a thorough evaluation of our models, exploring and measuring their ability to rapidly adapt to a variety of image and video tasks. These include open-ended tasks such as visual question-answering, where the model is prompted with a question which it has to answer; captioning tasks, which evaluate the ability to describe a scene or an event; and close-ended tasks such as multiple-choice visual question-answering. For tasks lying anywhere on this spectrum, a single Flamingo model can achieve a new state of the art with few-shot learning, simply by prompting the model with task-specific examples. On numerous benchmarks, Flamingo outperforms models fine-tuned on thousands of times more task-specific data.
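The in-context few-shot adaptation described above works by prompting the model with interleaved (image, text) support examples followed by a query. A minimal sketch of how such a prompt might be assembled, assuming placeholder `<image>` and `<EOC>` (end-of-chunk) tokens in the style of the paper's prompt format; the helper name and exact string layout here are illustrative assumptions, not the authors' API:

```python
# Hedged sketch: build an interleaved few-shot VQA prompt for a
# Flamingo-style VLM. Each support example contributes an image
# placeholder, a question, and its ground-truth answer; the query
# leaves the answer for the model to complete.

def build_fewshot_prompt(support_examples, query_question):
    """support_examples: list of (question, answer) pairs, each paired
    with an image that the <image> token stands in for."""
    parts = []
    for question, answer in support_examples:
        parts.append(f"<image>Question: {question} Answer: {answer}<EOC>")
    # Query in the same format, answer left blank for the model to generate.
    parts.append(f"<image>Question: {query_question} Answer:")
    return "".join(parts)

prompt = build_fewshot_prompt(
    [("What animal is this?", "a flamingo"),
     ("What color is the car?", "red")],
    "How many people are in the photo?",
)
print(prompt)
```

In a real pipeline each `<image>` token would be aligned with the corresponding visual features; only the text side of the interleaving is sketched here.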
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| action-recognition-on-rareact | - | mWAP: 60.8 |
| generative-visual-question-answering-on-pmc | Open-Flamingo | BLEU-1: 4.1 |
| meme-classification-on-hateful-memes | Flamingo (few-shot:32) | ROC-AUC: 0.700 |
| meme-classification-on-hateful-memes | Flamingo (fine-tuned) | ROC-AUC: 0.866 |
| temporal-casual-qa-on-next-qa | Flamingo (0-shot) | WUPS: 26.7 |
| temporal-casual-qa-on-next-qa | Flamingo (32-shot) | WUPS: 33.5 |
| video-question-answering-on-situated | Flamingo-9B (4-shot) | Average Accuracy: 42.8 |
| video-question-answering-on-situated | Flamingo-80B (0-shot) | Average Accuracy: 39.7 |
| video-question-answering-on-situated | Flamingo-9B (0-shot) | Average Accuracy: 41.8 |
| video-question-answering-on-situated | Flamingo-80B (4-shot) | Average Accuracy: 42.4 |
| visual-question-answering-on-msrvtt-qa-1 | Flamingo (32-shot) | Accuracy: 0.310 |
| visual-question-answering-on-msrvtt-qa-1 | Flamingo (0-shot) | Accuracy: 0.174 |
| visual-question-answering-on-msrvtt-qa-1 | Flamingo | Accuracy: 0.474 |
| visual-question-answering-on-ok-vqa | Flamingo3B | Accuracy: 41.2 |
| visual-question-answering-on-ok-vqa | Flamingo9B | Accuracy: 44.7 |
| visual-question-answering-on-ok-vqa | Flamingo80B | Accuracy: 50.6 |
| visual-question-answering-on-vqa-v2-test-dev | Flamingo 80B | Accuracy: 56.3 |
| visual-question-answering-on-vqa-v2-test-dev | Flamingo 3B | Accuracy: 49.2 |
| visual-question-answering-on-vqa-v2-test-dev | Flamingo 9B | Accuracy: 51.8 |
| visual-question-answering-vqa-on-pmc-vqa | Open-Flamingo | Accuracy: 26.4 |
| zero-shot-cross-modal-retrieval-on-coco-2014 | Flamingo | Image-to-text R@1: 65.9 Image-to-text R@10: 92.9 Image-to-text R@5: 87.3 Text-to-image R@1: 48.0 Text-to-image R@10: 82.1 Text-to-image R@5: 73.3 |
| zero-shot-cross-modal-retrieval-on-flickr30k | Flamingo | Image-to-text R@1: 89.3 Image-to-text R@10: 99.7 Image-to-text R@5: 98.8 Text-to-image R@1: 79.5 Text-to-image R@10: 97.9 Text-to-image R@5: 95.3 |
| zero-shot-video-question-answer-on-star | Flamingo-9B | Accuracy: 41.8 |