HyperAI


ELEVATER: A Benchmark and Toolkit for Evaluating Language-Augmented Visual Models

Chunyuan Li; Haotian Liu; Liunian Harold Li; Pengchuan Zhang; Jyoti Aneja; Jianwei Yang; Ping Jin; Houdong Hu; Zicheng Liu; Yong Jae Lee; Jianfeng Gao

Abstract

Learning visual representations from natural language supervision has recently shown great promise in a number of pioneering works. In general, these language-augmented visual models demonstrate strong transferability to a variety of datasets and tasks. However, it remains challenging to evaluate the transferability of these models due to the lack of easy-to-use evaluation toolkits and public benchmarks. To tackle this, we build ELEVATER (Evaluation of Language-augmented Visual Task-level Transfer), the first benchmark and toolkit for evaluating (pre-trained) language-augmented visual models. ELEVATER is composed of three components. (i) Datasets. As downstream evaluation suites, it consists of 20 image classification datasets and 35 object detection datasets, each of which is augmented with external knowledge. (ii) Toolkit. An automatic hyper-parameter tuning toolkit is developed to facilitate model evaluation on downstream tasks. (iii) Metrics. A variety of evaluation metrics are used to measure sample-efficiency (zero-shot and few-shot) and parameter-efficiency (linear probing and full model fine-tuning). ELEVATER is a platform for Computer Vision in the Wild (CVinW), and is publicly released at https://computer-vision-in-the-wild.github.io/ELEVATER/
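The zero-shot setting mentioned in the metrics can be sketched as scoring an image against text prompts for each class name and picking the best match, with no task-specific training. The sketch below uses hypothetical stand-in encoders (`encode_image`, `encode_text`) in place of a real pre-trained model such as CLIP; only the scoring logic is illustrated.

```python
import math
import random

random.seed(0)
EMBED_DIM = 8

# Hypothetical stand-ins for the image and text encoders of a pre-trained
# language-augmented visual model (e.g. CLIP). Real encoders map pixels and
# tokens into a shared embedding space; here we just return random vectors.
def encode_image(image):
    return [random.gauss(0, 1) for _ in range(EMBED_DIM)]

def encode_text(prompt):
    return [random.gauss(0, 1) for _ in range(EMBED_DIM)]

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def zero_shot_classify(image, class_names, template="a photo of a {}"):
    """Pick the class whose prompt embedding best matches the image embedding.
    No labeled examples from the target dataset are used (zero-shot)."""
    img_emb = encode_image(image)
    scores = [cosine(img_emb, encode_text(template.format(c)))
              for c in class_names]
    return max(range(len(class_names)), key=scores.__getitem__)

pred = zero_shot_classify(object(), ["cat", "dog", "car"])
print(pred)  # an index in {0, 1, 2}
```

Few-shot, linear-probing, and full fine-tuning evaluations differ only in how much of the model is adapted with how many labeled samples; the toolkit automates hyper-parameter tuning across these regimes.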

Code Repositories

- sincerass/mvlpt (PyTorch)
- microsoft/unicl (PyTorch)
- Computer-Vision-in-the-Wild/Elevater_Toolkit_IC (Official; PyTorch)
- microsoft/esvit (PyTorch)
- microsoft/klite (PyTorch)
- microsoft/GLIP (PyTorch)
- rsCPSyEu/ovd_cod (PyTorch)
- eric-ai-lab/pevit (PyTorch)

Benchmarks

Benchmark                                      Methodology      Metric
object-detection-on-elevater                   GLIP-T           AP: 62.6
object-detection-on-odinw-full-shot-35-tasks   GLIP-T           AP: 62.6
zero-shot-image-classification-on-icinw        CLIP (ViT-B/32)  Average Score: 56.64
zero-shot-image-classification-on-odinw        GLIP (Tiny A)    Average Score: 11.4
zero-shot-object-detection-on-odinw            GLIP (Tiny A)    Average Score: 11.4

