
Revisiting Classifier: Transferring Vision-Language Models for Video Recognition

Wenhao Wu, Zhun Sun, Wanli Ouyang

Abstract

Transferring knowledge from task-agnostic pre-trained deep models to downstream tasks is an important topic in computer vision research. Along with the growth of computational capacity, we now have open-source vision-language pre-trained models that are large in both model architecture and amount of training data. In this study, we focus on transferring knowledge for video classification tasks. Conventional methods randomly initialize the linear classifier head for visual classification, leaving the use of the text encoder for downstream visual recognition tasks unexplored. In this paper, we revise the role of the linear classifier and replace it with knowledge from the pre-trained model: we utilize the well-pretrained language model to generate good semantic targets for efficient transfer learning. The empirical study shows that our method improves both the performance and the training speed of video classification, with a negligible change in the model. Our simple yet effective tuning paradigm achieves state-of-the-art performance and efficient training on various video recognition scenarios, i.e., zero-shot, few-shot, and general recognition. In particular, our paradigm achieves a state-of-the-art accuracy of 87.8% on Kinetics-400, and also surpasses previous methods by 20-50% absolute top-1 accuracy under zero-shot and few-shot settings on five popular video datasets. Code and models can be found at https://github.com/whwu95/Text4Vis .
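The core idea, replacing the randomly initialized classifier head with class-name embeddings produced by the frozen text encoder, can be sketched roughly as follows. This is a minimal illustration rather than the authors' implementation: the CLIP checkpoint, the prompt template, and the helper names (build_text_classifier, classify) are assumptions made for the sketch; the official code is in whwu95/Text4Vis.

```python
# Minimal sketch (assumptions: OpenAI's `clip` package is installed; the prompt
# template, checkpoint name, and helper names are illustrative, not the paper's
# exact pipeline; see whwu95/Text4Vis for the official implementation).
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"


@torch.no_grad()
def build_text_classifier(class_names):
    """Encode class names with the frozen CLIP text encoder and use the
    L2-normalized embeddings as the classification-head weights, in place
    of a randomly initialized linear layer."""
    model, _ = clip.load("ViT-B/32", device=device)
    prompts = [f"a video of a person {c}" for c in class_names]  # hypothetical prompt template
    tokens = clip.tokenize(prompts).to(device)
    weights = model.encode_text(tokens).float()
    return weights / weights.norm(dim=-1, keepdim=True)  # (num_classes, embed_dim)


def classify(video_feats, text_classifier, logit_scale=100.0):
    """video_feats: (batch, embed_dim) pooled embeddings from a matching visual
    encoder. Logits are scaled cosine similarities to the text-derived classifier."""
    video_feats = video_feats / video_feats.norm(dim=-1, keepdim=True)
    return logit_scale * video_feats @ text_classifier.t()
```

Because the text-derived classifier is fixed, only the visual branch needs to be tuned, which is consistent with the efficiency claims in the abstract.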

Code Repositories

whwu95/Cap4Video (PyTorch)
whwu95/text4vis (Official, PyTorch)
whwu95/BIKE (PyTorch)
whwu95/GPT4Vis
whwu95/ATM (PyTorch)

Benchmarks

Benchmark | Methodology | Metrics
action-classification-on-kinetics-400 | Text4Vis (CLIP ViT-L/14) | Acc@1: 87.8, Acc@5: 97.6
action-recognition-in-videos-on-activitynet | Text4Vis (w/ ViT-L) | mAP: 96.9
action-recognition-in-videos-on-ucf101 | Text4Vis | 3-fold Accuracy: 98.2
zero-shot-action-recognition-on-activitynet | Text4Vis | Top-1 Accuracy: 84.6
zero-shot-action-recognition-on-hmdb51 | Text4Vis | Top-1 Accuracy: 58.4
zero-shot-action-recognition-on-kinetics | Text4Vis | Top-1 Accuracy: 68.9, Top-5 Accuracy: 90.3
zero-shot-action-recognition-on-ucf101 | Text4Vis | Top-1 Accuracy: 85.8
