Improving Visual Prompt Tuning for Self-supervised Vision Transformers

Seungryong Yoo, Eunji Kim, Dahuin Jung, Jungbeom Lee, Sungroh Yoon


Abstract

Visual Prompt Tuning (VPT) is an effective tuning method for adapting pretrained Vision Transformers (ViTs) to downstream tasks. It leverages extra learnable tokens, known as prompts, which steer the frozen pretrained ViTs. Although VPT has demonstrated its applicability with supervised vision transformers, it often underperforms with self-supervised ones. Through empirical observations, we deduce that the effectiveness of VPT hinges largely on the ViT blocks with which the prompt tokens interact. Specifically, VPT shows improved performance on image classification tasks for MAE and MoCo v3 when the prompt tokens are inserted into later blocks rather than the first block. These observations suggest that there exists an optimal location of blocks for the insertion of prompt tokens. Unfortunately, identifying the optimal blocks for prompts within each self-supervised ViT for diverse future scenarios is a costly process. To mitigate this problem, we propose a simple yet effective method that learns a gate for each ViT block to adjust its intervention into the prompt tokens. With our method, prompt tokens are selectively influenced by blocks that require steering for task adaptation. Our method outperforms VPT variants in FGVC and VTAB image classification and ADE20K semantic segmentation. The code is available at https://github.com/ryongithub/GatedPromptTuning.
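To make the gating idea concrete, below is a minimal PyTorch sketch of per-block gated prompt tuning. All names (e.g. GatedPromptViT) and the exact gating formula are illustrative assumptions, not the authors' implementation; here a learnable scalar gate per frozen ViT block interpolates between the block's update to the prompt tokens and the unchanged prompts, so a block with a near-zero gate effectively opts out of steering. See the linked repository for the official code.

```python
import torch
import torch.nn as nn

class GatedPromptViT(nn.Module):
    """Illustrative sketch (hypothetical names): per-block gates on prompt tokens.

    Each frozen transformer block i has a learnable gate g_i = sigmoid(logit_i).
    Patch/CLS tokens flow through every block as usual; the prompt tokens are
    updated by block i only to the degree that g_i allows.
    """

    def __init__(self, blocks, embed_dim=768, num_prompts=10):
        super().__init__()
        self.blocks = blocks  # pretrained ViT blocks, kept frozen
        for param in self.blocks.parameters():
            param.requires_grad = False
        # learnable prompt tokens, shared across the batch
        self.prompts = nn.Parameter(torch.zeros(1, num_prompts, embed_dim))
        nn.init.trunc_normal_(self.prompts, std=0.02)
        # one learnable gate logit per block
        self.gate_logits = nn.Parameter(torch.zeros(len(blocks)))
        self.num_prompts = num_prompts

    def forward(self, x):
        # x: (B, N, D) patch (+ CLS) token embeddings
        batch = x.size(0)
        p = self.prompts.expand(batch, -1, -1)
        for blk, logit in zip(self.blocks, self.gate_logits):
            gate = torch.sigmoid(logit)
            out = blk(torch.cat([p, x], dim=1))
            p_new, x = out[:, :self.num_prompts], out[:, self.num_prompts:]
            # gate -> 1: this block steers the prompts; gate -> 0: prompts pass through
            p = gate * p_new + (1.0 - gate) * p
        return x, p

# Toy usage: generic encoder layers stand in for pretrained ViT-B/16 blocks.
blocks = nn.ModuleList(
    nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True)
    for _ in range(12)
)
model = GatedPromptViT(blocks)
tokens, prompts = model(torch.randn(2, 197, 768))  # 1 CLS + 196 patch tokens
print(tokens.shape, prompts.shape)  # (2, 197, 768), (2, 10, 768)
```

In this sketch only the prompts and the gate logits are trainable (a task head would be added on top), which preserves VPT's parameter efficiency while letting the model learn which blocks should intervene on the prompts instead of fixing the insertion depth by hand.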

Code Repositories

https://github.com/ryongithub/GatedPromptTuning (official implementation)

Benchmarks

Benchmark                                     | Methodology                                          | Mean Accuracy
----------------------------------------------|------------------------------------------------------|--------------
visual-prompt-tuning-on-fgvc                  | GateVPT (ViT-B/16, MAE pretrained, ImageNet-1K)      | 73.39
visual-prompt-tuning-on-fgvc                  | GateVPT (ViT-B/16, MoCo v3 pretrained, ImageNet-1K)  | 83.00
visual-prompt-tuning-on-vtab-1k-natural-7     | GateVPT (ViT-B/16, MoCo v3 pretrained, ImageNet-1K)  | 74.84
visual-prompt-tuning-on-vtab-1k-natural-7     | GateVPT (ViT-B/16, MAE pretrained, ImageNet-1K)      | 47.61
visual-prompt-tuning-on-vtab-1k-specialized-4 | GateVPT (ViT-B/16, MoCo v3 pretrained, ImageNet-1K)  | 83.38
visual-prompt-tuning-on-vtab-1k-specialized-4 | GateVPT (ViT-B/16, MAE pretrained, ImageNet-1K)      | 76.86
visual-prompt-tuning-on-vtab-1k-structured-8  | GateVPT (ViT-B/16, MoCo v3 pretrained, ImageNet-1K)  | 49.10
visual-prompt-tuning-on-vtab-1k-structured-8  | GateVPT (ViT-B/16, MAE pretrained, ImageNet-1K)      | 36.80
