HPT++: Hierarchically Prompting Vision-Language Models with Multi-Granularity Knowledge Generation and Improved Structure Modeling

Yubin Wang, Xinyang Jiang, De Cheng, Wenli Sun, Dongsheng Li, Cairong Zhao

Abstract

Prompt learning has become a prevalent strategy for adapting vision-language foundation models (VLMs) such as CLIP to downstream tasks. With the emergence of large language models (LLMs), recent studies have explored the potential of using category-related descriptions to enhance prompt effectiveness. However, conventional descriptions lack the explicit structured information needed to represent the interconnections among key elements, such as entities and attributes, in relation to a particular category. Since existing prompt tuning methods give little consideration to managing structured knowledge, this paper advocates leveraging LLMs to construct a graph for each description so as to prioritize such structured knowledge. Consequently, we propose a novel approach called Hierarchical Prompt Tuning (HPT), which enables simultaneous modeling of both structured and conventional linguistic knowledge. Specifically, we introduce a relationship-guided attention module to capture pair-wise associations among entities and attributes for low-level prompt learning. In addition, by incorporating high-level and global-level prompts that model overall semantics, the proposed hierarchical structure forges cross-level interlinks and empowers the model to handle more complex and long-term relationships. Finally, by enhancing multi-granularity knowledge generation, redesigning the relationship-driven attention re-weighting module, and incorporating consistency constraints on the hierarchical text encoder, we propose HPT++, which further improves the performance of HPT. Our experiments span a wide range of evaluation settings, including base-to-new generalization, cross-dataset evaluation, and domain generalization. Extensive results and ablation studies demonstrate the effectiveness of our methods, which consistently outperform existing state-of-the-art methods.
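As a rough illustration of the low-level stage, the relationship-guided attention described above can be viewed as self-attention whose scores are biased by a pair-wise relationship matrix derived from the LLM-built graph. The sketch below is a hypothetical, framework-agnostic NumPy rendering, not the paper's implementation: the function name, the additive-bias form, and the single-head setup are all assumptions for illustration.

```python
import numpy as np

def relationship_guided_attention(tokens, rel_matrix):
    """Sketch of attention re-weighting with a pair-wise relationship bias.

    tokens:     (n, d) embeddings of entity/attribute tokens
    rel_matrix: (n, n) relationship strengths taken from the graph built
                per description (hypothetical input format)
    """
    n, d = tokens.shape
    # Standard scaled dot-product similarities between all token pairs.
    scores = tokens @ tokens.T / np.sqrt(d)
    # Bias the scores with the graph-derived relationships (assumed additive).
    scores = scores + rel_matrix
    # Numerically stable row-wise softmax.
    scores = scores - scores.max(axis=-1, keepdims=True)
    attn = np.exp(scores)
    attn = attn / attn.sum(axis=-1, keepdims=True)
    # Aggregate token features under the relationship-aware weights.
    return attn @ tokens
```

With a zero `rel_matrix` this reduces to plain self-attention; stronger graph edges shift attention mass toward related entity-attribute pairs.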

Code Repositories

vill-lab/2024-aaai-hpt (PyTorch)
ThomasWangY/2024-AAAI-HPT (PyTorch)

Benchmarks

Benchmark | Methodology | Metric
prompt-engineering-on-caltech-101 | HPT++ | Harmonic mean: 96.96
prompt-engineering-on-dtd | HPT++ | Harmonic mean: 74.23
prompt-engineering-on-eurosat | HPT++ | Harmonic mean: 87.36
prompt-engineering-on-fgvc-aircraft | HPT++ | Harmonic mean: 41.33
prompt-engineering-on-food-101 | HPT++ | Harmonic mean: 91.09
prompt-engineering-on-imagenet | HPT++ | Harmonic mean: 74.24
prompt-engineering-on-imagenet-a | HPT++ | Top-1 accuracy %: 51.18
prompt-engineering-on-imagenet-r | HPT++ | Top-1 accuracy %: 77.52
prompt-engineering-on-imagenet-s | HPT++ | Top-1 accuracy %: 49.28
prompt-engineering-on-imagenet-v2 | HPT++ | Top-1 accuracy %: 65.31
prompt-engineering-on-oxford-102-flower | HPT++ | Harmonic mean: 85.85
prompt-engineering-on-oxford-iiit-pet-dataset | HPT++ | Harmonic mean: 96.91
prompt-engineering-on-stanford-cars-1 | HPT++ | Harmonic mean: 75.59
prompt-engineering-on-sun397 | HPT++ | Harmonic mean: 81.11
prompt-engineering-on-ucf101 | HPT++ | Harmonic mean: 83.81
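The "Harmonic mean" rows follow the standard base-to-new generalization protocol: accuracy is measured separately on base (seen) classes and new (unseen) classes, and the harmonic mean of the two is reported. A minimal helper (the function name is ours; the per-dataset base/new accuracies behind the scores above are not listed on this page):

```python
def harmonic_mean(base_acc, new_acc):
    """Harmonic mean of base-class and new-class accuracy (both in %)."""
    return 2 * base_acc * new_acc / (base_acc + new_acc)
```

The harmonic mean penalizes imbalance: a method that excels on base classes but collapses on new classes scores low, so it rewards genuine generalization.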
