HyperAI
Parameter-Efficient Transfer Learning for Remote Sensing Image-Text Retrieval

Yuan Yuan, Yang Zhan, Zhitong Xiong


Abstract

Vision-and-language pre-training (VLP) models have surged in popularity recently. Fine-tuning them on specific datasets yields notable performance improvements across various tasks. However, full fine-tuning of VLP models not only consumes substantial computational resources but also carries a considerable environmental cost. Moreover, as remote sensing (RS) data is constantly being updated, full fine-tuning may be impractical for real-world applications. To address this issue, we investigate parameter-efficient transfer learning (PETL) methods to effectively and efficiently transfer vision-language knowledge from the natural domain to the RS domain on the image-text retrieval task. To this end, we make the following contributions. 1) We construct a novel and sophisticated PETL framework for the RS image-text retrieval (RSITR) task, comprising the pretrained CLIP model, a multimodal remote sensing adapter, and a hybrid multi-modal contrastive (HMMC) learning objective. 2) To deal with the high intra-modal similarity in RS data, we design a simple yet effective HMMC loss. 3) We provide comprehensive empirical studies of PETL-based RS image-text retrieval, demonstrating that the proposed method is promising and holds great potential for practical applications. 4) We benchmark extensive state-of-the-art PETL methods on the RSITR task. Our proposed model contains only 0.16M training parameters, a 98.9% parameter reduction compared to full fine-tuning, resulting in substantial savings in training costs. Its retrieval performance exceeds traditional methods by 7-13% and matches or surpasses full fine-tuning. This work provides new ideas and useful insights for RS vision-language tasks.
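To make the training objective concrete, the sketch below shows a generic hybrid contrastive loss: a standard symmetric image-text InfoNCE term plus an intra-modal term intended to push apart highly similar RS samples. This is a minimal illustration in NumPy, not the paper's exact HMMC formulation — the weighting `alpha`, the temperature `tau`, and the use of an augmented view for the intra-modal term are all illustrative assumptions.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Normalize rows to unit length so dot products are cosine similarities."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def info_nce(sim, tau=0.07):
    """Softmax cross-entropy where matching pairs sit on the diagonal of `sim`."""
    logits = sim / tau
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

def hybrid_contrastive_loss(img, txt, img_aug=None, tau=0.07, alpha=0.5):
    """Hedged sketch of a hybrid multi-modal contrastive objective:
    symmetric inter-modal InfoNCE plus an optional intra-modal term
    (assumed here to contrast each image with an augmented view of itself).
    `alpha` and this intra-modal design are assumptions for illustration."""
    img, txt = l2_normalize(img), l2_normalize(txt)
    inter = 0.5 * (info_nce(img @ txt.T, tau) + info_nce(txt @ img.T, tau))
    if img_aug is None:
        return inter
    intra = info_nce(img @ l2_normalize(img_aug).T, tau)
    return inter + alpha * intra
```

With well-aligned image-text pairs the diagonal similarities dominate and the loss is small; with unrelated embeddings it approaches log(batch size), which is why a sharper intra-modal separation helps when RS images within a batch look alike.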

Code Repositories

ZhanYang-nwpu/PE-RSITR (Official, PyTorch)

Benchmarks

Benchmark: cross-modal-retrieval-on-rsicd
Methodology: PE-RSITR (MRS-Adapter)
Metrics:
  Image-to-text R@1: 14.13%
  Text-to-image R@1: 11.63%
  Mean Recall: 31.12%

Benchmark: cross-modal-retrieval-on-rsitmd
Methodology: PE-RSITR (MRS-Adapter)
Metrics:
  Image-to-text R@1: 23.67%
  Text-to-image R@1: 20.10%
  Mean Recall: 44.47%
