Empowering Source-Free Domain Adaptation via MLLM-Guided Reliability-Based Curriculum Learning
Dongjie Chen, Kartik Patwari, Zhengfeng Lai, Xiaoguang Zhu, Sen-ching Cheung, Chen-Nee Chuah

Abstract
Source-Free Domain Adaptation (SFDA) aims to adapt a pre-trained source model to a target domain using only unlabeled target data. Current SFDA methods face challenges in effectively leveraging pre-trained knowledge and exploiting target domain data. Multimodal Large Language Models (MLLMs) offer remarkable capabilities in understanding visual and textual information, but applying them to SFDA poses challenges such as instruction-following failures, intensive computational demands, and difficulty in measuring performance prior to adaptation. To alleviate these issues, we propose $\textbf{Reliability-based Curriculum Learning (RCL)}$, a novel framework that integrates multiple MLLMs for knowledge exploitation via pseudo-labeling in SFDA. Our framework incorporates Reliable Knowledge Transfer, Self-correcting and MLLM-guided Knowledge Expansion, and Multi-hot Masking Refinement to progressively exploit unlabeled data in the target domain. RCL achieves state-of-the-art (SOTA) performance on multiple SFDA benchmarks, e.g., $\textbf{+9.4\%}$ on DomainNet, demonstrating its effectiveness in enhancing adaptability and robustness without requiring access to source data. Our code is available at: https://github.com/Dong-Jie-Chen/RCL.
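As a concrete illustration of the reliability-based curriculum described above, the sketch below partitions unlabeled target samples into reliability tiers by how strongly the pseudo-labels from several MLLMs agree, so that training can start from the most reliable tier and progressively expand. This is a minimal sketch under assumed details: the function `partition_by_consensus`, the three-tier split, and the majority-vote rule are hypothetical illustrations inferred from the abstract, not the authors' implementation.

```python
from collections import Counter

def partition_by_consensus(mllm_labels):
    """Split target samples into reliability tiers by MLLM agreement.

    mllm_labels: dict mapping sample_id -> list of pseudo-labels,
                 one per MLLM (e.g., 3 MLLMs -> 3 labels per sample).
    Returns (reliable, less_reliable, unreliable) lists of
    (sample_id, majority_label) pairs. The three-tier split and the
    vote thresholds are hypothetical, not the paper's exact rule.
    """
    reliable, less_reliable, unreliable = [], [], []
    for sid, labels in mllm_labels.items():
        label, votes = Counter(labels).most_common(1)[0]
        if votes == len(labels):          # all MLLMs agree: reliable
            reliable.append((sid, label))
        elif votes > len(labels) // 2:    # strict majority: less reliable
            less_reliable.append((sid, label))
        else:                             # no consensus: unreliable
            unreliable.append((sid, label))
    return reliable, less_reliable, unreliable

# Curriculum: train on the reliable tier first, then progressively
# expand to less reliable samples as the target model improves.
labels = {
    "img_001": ["dog", "dog", "dog"],    # unanimous -> reliable
    "img_002": ["cat", "cat", "tiger"],  # majority  -> less reliable
    "img_003": ["car", "bus", "truck"],  # split     -> unreliable
}
for name, tier in zip(["reliable", "less_reliable", "unreliable"],
                      partition_by_consensus(labels)):
    print(name, tier)
```

In the paper's terms, the unanimous tier would plausibly drive Reliable Knowledge Transfer, while the lower tiers would be revisited during Knowledge Expansion and Multi-hot Masking Refinement.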
Code Repositories
https://github.com/Dong-Jie-Chen/RCL
Benchmarks
| Benchmark | Method | Accuracy (%) |
|---|---|---|
| Domain Adaptation on Office-Home | RCL | 90.0 |
| Domain Adaptation on VisDA-2017 | RCL | 93.2 |
| Source-Free Domain Adaptation on VisDA-2017 | RCL | 93.2 |
| Unsupervised Domain Adaptation on Office-Home | RCL | 90.0 |
| Unsupervised Domain Adaptation on VisDA-2017 | RCL | 93.2 |