Domain Generalization using Pretrained Models without Fine-tuning

Ziyue Li, Kan Ren, Xinyang Jiang, Bo Li, Haipeng Zhang, Dongsheng Li

Abstract

Fine-tuning pretrained models is a common practice in domain generalization (DG) tasks. However, fine-tuning is usually computationally expensive due to the ever-growing size of pretrained models. More importantly, it may cause over-fitting on the source domains and compromise generalization ability, as shown in recent works. Pretrained models generally possess some level of generalization ability and can achieve decent performance on specific domains and samples. However, their generalization performance can vary significantly across test domains and even individual samples, which makes it challenging to best leverage pretrained models in DG tasks. In this paper, we propose a novel domain generalization paradigm to better leverage various pretrained models, named specialized ensemble learning for domain generalization (SEDGE). It first trains a linear label space adapter upon fixed pretrained models, which transforms the outputs of the pretrained models into the label space of the target domain. Then, an ensemble network aware of model specialty is proposed to dynamically dispatch appropriate pretrained models to predict each test sample. Experimental studies on several benchmarks show that SEDGE achieves significant performance improvements compared to strong baselines, including the state-of-the-art method in DG tasks, and reduces trainable parameters by ~99% and training time by ~99.5%.
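
The two trainable components described above, the linear label space adapter and the specialty-aware ensemble, can be sketched roughly as follows. This is a minimal illustration assuming PyTorch and ImageNet-style pretrained classifiers; the class names, gating design, and dimensions (`LabelSpaceAdapter`, `SpecialtyEnsemble`, `src_dim`, the logits-based gate) are illustrative assumptions rather than the authors' exact architecture.

```python
import torch
import torch.nn as nn


class LabelSpaceAdapter(nn.Module):
    """Linear map from a frozen pretrained model's output space to the target label space."""

    def __init__(self, pretrained: nn.Module, src_dim: int, num_target_classes: int):
        super().__init__()
        self.pretrained = pretrained.eval()
        for p in self.pretrained.parameters():
            p.requires_grad_(False)          # the pretrained model stays fixed
        self.adapter = nn.Linear(src_dim, num_target_classes)  # the only trainable part

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            src_logits = self.pretrained(x)  # logits in the pretrained label space
        return self.adapter(src_logits)      # logits in the target label space


class SpecialtyEnsemble(nn.Module):
    """Per-sample gating over several adapted models (hypothetical gating design)."""

    def __init__(self, adapted_models, num_target_classes: int):
        super().__init__()
        self.models = nn.ModuleList(adapted_models)
        # A small gate scores how much to trust each model for a given sample,
        # here conditioned on the concatenated per-model predictions.
        self.gate = nn.Sequential(
            nn.Linear(len(adapted_models) * num_target_classes, len(adapted_models)),
            nn.Softmax(dim=-1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        preds = torch.stack([m(x) for m in self.models], dim=1)  # (B, M, C)
        weights = self.gate(preds.flatten(start_dim=1))          # (B, M)
        return (weights.unsqueeze(-1) * preds).sum(dim=1)        # (B, C)


# Illustrative usage: two ImageNet classifiers adapted to a 65-class target task.
if __name__ == "__main__":
    import torchvision.models as tvm

    backbones = [tvm.resnet50(weights="IMAGENET1K_V2"), tvm.resnet18(weights="IMAGENET1K_V1")]
    adapted = [LabelSpaceAdapter(b, src_dim=1000, num_target_classes=65) for b in backbones]
    ensemble = SpecialtyEnsemble(adapted, num_target_classes=65)
    logits = ensemble(torch.randn(4, 3, 224, 224))               # shape (4, 65)
```

In this sketch only the adapter and the gate carry trainable parameters, which is consistent with the abstract's reported ~99% reduction in trainable parameters relative to fine-tuning the full pretrained backbones.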

Benchmarks

| Benchmark | Methodology | Metrics |
| --- | --- | --- |
| domain-generalization-on-domainnet | SEDGE+ | Average Accuracy: 54.7 |
| domain-generalization-on-domainnet | SEDGE | Average Accuracy: 46.3 |
| domain-generalization-on-office-home | SEDGE | Average Accuracy: 79.9 |
| domain-generalization-on-office-home | SEDGE+ | Average Accuracy: 80.7 |
| domain-generalization-on-pacs-2 | SEDGE+ | Average Accuracy: 96.1 |
| domain-generalization-on-pacs-2 | SEDGE | Average Accuracy: 84.1 |
| domain-generalization-on-terraincognita | SEDGE+ | Average Accuracy: 56.8 |
| domain-generalization-on-terraincognita | SEDGE | Average Accuracy: 56.8 |
| domain-generalization-on-vlcs | SEDGE+ | Average Accuracy: 82.2 |
| domain-generalization-on-vlcs | SEDGE | Average Accuracy: 79.8 |
