Domain Generalization Using Large Pretrained Models with Mixture-of-Adapters
Gyuseong Lee Wooseok Jang Jinhyeon Kim Jaewoo Jung Seungryong Kim

Abstract
Learning robust vision models that perform well in out-of-distribution (OOD) situations is an important task for model deployment in real-world settings. Despite extensive research in this field, many proposed methods have only shown minor performance improvements compared to the simplest empirical risk minimization (ERM) approach, which was evaluated on a benchmark with a limited hyperparameter search space. Our focus in this study is on leveraging the knowledge of large pretrained models to improve handling of OOD scenarios and tackle domain generalization problems. However, prior research has revealed that naively fine-tuning a large pretrained model can impair OOD robustness. Thus, we employ parameter-efficient fine-tuning (PEFT) techniques to effectively preserve OOD robustness while working with large models. Our extensive experiments and analysis confirm that the most effective approaches involve ensembling diverse models and increasing the scale of pretraining. As a result, we achieve state-of-the-art performance in domain generalization tasks. Our code and project page are available at: https://cvlab-kaist.github.io/MoA
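To illustrate the general idea of combining parameter-efficient fine-tuning with an adapter mixture, below is a minimal, hypothetical sketch of a mixture-of-adapters layer: several LoRA-style low-rank adapters attached to a frozen pretrained linear layer and blended by a learned router. The module and argument names (MoALinear, num_experts, rank) are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch only: a frozen pretrained linear layer augmented with a
# mixture of LoRA-style low-rank adapters, combined by a learned router.
# Names and hyperparameters here are assumptions, not the authors' code.
import torch
import torch.nn as nn


class MoALinear(nn.Module):
    """Frozen pretrained linear layer plus a mixture of low-rank adapters."""

    def __init__(self, base: nn.Linear, num_experts: int = 4, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # keep pretrained weights frozen (PEFT)
            p.requires_grad = False

        in_f, out_f = base.in_features, base.out_features
        # Each expert is a LoRA-style low-rank update B_i @ A_i
        self.A = nn.ParameterList(
            [nn.Parameter(torch.randn(rank, in_f) * 0.01) for _ in range(num_experts)]
        )
        self.B = nn.ParameterList(
            [nn.Parameter(torch.zeros(out_f, rank)) for _ in range(num_experts)]
        )
        # Router produces per-token mixing weights over the adapter experts
        self.router = nn.Linear(in_f, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.base(x)                                # frozen pretrained path
        gates = torch.softmax(self.router(x), dim=-1)   # (..., num_experts)
        for i, (A, B) in enumerate(zip(self.A, self.B)):
            # Weighted low-rank adapter update added on top of the frozen output
            y = y + gates[..., i:i + 1] * (x @ A.t() @ B.t())
        return y


# Usage: wrap a linear layer of a pretrained backbone; only adapters and router train.
layer = MoALinear(nn.Linear(768, 768), num_experts=4, rank=8)
out = layer(torch.randn(2, 197, 768))   # e.g. ViT-B/16 token embeddings
print(out.shape)                        # torch.Size([2, 197, 768])
```

Because only the low-rank adapters and the router receive gradients, the pretrained backbone is left intact, which is the property the abstract highlights for preserving OOD robustness.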
Benchmarks
| Benchmark | Methodology | Average Accuracy (%) |
|---|---|---|
| DomainNet | MoA (OpenCLIP, ViT-B/16) | 62.7 |
| Office-Home | MoA (OpenCLIP, ViT-B/16) | 90.6 |
| PACS | MoA (OpenCLIP, ViT-B/16) | 97.4 |
| TerraIncognita | MoA (OpenCLIP, ViT-B/16) | 52.8 |
| VLCS | MoA (OpenCLIP, ViT-B/16) | 83.1 |