Confidence Regularized Self-Training
Yang Zou; Zhiding Yu; Xiaofeng Liu; B. V. K. Vijaya Kumar; Jinsong Wang

Abstract
Recent advances in domain adaptation show that deep self-training presents a powerful means for unsupervised domain adaptation. These methods often involve an iterative process of predicting on target domain and then taking the confident predictions as pseudo-labels for retraining. However, since pseudo-labels can be noisy, self-training can put overconfident label belief on wrong classes, leading to deviated solutions with propagated errors. To address the problem, we propose a confidence regularized self-training (CRST) framework, formulated as regularized self-training. Our method treats pseudo-labels as continuous latent variables jointly optimized via alternating optimization. We propose two types of confidence regularization: label regularization (LR) and model regularization (MR). CRST-LR generates soft pseudo-labels while CRST-MR encourages the smoothness on network output. Extensive experiments on image classification and semantic segmentation show that CRSTs outperform their non-regularized counterpart with state-of-the-art performance. The code and models of this work are available at https://github.com/yzou2/CRST.
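To make the idea concrete, below is a minimal PyTorch-style sketch of one self-training round with a model-regularization term of the MRKLD flavor (a KL pull toward the uniform distribution that discourages overconfident predictions on noisy pseudo-labels). This is an illustrative reconstruction from the abstract, not the authors' released code; the function names, confidence threshold `p_thresh`, and weight `alpha` are assumptions.

```python
# Illustrative sketch of confidence-regularized self-training (CRST), MR-KLD style.
# Assumes a standard classification setup; `model`, `p_thresh`, and `alpha` are
# hypothetical and not taken from the authors' implementation.
import torch
import torch.nn.functional as F

def generate_pseudo_labels(model, images, p_thresh=0.9):
    """Predict on target-domain images and keep only confident predictions."""
    with torch.no_grad():
        probs = F.softmax(model(images), dim=1)   # (N, C) class probabilities
        conf, hard_labels = probs.max(dim=1)      # most confident class per sample
        mask = conf > p_thresh                    # confidence-based selection
    return hard_labels, mask

def crst_mrkld_loss(logits, pseudo_labels, mask, alpha=0.1):
    """Cross-entropy on selected pseudo-labels plus a model regularizer:
    a KL term toward the uniform distribution, which penalizes overconfident
    outputs on possibly noisy pseudo-labels."""
    ce = F.cross_entropy(logits[mask], pseudo_labels[mask])
    log_probs = F.log_softmax(logits[mask], dim=1)
    # KL(uniform || p) equals -(1/C) * sum_c log p_c up to an additive constant.
    kld_to_uniform = -log_probs.mean(dim=1).mean()
    return ce + alpha * kld_to_uniform
```

In each round, pseudo-labels are regenerated with the current model and the network is retrained on the regularized loss. The label-regularization variant (e.g., LRENT in the benchmarks below) instead keeps the pseudo-labels as continuous soft labels and regularizes them with an entropy term, which is what produces the soft pseudo-labels mentioned in the abstract.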
Code Repositories
https://github.com/yzou2/CRST
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| domain-adaptation-on-office-31 | MRKLD + LRENT | Average Accuracy: 86.8 |
| domain-adaptation-on-visda2017 | CRST | Accuracy: 78.1 |
| domain-adaptation-on-visda2017 | MRKLD + LRENT | Accuracy: 78.1 |
| image-to-image-translation-on-synthia-to | LRENT (DeepLabv2) | mIoU (13 classes): 48.7 |
| semantic-segmentation-on-densepass | CRST | mIoU: 31.67 |
| synthetic-to-real-translation-on-gtav-to | CRST(MRKLD-SP-MST) | mIoU: 49.8 |