Confidence Regularized Self-Training

Yang Zou; Zhiding Yu; Xiaofeng Liu; B. V. K. Vijaya Kumar; Jinsong Wang


Abstract

Recent advances in domain adaptation show that deep self-training is a powerful means of unsupervised domain adaptation. These methods typically iterate between predicting on the target domain and retraining on the most confident predictions, taken as pseudo-labels. However, since pseudo-labels can be noisy, self-training can place overconfident label belief on wrong classes, leading to deviated solutions with propagated errors. To address this problem, we propose a confidence regularized self-training (CRST) framework. Our method treats pseudo-labels as continuous latent variables that are jointly optimized via alternating optimization. We propose two types of confidence regularization: label regularization (LR) and model regularization (MR). CRST-LR generates soft pseudo-labels, while CRST-MR encourages smoothness of the network output. Extensive experiments on image classification and semantic segmentation show that CRST outperforms its non-regularized counterpart, achieving state-of-the-art performance. The code and models of this work are available at https://github.com/yzou2/CRST.
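To illustrate the idea behind model regularization, the following is a minimal NumPy sketch (not the authors' implementation): a self-training loss that combines cross-entropy against pseudo-labels with a confidence penalty pushing predictions toward the uniform distribution, in the spirit of the MRKLD variant. The `alpha` weight and function names are illustrative assumptions.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def crst_mr_loss(logits, pseudo_labels, alpha=0.1):
    """Sketch of a confidence-regularized self-training loss.

    Cross-entropy against (possibly soft) pseudo-labels, plus a
    KL-to-uniform style penalty: -(1/K) * sum_k log p_k grows as the
    prediction becomes more peaked, discouraging overconfidence.
    `alpha` trades off fitting pseudo-labels against confidence smoothing.
    """
    p = softmax(logits)
    ce = -(pseudo_labels * np.log(p + 1e-12)).sum(axis=-1).mean()
    reg = -np.log(p + 1e-12).mean(axis=-1).mean()
    return ce + alpha * reg

# An overconfident prediction pays a larger regularization penalty
# than a smooth one, so retraining is pulled away from degenerate,
# overconfident solutions on noisy pseudo-labels.
peaked = np.array([[10.0, 0.0, 0.0]])
smooth = np.array([[1.0, 0.5, 0.5]])
onehot = np.array([[1.0, 0.0, 0.0]])
```

With `alpha=0` this reduces to plain self-training; increasing `alpha` keeps the network output closer to uniform, which is one way to read the "smoothness on network output" that MR encourages.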

Code Repositories

yzou2/CBST (MXNet)
yzou2/CRST (official, PyTorch)

Benchmarks

Benchmark | Methodology | Metric
domain-adaptation-on-office-31 | MRKLD + LRENT | Average Accuracy: 86.8
domain-adaptation-on-visda2017 | CRST | Accuracy: 78.1
domain-adaptation-on-visda2017 | MRKLD + LRENT | Accuracy: 78.1
image-to-image-translation-on-synthia-to | LRENT (DeepLabv2) | mIoU (13 classes): 48.7
semantic-segmentation-on-densepass | CRST | mIoU: 31.67%
synthetic-to-real-translation-on-gtav-to | CRST (MRKLD-SP-MST) | mIoU: 49.8
