Learning to Adapt Structured Output Space for Semantic Segmentation
Yi-Hsuan Tsai; Wei-Chih Hung; Samuel Schulter; Kihyuk Sohn; Ming-Hsuan Yang; Manmohan Chandraker

Abstract
Convolutional neural network-based approaches for semantic segmentation rely on supervision with pixel-level ground truth, but may not generalize well to unseen image domains. As the labeling process is tedious and labor-intensive, developing algorithms that can adapt source ground truth labels to the target domain is of great interest. In this paper, we propose an adversarial learning method for domain adaptation in the context of semantic segmentation. Considering semantic segmentations as structured outputs that contain spatial similarities between the source and target domains, we adopt adversarial learning in the output space. To further enhance the adapted model, we construct a multi-level adversarial network to effectively perform output space domain adaptation at different feature levels. Extensive experiments and ablation studies are conducted under various domain adaptation settings, including synthetic-to-real and cross-city scenarios. We show that the proposed method performs favorably against the state-of-the-art methods in terms of accuracy and visual quality.
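To make the idea concrete, below is a minimal PyTorch sketch of the single-level variant: a fully convolutional discriminator is trained to tell source softmax segmentation maps from target ones, while the segmentation network is simultaneously trained to fool it on target images. The module `FCDiscriminator`, the function `adaptation_step`, and all hyperparameters (`ndf`, `lambda_adv`) are illustrative assumptions rather than the authors' released code; the multi-level model applies the same adversarial loss at an additional feature level with its own discriminator and weight.

```python
# Sketch of output-space adversarial adaptation (single level).
# Assumptions: `seg_net` is any segmentation network returning per-pixel
# logits of shape (N, C, H, W); the discriminator architecture and
# hyperparameters below are illustrative, not the paper's exact setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FCDiscriminator(nn.Module):
    """Fully convolutional discriminator over softmax segmentation maps."""
    def __init__(self, num_classes, ndf=64):
        super().__init__()
        self.model = nn.Sequential(
            nn.Conv2d(num_classes, ndf, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf, ndf * 2, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf * 2, 1, 4, stride=2, padding=1),  # per-patch source/target score
        )

    def forward(self, x):
        return self.model(x)

def adaptation_step(seg_net, disc, opt_seg, opt_disc,
                    src_img, src_label, tgt_img, lambda_adv=0.001):
    bce = nn.BCEWithLogitsLoss()

    # --- Update segmentation network ---
    opt_seg.zero_grad()
    src_pred = seg_net(src_img)                      # (N, C, H, W) logits
    seg_loss = F.cross_entropy(src_pred, src_label)  # supervised loss on source

    tgt_pred = seg_net(tgt_img)
    d_out = disc(F.softmax(tgt_pred, dim=1))
    # Adversarial term: push target outputs to look like source outputs.
    adv_loss = bce(d_out, torch.ones_like(d_out))
    (seg_loss + lambda_adv * adv_loss).backward()
    opt_seg.step()

    # --- Update discriminator on detached predictions ---
    opt_disc.zero_grad()
    d_src = disc(F.softmax(src_pred.detach(), dim=1))
    d_tgt = disc(F.softmax(tgt_pred.detach(), dim=1))
    disc_loss = bce(d_src, torch.ones_like(d_src)) + \
                bce(d_tgt, torch.zeros_like(d_tgt))
    disc_loss.backward()
    opt_disc.step()
    return seg_loss.item(), adv_loss.item(), disc_loss.item()
```

Because the discriminator operates on softmax output maps rather than intermediate features, the alignment exploits the spatial structure shared by source and target segmentations, which is the key design choice of the method.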
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| domain-adaptation-on-synscapes-to-cityscapes | AdaptSegNet | mIoU: 52.7 |
| image-to-image-translation-on-synthia-to | Multi-level Adaptation | mIoU (13 classes): 46.7 |
| image-to-image-translation-on-synthia-to | Single-level Adaptation | mIoU (13 classes): 45.9 |
| synthetic-to-real-translation-on-gtav-to | AdaptSegNet (multi-level) | mIoU: 42.4 |
| synthetic-to-real-translation-on-synthia-to-1 | AdaptSegNet (multi-level) | mIoU (13 classes): 46.7 |