Deep Domain Confusion: Maximizing for Domain Invariance

Eric Tzeng; Judy Hoffman; Ning Zhang; Kate Saenko; Trevor Darrell

Abstract

Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias on a standard benchmark. Fine-tuning deep models in a new domain can require a significant amount of data, which for many applications is simply not available. We propose a new CNN architecture that introduces an adaptation layer and an additional domain confusion loss to learn a representation that is both semantically meaningful and domain invariant. We additionally show that a domain confusion metric can be used for model selection, determining both the dimension of the adaptation layer and its best position in the CNN architecture. Our proposed adaptation method exceeds previously published results on a standard benchmark visual domain adaptation task.
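
The adaptation method described above pairs the usual supervised classification loss on labeled source data with a maximum mean discrepancy (MMD) penalty between source and target activations at the added adaptation layer. The following is a minimal sketch of that objective in PyTorch (an assumed framework; the names mmd_linear, DDCNet, ddc_loss, and the lambda_mmd weight are illustrative choices, not taken from the paper):

import torch
import torch.nn as nn

def mmd_linear(source_feats, target_feats):
    # Squared MMD with a linear kernel: ||mean(source) - mean(target)||^2,
    # computed over the adaptation-layer activations of each batch.
    delta = source_feats.mean(dim=0) - target_feats.mean(dim=0)
    return torch.dot(delta, delta)

class DDCNet(nn.Module):
    # Backbone -> low-dimensional adaptation ("bottleneck") layer -> classifier.
    # The domain confusion loss is applied to the adaptation-layer features.
    def __init__(self, backbone, feat_dim, bottleneck_dim, num_classes):
        super().__init__()
        self.backbone = backbone          # assumed to emit feat_dim-sized vectors
        self.bottleneck = nn.Linear(feat_dim, bottleneck_dim)
        self.classifier = nn.Linear(bottleneck_dim, num_classes)

    def forward(self, x):
        feats = self.bottleneck(self.backbone(x))
        return feats, self.classifier(feats)

def ddc_loss(model, xs, ys, xt, lambda_mmd=0.25):
    # Joint objective: cross-entropy on labeled source data plus
    # lambda_mmd * MMD^2 between source and target adaptation features.
    src_feats, src_logits = model(xs)
    tgt_feats, _ = model(xt)
    cls_loss = nn.functional.cross_entropy(src_logits, ys)
    return cls_loss + lambda_mmd * mmd_linear(src_feats, tgt_feats)

The same MMD value, computed on held-out source and target batches, can also serve as the domain confusion metric the abstract mentions for selecting the adaptation layer's dimension and position.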

Benchmarks

Benchmark: domain-adaptation-on-office-caltech
Methodology: DDC [Tzeng et al., 2014]
Average Accuracy: 88.2
