Min-Hung Chen; Zsolt Kira; Ghassan AlRegib; Jaekwon Yoo; Ruxin Chen; Jian Zheng

Abstract
Although various image-based domain adaptation (DA) techniques have been proposed in recent years, domain shift in videos remains under-explored, and most previous works evaluate only on small-scale datasets that are already saturated. Therefore, we first propose two large-scale video DA datasets with much larger domain discrepancy: UCF-HMDB_full and Kinetics-Gameplay. Second, we investigate different ways of integrating DA methods into video models, and show that simultaneously aligning and learning temporal dynamics achieves effective alignment even without sophisticated DA methods. Finally, we propose the Temporal Attentive Adversarial Adaptation Network (TA3N), which explicitly attends to the temporal dynamics using domain discrepancy for more effective domain alignment, achieving state-of-the-art performance on four video DA datasets (e.g., a 7.9% accuracy gain over "Source only", from 73.9% to 81.8%, on "HMDB --> UCF", and a 10.3% gain on "Kinetics --> Gameplay"). The code and data are released at http://github.com/cmhungsteve/TA3N.
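As a rough illustration of the core idea of attending to temporal dynamics by their domain discrepancy, the minimal PyTorch sketch below shows one possible formulation. It is not the authors' released implementation: the module names, the residual weighting form, and the use of domain-discriminator entropy as the attention signal are assumptions made here, with a gradient reversal layer providing the adversarial signal.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Gradient reversal layer commonly used in adversarial domain adaptation."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class DomainAttention(nn.Module):
    """Sketch: re-weight per-segment temporal features by domain discrepancy.

    Segments whose domain a small discriminator predicts confidently
    (low prediction entropy) are treated as carrying larger domain shift
    and are weighted more heavily before temporal aggregation.
    """

    def __init__(self, feat_dim: int, lambd: float = 1.0):
        super().__init__()
        self.domain_classifier = nn.Sequential(
            nn.Linear(feat_dim, feat_dim),
            nn.ReLU(),
            nn.Linear(feat_dim, 2),
        )
        self.lambd = lambd

    def forward(self, feats: torch.Tensor):
        # feats: (batch, segments, feat_dim) temporal features
        rev = GradReverse.apply(feats, self.lambd)        # reverse gradients for the discriminator
        d_logits = self.domain_classifier(rev)            # (batch, segments, 2) domain predictions
        d_prob = F.softmax(d_logits, dim=-1)
        entropy = -(d_prob * torch.log(d_prob + 1e-8)).sum(dim=-1)  # (batch, segments)
        attn = 1.0 - entropy                              # confident domain prediction -> larger weight
        weighted = feats * (1.0 + attn.unsqueeze(-1))     # residual-style attention
        video_feat = weighted.sum(dim=1)                  # aggregate segments into a video-level feature
        return video_feat, d_logits                       # d_logits also feed an adversarial domain loss


# Example usage with random features standing in for segment embeddings.
if __name__ == "__main__":
    attn_module = DomainAttention(feat_dim=256)
    segment_feats = torch.randn(4, 5, 256)                # 4 videos, 5 segments each
    video_feat, domain_logits = attn_module(segment_feats)
    print(video_feat.shape, domain_logits.shape)          # (4, 256) and (4, 5, 2)
```

In this sketch, the attention weight grows as the discriminator's prediction entropy shrinks, so segments that are easiest to tell apart across domains contribute more to the video-level feature that the alignment loss then acts on.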
Code Repositories
http://github.com/cmhungsteve/TA3N
Benchmarks
| Benchmark | Method | Metric |
|---|---|---|
| Domain Adaptation on HMDB --> UCF (full) | TA3N | Accuracy: 81.79 |
| Domain Adaptation on UCF --> HMDB (full) | TA3N | Accuracy: 78.33 |
| Unsupervised Domain Adaptation on EPIC-Kitchens | TA3N | Average Accuracy: 39.9 |
| Unsupervised Domain Adaptation on HMDB --> UCF | TA3N | Accuracy: 90.54 |
| Unsupervised Domain Adaptation on Jester | TA3N | Accuracy: 55.5 |
| Unsupervised Domain Adaptation on UCF --> HMDB | TA3N | Accuracy: 81.38 |