Progressive Neural Networks

Andrei A. Rusu; Neil C. Rabinowitz; Guillaume Desjardins; Hubert Soyer; James Kirkpatrick; Koray Kavukcuoglu; Razvan Pascanu; Raia Hadsell

Abstract
Learning to solve complex sequences of tasks--while both leveraging transfer and avoiding catastrophic forgetting--remains a key obstacle to achieving human-level intelligence. The progressive networks approach represents a step forward in this direction: they are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features. We evaluate this architecture extensively on a wide variety of reinforcement learning tasks (Atari and 3D maze games), and show that it outperforms common baselines based on pretraining and finetuning. Using a novel sensitivity measure, we demonstrate that transfer occurs at both low-level sensory and high-level control layers of the learned policy.
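The lateral connections mentioned in the abstract can be made concrete with a small sketch. The PyTorch code below is a minimal illustration, not the paper's implementation: the layer sizes, the two-layer/two-column setup, and names such as `ProgressiveColumn` are assumptions made for clarity. It shows the core idea that each new column keeps its own trainable weights while adapter layers read the frozen hidden activations of previously trained columns.

```python
# Minimal sketch of progressive-network lateral connections (illustrative only).
import torch
import torch.nn as nn

class ProgressiveColumn(nn.Module):
    def __init__(self, in_dim, hidden_dim, out_dim, n_prev_columns=0):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, out_dim)
        # One lateral adapter per previous column and per layer: it maps the
        # previous column's layer-i activations into this column's layer-(i+1).
        self.lat2 = nn.ModuleList(nn.Linear(hidden_dim, hidden_dim)
                                  for _ in range(n_prev_columns))
        self.lat_out = nn.ModuleList(nn.Linear(hidden_dim, out_dim)
                                     for _ in range(n_prev_columns))

    def forward(self, x, prev_activations=()):
        # prev_activations: (h1, h2) tuples from earlier, frozen columns.
        h1 = torch.relu(self.fc1(x))
        h2 = self.fc2(h1)
        for lat, (p1, _) in zip(self.lat2, prev_activations):
            h2 = h2 + lat(p1)          # lateral input from an earlier column
        h2 = torch.relu(h2)
        y = self.out(h2)
        for lat, (_, p2) in zip(self.lat_out, prev_activations):
            y = y + lat(p2)
        return y, (h1, h2)

# Task 1: train column 0, then freeze it so it can never be overwritten.
col0 = ProgressiveColumn(in_dim=8, hidden_dim=32, out_dim=4)
# ... train col0 on task 1 ...
for p in col0.parameters():
    p.requires_grad = False

# Task 2: a fresh column with lateral connections into column 0's features.
col1 = ProgressiveColumn(in_dim=8, hidden_dim=32, out_dim=4, n_prev_columns=1)
x = torch.randn(16, 8)
with torch.no_grad():
    _, acts0 = col0(x)                        # frozen features from the old column
y1, _ = col1(x, prev_activations=[acts0])     # only col1's parameters get gradients
```

Because the earlier column's parameters are frozen, only the new column and its lateral adapters receive gradients; this is what makes the architecture immune to forgetting while still allowing it to reuse previously learned features.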
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| continual-learning-on-cubs-fine-grained-6 | ProgressiveNet | Accuracy: 78.94 |
| continual-learning-on-flowers-fine-grained-6 | ProgressiveNet | Accuracy: 93.41 |
| continual-learning-on-imagenet-fine-grained-6 | ProgressiveNet | Accuracy: 76.16 |
| continual-learning-on-sketch-fine-grained-6 | ProgressiveNet | Accuracy: 76.35 |
| continual-learning-on-stanford-cars-fine | ProgressiveNet | Accuracy: 89.21 |
| continual-learning-on-wikiart-fine-grained-6 | ProgressiveNet | Accuracy: 74.94 |