Compacting, Picking and Growing for Unforgetting Continual Learning
Steven C. Y. Hung, Cheng-Hao Tu, Cheng-En Wu, Chien-Hung Chen, Yi-Ming Chan, Chu-Song Chen
Abstract
Continual lifelong learning is essential to many applications. In this paper, we propose a simple but effective approach to continual deep learning. Our approach leverages the principles of deep model compression, critical weight selection, and progressive network expansion. By enforcing their integration in an iterative manner, we introduce an incremental learning method that is scalable to the number of sequential tasks in a continual learning process. Our approach is easy to implement and has several favorable characteristics. First, it avoids forgetting (i.e., it learns new tasks while remembering all previous tasks). Second, it allows model expansion but maintains model compactness when handling sequential tasks. Moreover, through our compaction and selection/expansion mechanism, we show that the knowledge accumulated from previous tasks helps build a better model for a new task than training the model on that task independently. Experimental results show that our approach can incrementally learn a deep model that tackles multiple tasks without forgetting, while the model remains compact and achieves better performance than individual task training.
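To make the iterative compact-pick-grow loop described above concrete, the following is a minimal sketch of one round for a single weight matrix. The function names (compact, pick_and_train, grow), the mask-based bookkeeping, and the toy dimensions are illustrative assumptions for exposition, not the authors' released implementation.

```python
# Sketch of one compact-pick-grow round on a toy weight matrix.
# Assumption: old-task weights are frozen via a boolean mask; pruning
# "releases" weights that the new task is allowed to retrain.
import numpy as np

def compact(weights, keep_ratio=0.7):
    """Prune the smallest-magnitude weights; return a mask of weights kept for old tasks."""
    threshold = np.quantile(np.abs(weights), 1.0 - keep_ratio)
    return np.abs(weights) >= threshold  # True = weight preserved (frozen) for previous tasks

def pick_and_train(weights, old_mask):
    """Placeholder: learn a binary picking mask over the frozen old weights and
    train only the released weights (~old_mask) on the new task's data."""
    picking_mask = old_mask.copy()   # simplified: reuse all old weights for the new task
    trainable = ~old_mask            # released weights are free to be updated
    # ... gradient updates restricted to `trainable` would go here ...
    return picking_mask

def grow(weights, extra_units=16):
    """Placeholder: expand the layer with additional randomly initialised units."""
    new_cols = np.random.randn(weights.shape[0], extra_units) * 0.01
    return np.concatenate([weights, new_cols], axis=1)

weights = np.random.randn(64, 64)
old_mask = compact(weights)                       # 1. compress the model learned so far
picking_mask = pick_and_train(weights, old_mask)  # 2. pick old weights + train released ones
accuracy_goal_met = True                          # placeholder for a validation check
if not accuracy_goal_met:                         # 3. grow only when capacity is insufficient
    weights = grow(weights)
```

In this reading, forgetting is avoided because weights kept for previous tasks are never modified; new tasks only reuse them through the picking mask and adapt the released or newly grown weights.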