Modeling the Background for Incremental Learning in Semantic Segmentation
Fabio Cermelli, Massimiliano Mancini, Samuel Rota Bulò, Elisa Ricci, Barbara Caputo

Abstract
Despite their effectiveness in a wide range of tasks, deep architectures suffer from some important limitations. In particular, they are vulnerable to catastrophic forgetting, i.e. they perform poorly when they are required to update their model as new classes become available but the original training set is not retained. This paper addresses this problem in the context of semantic segmentation. Current strategies fail on this task because they do not consider a peculiar aspect of semantic segmentation: since each training step provides annotation only for a subset of all possible classes, pixels of the background class (i.e. pixels that do not belong to any other class) exhibit a semantic distribution shift. In this work we revisit classical incremental learning methods, proposing a new distillation-based framework which explicitly accounts for this shift. Furthermore, we introduce a novel strategy to initialize the classifier's parameters, thus preventing biased predictions toward the background class. We demonstrate the effectiveness of our approach with an extensive evaluation on the Pascal-VOC 2012 and ADE20K datasets, significantly outperforming state-of-the-art incremental learning methods.
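The background-shift idea described in the abstract can be sketched in code. The following is a minimal, hypothetical illustration, not the authors' implementation: at an incremental step, a pixel labeled "background" may in fact belong to an old class, so the cross-entropy lets the background's probability absorb the old classes; symmetrically, a distillation term folds the new classes' probability mass back into the background when matching the old model's output. All function names, the class layout (index 0 = background, then old classes, then new classes), and the toy logit vectors are assumptions for illustration.

```python
import numpy as np

def logsumexp(x):
    """Numerically stable log(sum(exp(x))) for a 1-D array."""
    m = x.max()
    return m + np.log(np.exp(x - m).sum())

def unbiased_ce(logits, label, num_old):
    """Per-pixel cross-entropy where label 0 is 'background'.

    logits: shape (1 + num_old + num_new,), index 0 = background.
    If the pixel is labeled background, that label may hide an old
    class, so the loss uses the merged probability of
    {background} ∪ {old classes} instead of the background alone.
    """
    norm = logsumexp(logits)
    if label == 0:
        # -log( p_bg + sum over old classes ) in log-space
        return float(norm - logsumexp(logits[: num_old + 1]))
    return float(norm - logits[label])

def unbiased_distillation(old_logits, new_logits, num_old):
    """Distillation that accounts for the shifted background.

    The old model only knows {background, old classes}; the new
    model's probability for new classes is folded into background
    before comparing the two distributions (cross-entropy).
    """
    old_logp = old_logits - logsumexp(old_logits)
    new_logp = new_logits - logsumexp(new_logits)
    # q_bg = p_bg + sum over new classes (log-space merge)
    bg_and_new = np.concatenate([new_logp[:1], new_logp[num_old + 1:]])
    q_log = np.concatenate([[logsumexp(bg_and_new)],
                            new_logp[1 : num_old + 1]])
    return float(-(np.exp(old_logp) * q_log).sum())
```

With two old classes and one new class, a background-labeled pixel whose logits favor an old class incurs almost no loss under the unbiased cross-entropy, whereas a standard cross-entropy against the background index would penalize it heavily; this is exactly the semantic shift the abstract refers to.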
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| disjoint-10-1-on-pascal-voc-2012 | MiB | mIoU: 6.9 |
| disjoint-15-1-on-pascal-voc-2012 | MiB | mIoU: 39.9 |
| disjoint-15-5-on-pascal-voc-2012 | MiB | mIoU: 65.9 |
| overlapped-10-1-on-pascal-voc-2012 | MiB | mIoU: 20.1 |