Continual Learning
Continual Learning, also known as Incremental Learning or Lifelong Learning, refers to training methods in which a model learns a sequence of tasks while retaining knowledge acquired on earlier ones. During training on a new task, data from old tasks is no longer accessible; some settings additionally supply a task identifier (task-id) at inference time to indicate which task an input belongs to. Continual Learning aims to improve a model's adaptability in dynamic environments and is particularly valuable in scenarios where data distributions change over time.
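The training protocol described above can be illustrated with a minimal, self-contained sketch. It uses two synthetic linear-regression tasks (not any of the benchmarks below) and trains on them strictly in sequence, never revisiting task-1 data. The quadratic penalty pulling the weights back toward the task-1 solution is a deliberately crude stand-in for regularization-based continual-learning methods such as EWC; all function and variable names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(w_true, n=200):
    """Synthetic regression task: y = X @ w_true + noise."""
    X = rng.normal(size=(n, 2))
    y = X @ w_true + 0.01 * rng.normal(size=n)
    return X, y

def train(X, y, w, steps=500, lr=0.05, anchor=None, lam=0.0):
    """Gradient descent on squared error; the optional quadratic
    penalty keeps w close to `anchor` (the previous-task weights)."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        if anchor is not None:
            grad += 2 * lam * (w - anchor)
        w = w - lr * grad
    return w

def mse(X, y, w):
    return float(np.mean((X @ w - y) ** 2))

# Two tasks with different ground-truth weights.
X1, y1 = make_task(np.array([1.0, 0.0]))
X2, y2 = make_task(np.array([0.0, 1.0]))

w = train(X1, y1, np.zeros(2))               # learn task 1
w_naive = train(X2, y2, w)                   # task 2, no protection
w_cl = train(X2, y2, w, anchor=w, lam=1.0)   # task 2, drift penalized

print("task-1 error after naive task-2 training:", mse(X1, y1, w_naive))
print("task-1 error with anchor penalty:       ", mse(X1, y1, w_cl))
```

Naive sequential training overwrites the task-1 solution (catastrophic forgetting), so task-1 error jumps; with the penalty, the model trades a little task-2 accuracy for much better retention of task 1. The methods on the leaderboard below attack the same trade-off with far more sophisticated machinery.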
Benchmarks (dataset: best-performing model on the leaderboard; some entries have no model listed):
ASC (19 tasks): CTR
visual domain decathlon (10 tasks): Res. adapt. decay
Cifar100 (20 tasks): Model Zoo-Continual
Tiny-ImageNet (10 tasks): ALTA-ViTB/16
F-CelebA (10 tasks): CAT (CNN backbone)
Wikiart (Fine-grained 6 Tasks)
CUBS (Fine-grained 6 Tasks): CondConvContinual
Stanford Cars (Fine-grained 6 Tasks): CPG
Flowers (Fine-grained 6 Tasks): CondConvContinual
DSC (10 tasks): CTR
Sketch (Fine-grained 6 Tasks)
20Newsgroup (10 tasks)
ImageNet (Fine-grained 6 Tasks): CondConvContinual
ImageNet-50 (5 tasks): CondConvContinual
Cifar100 (10 tasks): RMN (Resnet)
Permuted MNIST: RMN
split CIFAR-100
Coarse-CIFAR100: Model Zoo-Continual
Split MNIST (5 tasks): H$^{2}$
5-Datasets
mini-Imagenet (20 tasks) - 1 epoch: TAG-RMSProp
Split CIFAR-10 (5 tasks): H$^{2}$
MiniImageNet ResNet-18 - 300 Epochs
5-dataset - 1 epoch
CIFAR-100 AlexNet - 300 Epochs
TinyImageNet ResNet-18 - 300 Epochs
CIFAR-100 ResNet-18 - 300 Epochs: IBM
miniImagenet
Rotated MNIST: Model Zoo-Continual
Cifar100 (20 tasks) - 1 epoch
CUB-200-2011 (20 tasks) - 1 epoch: MLT17