
D3Former: Debiased Dual Distilled Transformer for Incremental Learning

Abdelrahman Mohamed, Rushali Grandhe, K J Joseph, Salman Khan, Fahad Khan


Abstract

In the class incremental learning (CIL) setting, groups of classes are introduced to a model in each learning phase. The goal is to learn a unified model that performs well on all the classes observed so far. Given the recent popularity of Vision Transformers (ViTs) in conventional classification settings, a natural question is how they behave under continual learning. In this work, we develop a Debiased Dual Distilled Transformer for CIL, dubbed $\textrm{D}^3\textrm{Former}$. The proposed model leverages a hybrid nested ViT design to ensure data efficiency and scalability to small as well as large datasets. In contrast to a recent ViT-based CIL approach, our $\textrm{D}^3\textrm{Former}$ does not dynamically expand its architecture when new tasks are learned and remains suitable for a large number of incremental tasks. The improved CIL behaviour of $\textrm{D}^3\textrm{Former}$ owes to two fundamental changes to the ViT design. First, we treat incremental learning as a long-tailed classification problem, where the majority samples from new classes vastly outnumber the limited exemplars available for old classes. To avoid the bias against the minority old classes, we propose to dynamically adjust logits so as to emphasize retaining the representations relevant to old tasks. Second, we propose to preserve the configuration of spatial attention maps as learning progresses across tasks. This helps reduce catastrophic forgetting by constraining the model to retain its attention on the most discriminative regions. $\textrm{D}^3\textrm{Former}$ obtains favorable results on incremental versions of the CIFAR-100, MNIST, SVHN, and ImageNet datasets. Code is available at https://tinyurl.com/d3former
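As a rough illustration of the two ingredients described above (not the paper's exact formulation: the function names, the temperature `tau`, and the channel-pooled attention definition below are illustrative assumptions), the debiasing step can be read as standard logit adjustment by class priors, and the attention constraint as an L2 penalty between normalized spatial attention maps of the frozen old-task model and the current model:

```python
import torch
import torch.nn.functional as F

def logit_adjusted_ce(logits, targets, class_counts, tau=1.0):
    """Cross-entropy with logit adjustment for class-imbalanced CIL.

    class_counts holds the number of training samples per class in the
    current phase (many for new classes, few old-class exemplars).
    Adding tau * log(prior) to the logits penalizes the majority (new)
    classes, counteracting the bias against the minority (old) classes.
    """
    priors = class_counts.float() / class_counts.sum()
    adjusted = logits + tau * torch.log(priors.clamp_min(1e-12))
    return F.cross_entropy(adjusted, targets)

def spatial_attention(feat):
    """Collapse a (B, C, H, W) feature map into a normalized (B, H*W)
    spatial attention vector by pooling squared activations over channels."""
    attn = feat.pow(2).mean(dim=1).flatten(1)
    return F.normalize(attn, dim=1)

def attention_distillation_loss(student_feat, teacher_feat):
    """Penalize drift in where the model attends: the frozen old-task
    (teacher) model's attention map anchors the current (student) model."""
    with torch.no_grad():
        t = spatial_attention(teacher_feat)
    s = spatial_attention(student_feat)
    return (s - t).pow(2).sum(dim=1).mean()
```

In training, the two terms would be combined, e.g. `loss = logit_adjusted_ce(logits, y, counts) + lam * attention_distillation_loss(s_feat, t_feat)` with `lam` a distillation weight; the paper's hybrid nested ViT backbone is not modeled by this sketch.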

Code Repositories

abdohelmy/D-3Former (official, PyTorch)

Benchmarks

Benchmark | Methodology | Metrics
incremental-learning-on-cifar-100-50-classes-1 | D3Former | Average Incremental Accuracy: 68.68
incremental-learning-on-cifar-100-50-classes-2 | D3Former | Average Incremental Accuracy: 70.94
incremental-learning-on-cifar-100-50-classes-3 | D3Former | Average Incremental Accuracy: 72.23
