Vision Transformers for Small Histological Datasets Learned through Knowledge Distillation

Neel Kanwal, Trygve Eftestøl, Farbod Khoraminia, Tahlita C.M. Zuiverloon, Kjersti Engan
Abstract

Computational Pathology (CPATH) systems have the potential to automate diagnostic tasks. However, artifacts on digitized histological glass slides, known as Whole Slide Images (WSIs), may hamper the overall performance of CPATH systems. Deep Learning (DL) models such as Vision Transformers (ViTs) may detect and exclude artifacts before the diagnostic algorithm is run. A simple way to develop robust and generalized ViTs is to train them on massive datasets. Unfortunately, acquiring large medical datasets is expensive and inconvenient, prompting the need for a generalized artifact detection method for WSIs. In this paper, we present a student-teacher recipe to improve the classification performance of a ViT on the air bubble detection task. The ViT, trained under the student-teacher framework, boosts its performance by distilling existing knowledge from a high-capacity teacher model. Our best-performing ViT yields an F1-score of 0.961 and an MCC of 0.911, a 7% gain in MCC over stand-alone training. The proposed method offers a new perspective on leveraging knowledge distillation rather than transfer learning, encouraging the use of customized transformers in efficient preprocessing pipelines of CPATH systems.
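The sketch below illustrates the general student-teacher training scheme the abstract describes: a frozen high-capacity teacher provides softened targets, and a ViT-Tiny student is trained with a combined cross-entropy and distillation loss. The choice of teacher (a pretrained ResNet-50 here), the temperature, and the loss weighting are assumptions for illustration, not the authors' exact recipe.

```python
# Minimal knowledge-distillation sketch (assumed setup, not the paper's exact recipe).
import torch
import torch.nn.functional as F
import timm

NUM_CLASSES = 2      # artifact vs. artifact-free (assumed binary task)
TEMPERATURE = 4.0    # softening temperature (assumed hyperparameter)
ALPHA = 0.5          # weight between hard-label and distillation terms (assumed)

# Teacher: a high-capacity pretrained model, frozen during student training.
teacher = timm.create_model("resnet50", pretrained=True, num_classes=NUM_CLASSES)
teacher.eval()
for p in teacher.parameters():
    p.requires_grad_(False)

# Student: a small Vision Transformer (ViT-Tiny, as in the benchmark entry).
student = timm.create_model("vit_tiny_patch16_224", pretrained=False,
                            num_classes=NUM_CLASSES)

def distillation_loss(student_logits, teacher_logits, targets,
                      T=TEMPERATURE, alpha=ALPHA):
    """Soft-target KD loss (Hinton-style) combined with cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, targets)
    return alpha * hard + (1.0 - alpha) * soft

optimizer = torch.optim.AdamW(student.parameters(), lr=1e-4)

def train_step(images, targets):
    """One optimization step: teacher guides the ViT student on a small batch."""
    with torch.no_grad():
        teacher_logits = teacher(images)
    student_logits = student(images)
    loss = distillation_loss(student_logits, teacher_logits, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```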

Benchmarks

Benchmark                               Methodology         Metrics
artifact-detection-on-histoartifacts    KD-based ViT-Tiny   ACC: 0.956, F1: 0.961, MCC: 0.911
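For reference, the reported metrics can be computed from binary predictions with scikit-learn as shown below; the labels here are toy data, used only to illustrate the metric definitions.

```python
# Illustrative only: accuracy, F1-score, and MCC on toy binary labels.
from sklearn.metrics import accuracy_score, f1_score, matthews_corrcoef

y_true = [1, 0, 1, 1, 0, 1, 0, 0]   # ground-truth artifact labels (toy data)
y_pred = [1, 0, 1, 0, 0, 1, 0, 1]   # model predictions (toy data)

print("ACC:", accuracy_score(y_true, y_pred))
print("F1: ", f1_score(y_true, y_pred))
print("MCC:", matthews_corrcoef(y_true, y_pred))
```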
