DKMA-ULD: Domain Knowledge augmented Multi-head Attention based Robust Universal Lesion Detection

Manu Sheoran, Meghal Dani, Monika Sharma, Lovekesh Vig

Abstract

Explicitly incorporating data-specific domain knowledge in deep networks can provide important cues beneficial for lesion detection and can mitigate the need for diverse heterogeneous datasets for learning robust detectors. In this paper, we exploit the domain information present in computed tomography (CT) scans and propose a robust universal lesion detection (ULD) network that can detect lesions across all organs of the body by training on a single dataset, DeepLesion. We analyze CT slices of varying intensities, generated using heuristically determined Hounsfield Unit (HU) windows that individually highlight different organs and are given as inputs to the deep network. The features obtained from the multiple intensity images are fused using a novel convolution-augmented multi-head self-attention module and subsequently passed to a Region Proposal Network (RPN) for lesion detection. In addition, we observed that traditional anchor boxes used in the RPN for natural images are not suitable for the lesion sizes often found in medical images. Therefore, we propose to use lesion-specific anchor sizes and ratios in the RPN to improve detection performance. We use self-supervision to initialize the weights of our network on the DeepLesion dataset to further imbibe domain knowledge. Our proposed Domain Knowledge augmented Multi-head Attention based Universal Lesion Detection network, DKMA-ULD, produces refined and precise bounding boxes around lesions across different organs. We evaluate the efficacy of our network on the publicly available DeepLesion dataset, which comprises approximately 32K CT scans with annotated lesions across all organs of the body. Results demonstrate that we outperform existing state-of-the-art methods, achieving an overall sensitivity of 87.16%.
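The pipeline described above (multiple HU-windowed views of each CT slice, whose per-window features are fused by a convolution-augmented multi-head self-attention module before being handed to the RPN) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the window (center, width) pairs, the toy backbone, and the `ConvAugmentedMHSAFusion` module are assumptions chosen to make the idea concrete.

```python
import torch
import torch.nn as nn

def hu_window(ct_hu: torch.Tensor, center: float, width: float) -> torch.Tensor:
    """Clip a CT slice (in Hounsfield units) to a window and rescale to [0, 1]."""
    lo, hi = center - width / 2.0, center + width / 2.0
    return (ct_hu.clamp(lo, hi) - lo) / (hi - lo)

# Illustrative (center, width) pairs; the paper determines its windows heuristically.
WINDOWS = [(40, 400), (-600, 1500), (50, 250)]

class ConvAugmentedMHSAFusion(nn.Module):
    """Fuse per-window feature maps with self-attention across the window axis,
    augmented by a convolutional branch (a simplified stand-in for the paper's
    fusion module)."""
    def __init__(self, channels: int, num_windows: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.conv = nn.Conv2d(num_windows * channels, channels, kernel_size=3, padding=1)
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, feats):  # feats: list of (B, C, H, W) maps, one per HU window
        B, C, H, W = feats[0].shape
        # Self-attention over the "window" axis at every spatial location.
        x = torch.stack(feats, dim=1)                               # (B, N, C, H, W)
        tokens = x.permute(0, 3, 4, 1, 2).reshape(B * H * W, len(feats), C)
        attended, _ = self.attn(tokens, tokens, tokens)
        attended = attended.mean(dim=1).reshape(B, H, W, C).permute(0, 3, 1, 2)
        # Convolutional branch over channel-concatenated window features.
        conv_out = self.conv(torch.cat(feats, dim=1))
        return self.proj(attended + conv_out)  # fused (B, C, H, W) map for the RPN

if __name__ == "__main__":
    slice_hu = torch.randint(-1000, 1000, (1, 1, 64, 64)).float()   # toy CT slice in HU
    windowed = [hu_window(slice_hu, c, w) for c, w in WINDOWS]
    backbone = nn.Conv2d(1, 32, kernel_size=3, padding=1)           # toy shared backbone
    feats = [backbone(img) for img in windowed]
    fused = ConvAugmentedMHSAFusion(channels=32, num_windows=len(WINDOWS))(feats)
    print(fused.shape)  # torch.Size([1, 32, 64, 64])
```

The fused feature map would then feed a Region Proposal Network whose anchor sizes and aspect ratios are chosen to match typical lesion extents rather than the defaults used for natural images.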

Benchmarks

Benchmark: medical-object-detection-on-deeplesion
Methodology: DKMA-ULD
Metrics: Sensitivity: 87.16
