Do Not Sleep on Traditional Machine Learning: Simple and Interpretable Techniques Are Competitive to Deep Learning for Sleep Scoring

Jeroen Van Der Donckt; Jonas Van Der Donckt; Emiel Deprost; Nicolas Vandenbussche; Michael Rademaker; Gilles Vandewiele; Sofie Van Hoecke

Abstract

Over the last few years, research in automatic sleep scoring has mainly focused on developing increasingly complex deep learning architectures. Recently, however, these approaches have achieved only marginal improvements, often at the expense of requiring more data and more expensive training procedures. Despite all these efforts and their satisfactory performance, automatic sleep staging solutions are not yet widely adopted in a clinical context. We argue that most deep learning solutions for sleep scoring are limited in their real-world applicability, as they are hard to train, deploy, and reproduce. Moreover, these solutions lack interpretability and transparency, which are often key to increasing adoption rates. In this work, we revisit the problem of sleep stage classification using classical machine learning. Results show that competitive performance can be achieved with a conventional machine learning pipeline consisting of preprocessing, feature extraction, and a simple machine learning model. In particular, we analyze the performance of a linear model and a non-linear (gradient boosting) model. Our approach surpasses the state of the art (using the same data) on two public datasets: Sleep-EDF SC-20 (MF1 0.810) and Sleep-EDF ST (MF1 0.795), while achieving competitive results on Sleep-EDF SC-78 (MF1 0.775) and MASS SS3 (MF1 0.817). We show that, for the sleep stage scoring task, the expressiveness of an engineered feature vector is on par with the internally learned representations of deep learning models. This observation opens the door to clinical adoption, as a representative feature vector makes it possible to leverage both the interpretability and the successful track record of traditional machine learning models.
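The pipeline described above (preprocessing, per-epoch feature extraction, then a simple classifier) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the signal data is synthetic, the band definitions, features, and hyperparameters are illustrative choices, and a logistic-regression classifier stands in for the paper's linear model.

```python
# Illustrative sketch of a traditional sleep-scoring pipeline:
# preprocessing -> per-epoch feature extraction -> simple linear classifier.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

FS = 100  # assumed sampling rate in Hz; sleep is scored in 30 s epochs
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 12),
         "sigma": (12, 16), "beta": (16, 30)}  # conventional EEG bands

def epoch_features(epoch):
    """Spectral band powers plus two time-domain statistics for one epoch."""
    freqs, psd = welch(epoch, fs=FS, nperseg=FS * 4)
    feats = [psd[(freqs >= lo) & (freqs < hi)].sum()  # approximate band power
             for lo, hi in BANDS.values()]
    feats += [epoch.std(), np.abs(np.diff(epoch)).mean()]
    return np.log1p(np.array(feats))  # log-compress skewed power features

# Synthetic stand-in for single-channel EEG: 200 epochs, 3 fake "stages"
# whose only difference is signal amplitude (so the toy task is learnable).
rng = np.random.default_rng(0)
y = rng.integers(0, 3, size=200)
X_raw = rng.standard_normal((200, 30 * FS)) * (1.0 + y[:, None])
X = np.array([epoch_features(e) for e in X_raw])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("Macro-F1:", round(f1_score(y_te, pred, average="macro"), 3))
print("Cohen's kappa:", round(cohen_kappa_score(y_te, pred), 3))
```

A linear model over such a feature vector keeps every coefficient inspectable per feature and per stage, which is the interpretability argument the abstract makes; swapping the final estimator for a gradient-boosting model (e.g. CatBoost, as benchmarked below) changes one line of the pipeline.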

Code Repositories

predict-idlab/sleep-linear (official, on GitHub)

Benchmarks

| Benchmark | Methodology | Accuracy | Cohen's kappa | Macro-F1 |
|---|---|---|---|---|
| multimodal-sleep-stage-detection-on-sleep-edf | Linear model | 85.7% | 0.806 | 0.809 |
| multimodal-sleep-stage-detection-on-sleep-edf | CatBoost | 86.4% | 0.812 | 0.802 |
| multimodal-sleep-stage-detection-on-sleep-edf-1 | CatBoost | 83.6% | 0.765 | 0.795 |
| multimodal-sleep-stage-detection-on-sleep-edf-1 | Linear model | 82.9% | 0.759 | 0.792 |
| sleep-stage-detection-on-mass-ss3 | CatBoost | 86.7% | 0.803 | 0.817 |
| sleep-stage-detection-on-sleep-edf | CatBoost | 86.6% | 0.816 | 0.810 |
| sleep-stage-detection-on-sleep-edf | Linear model | 86.3% | 0.813 | 0.805 |
