MUSES: The Multi-Sensor Semantic Perception Dataset for Driving under Uncertainty

Tim Brödermann, David Bruggemann, Christos Sakaridis, Kevin Ta, Odysseas Liagouris, Jason Corkill, Luc Van Gool

Abstract

Achieving level-5 driving automation in autonomous vehicles necessitates a robust semantic visual perception system capable of parsing data from different sensors across diverse conditions. However, existing semantic perception datasets often lack important non-camera modalities typically used in autonomous vehicles, or they do not exploit such modalities to aid and improve semantic annotations in challenging conditions. To address this, we introduce MUSES, the MUlti-SEnsor Semantic perception dataset for driving in adverse conditions under increased uncertainty. MUSES includes synchronized multimodal recordings with 2D panoptic annotations for 2500 images captured under diverse weather and illumination. The dataset integrates a frame camera, a lidar, a radar, an event camera, and an IMU/GNSS sensor. Our new two-stage panoptic annotation protocol captures both class-level and instance-level uncertainty in the ground truth and enables the novel task of uncertainty-aware panoptic segmentation we introduce, along with standard semantic and panoptic segmentation. MUSES proves both effective for training and challenging for evaluating models under diverse visual conditions, and it opens new avenues for research in multimodal and uncertainty-aware dense semantic perception. Our dataset and benchmark are publicly available at https://muses.vision.ee.ethz.ch.
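The benchmarks below report panoptic quality (PQ) and related metrics. For context, here is a minimal sketch of the standard PQ metric of Kirillov et al. (matching predicted and ground-truth segments at IoU > 0.5), not the MUSES-specific uncertainty-aware variant (AUPQ), whose exact definition is given in the paper:

```python
import numpy as np

def panoptic_quality(gt, pred):
    """Compute PQ for one image from integer segment-id maps.

    gt, pred: 2D arrays where each value is a segment id (0 = void).
    Segments match when IoU > 0.5, the standard PQ matching rule,
    which guarantees the matching is unique.
    """
    gt_ids = set(np.unique(gt)) - {0}
    pred_ids = set(np.unique(pred)) - {0}
    tp_iou = 0.0
    matched_gt, matched_pred = set(), set()
    for g in gt_ids:
        for p in pred_ids:
            if p in matched_pred:
                continue
            inter = np.sum((gt == g) & (pred == p))
            union = np.sum((gt == g) | (pred == p))
            iou = inter / union if union else 0.0
            if iou > 0.5:
                tp_iou += iou
                matched_gt.add(g)
                matched_pred.add(p)
                break
    tp = len(matched_gt)
    fp = len(pred_ids) - len(matched_pred)  # unmatched predictions
    fn = len(gt_ids) - tp                   # unmatched ground truth
    denom = tp + 0.5 * fp + 0.5 * fn
    return tp_iou / denom if denom else 0.0
```

For example, a prediction that recovers one of two ground-truth segments perfectly and misses the other scores PQ = 0.5 on that image.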

Code Repositories

timbroed/MUSES (official)

Benchmarks

Benchmark | Methodology | Metric
object-detection-on-muses-multi-sensor | Mask2Former (R50) | AP: 28.14
panoptic-segmentation-on-muses-multi-sensor-1 | MUSES (Mask2Former w/ 4×Swin-T) | PQ: 53.6
semantic-segmentation-on-muses-multi-sensor | Mask2Former (Swin-T) | mIoU: 70.74
uncertainty-aware-panoptic-segmentation-on | Mask2Former (Swin-T) | AUPQ: 44.3
