Fusing Event-based and RGB camera for Robust Object Detection in Adverse Conditions
Abhishek Tomy, Anshul Paigwar, Khushdeep Singh Mann, Alessandro Renzaglia, Christian Laugier

Abstract
The ability to detect objects under image corruptions and varying weather conditions is vital for deep learning models, especially in real-world applications such as autonomous driving. Traditional RGB-based detection fails under these conditions, so it is important to design a sensor suite with redundancy against failures of the primary frame-based detector. Event-based cameras can complement frame-based cameras in the low-light and high-dynamic-range scenarios that an autonomous vehicle encounters during navigation. Accordingly, we propose a redundant sensor fusion model of event-based and frame-based cameras that is robust to common image corruptions. The method uses a voxel grid representation of events as input and two parallel feature-extractor networks, one for frames and one for events. Our sensor fusion approach is over 30% more robust to corruptions than frame-only detection and also outperforms event-only detection. The model is trained and evaluated on the publicly released DSEC dataset.
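To make the event input concrete, below is a minimal sketch of one common voxel-grid encoding: events `(t, x, y, polarity)` are binned along time with bilinear temporal interpolation, producing a `(num_bins, H, W)` tensor that a CNN branch can consume. The function name and the exact interpolation scheme are illustrative assumptions, not necessarily the paper's implementation.

```python
import numpy as np

def events_to_voxel_grid(events, num_bins, height, width):
    """Encode events as a voxel grid (illustrative sketch, not the paper's exact code).

    events: (N, 4) array of rows (timestamp, x, y, polarity), polarity in {0, 1},
            sorted by timestamp.
    Returns a (num_bins, height, width) float32 grid where each event's signed
    polarity is split between its two nearest temporal bins.
    """
    voxel = np.zeros((num_bins, height, width), dtype=np.float32)
    t = events[:, 0]
    x = events[:, 1].astype(int)
    y = events[:, 2].astype(int)
    pol = 2.0 * events[:, 3] - 1.0  # map {0, 1} -> {-1, +1}

    # Normalize timestamps to the continuous bin axis [0, num_bins - 1].
    t_norm = (num_bins - 1) * (t - t[0]) / max(t[-1] - t[0], 1e-9)
    left = np.floor(t_norm).astype(int)
    frac = t_norm - left  # weight assigned to the next bin

    # Bilinear split: (1 - frac) to the left bin, frac to the right bin.
    np.add.at(voxel, (left, y, x), pol * (1.0 - frac))
    np.add.at(voxel, (np.clip(left + 1, 0, num_bins - 1), y, x), pol * frac)
    return voxel
```

Each event thus contributes a total weight of ±1 spread over two adjacent time bins, preserving polarity while retaining coarse temporal structure for the event branch of the fusion network.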
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| object-detection-on-dsec | FPN-Fusion | mAP: 24.4 |
| object-detection-on-eventped | FPN-Fusion | AP: 61.1 |
| object-detection-on-inoutdoor | FPN-Fusion | AP: 60.1 |
| object-detection-on-pku-ddd17-car | FPN-Fusion | mAP50: 81.9 |
| object-detection-on-stcrowd | FPN-Fusion | AP: 61.5 |