Radar+RGB Attentive Fusion for Robust Object Detection in Autonomous Vehicles

Ritu Yadav Axel Vierling Karsten Berns

Abstract

This paper presents two architecture variants, referred to as RANet and BIRANet. The proposed architectures fuse radar signal data with RGB camera images to form a detection network that remains robust and efficient under variable lighting and adverse weather conditions such as rain, dust, and fog. First, radar information is fused into the feature extractor network. Second, radar points are used to generate guided anchors. Third, a method is proposed to improve region proposal network targets. BIRANet yields 72.3/75.3% average AP/AR on the NuScenes dataset, outperforming our base network, Faster R-CNN with Feature Pyramid Network (FFPN). RANet achieves 69.6/71.9% average AP/AR on the same dataset, which is reasonably acceptable performance. Both BIRANet and RANet are also evaluated to be robust to noise.
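The second idea above, generating anchors guided by radar points, can be illustrated with a minimal sketch. The paper's actual anchor scheme is not reproduced here; the function name, scales, and aspect ratios below are illustrative assumptions. The idea is that radar detections projected onto the image plane mark likely object locations, so anchor boxes are placed only around those points instead of densely over the whole image:

```python
import math

def radar_guided_anchors(radar_points_img,
                         scales=(32, 64, 128),
                         ratios=(0.5, 1.0, 2.0)):
    """Emit anchor boxes (x1, y1, x2, y2) centered on each radar point
    already projected onto the image plane as (x, y) pixel coordinates.

    Each scale s gives an anchor of area s^2; each ratio r gives an
    aspect ratio (width / height) of r.
    """
    anchors = []
    for x, y in radar_points_img:
        for s in scales:
            for r in ratios:
                w = s * math.sqrt(r)   # width  = s * sqrt(r)
                h = s / math.sqrt(r)   # height = s / sqrt(r), so w/h == r
                anchors.append((x - w / 2, y - h / 2,
                                x + w / 2, y + h / 2))
    return anchors
```

Compared with a dense anchor grid, this keeps the region proposal stage focused on radar-confirmed regions, which is one plausible reason fused variants tolerate poor lighting: the radar prior is unaffected by image degradation.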

Benchmarks

Benchmark: object-detection-on-nuscenes

| Metric | RANet (Radar) | BIRANet (RGB+Radar) |
|--------|---------------|---------------------|
| MAP    | 69            | 72.3                |
| AP50   | 83.9          | 88.9                |
| AP75   | 80.1          | 84.3                |
| AP85   | 64.4          | 65.7                |
| AP(s)  | 44.8          | 53.5                |
| AP(m)  | 67.8          | 70.1                |
| AP(l)  | 73.3          | 76.9                |
| AR     | 71.9          | 75.3                |
| AR(s)  | 47.3          | 56.2                |
| AR(m)  | 70.9          | 73.2                |
| AR(l)  | 76.2          | 79.8                |
