Diffusion-based RGB-D Semantic Segmentation with Deformable Attention Transformer

Minh Bui Kostas Alexis


Abstract

Vision-based perception and reasoning are essential for scene understanding in any autonomous system. RGB and depth images are commonly used to capture both the semantic and geometric features of the environment. Developing methods to reliably interpret this data is critical for real-world applications, where noisy measurements are often unavoidable. In this work, we introduce a diffusion-based framework to address the RGB-D semantic segmentation problem. Additionally, we demonstrate that using a Deformable Attention Transformer as the encoder for depth images effectively captures the characteristics of invalid regions in depth measurements. Our generative framework shows a greater capacity to model the underlying distribution of RGB-D images, achieving robust performance in challenging scenarios with significantly less training time than discriminative methods. Experimental results indicate that our approach achieves state-of-the-art performance on both the NYUv2 and SUN RGB-D datasets overall, and especially on their most challenging images. Our project page will be available at https://diffusionmms.github.io/
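The deformable attention mentioned above lets each query attend to a small, learned set of sampling locations rather than the full feature map. The following is a toy NumPy sketch of that sampling idea only, not the paper's implementation: random matrices stand in for the learned offset and attention-weight projections, and nearest-neighbour sampling replaces bilinear interpolation for brevity.

```python
import numpy as np

def deformable_attention_toy(queries, feat_map, n_points=4, seed=0):
    """Toy single-head deformable attention over a 2-D feature map.

    Each query predicts `n_points` (dy, dx) offsets around its reference
    point, gathers features at those offset locations, and combines them
    with softmax attention weights. All weight matrices are random
    stand-ins for learned linear projections.
    """
    rng = np.random.default_rng(seed)
    H, W, C = feat_map.shape
    N = queries.shape[0]
    W_off = rng.normal(0.0, 0.1, (C, n_points * 2))   # offset predictor
    W_attn = rng.normal(0.0, 0.1, (C, n_points))      # attention logits
    # Reference points spread uniformly over the feature map.
    ref = np.stack([np.linspace(0, H - 1, N),
                    np.linspace(0, W - 1, N)], axis=1)
    offsets = (queries @ W_off).reshape(N, n_points, 2)
    logits = queries @ W_attn
    weights = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    out = np.zeros((N, C))
    for i in range(N):
        for p in range(n_points):
            # Nearest-neighbour sampling at the offset location,
            # clamped to the feature-map bounds.
            y = int(np.clip(round(ref[i, 0] + offsets[i, p, 0]), 0, H - 1))
            x = int(np.clip(round(ref[i, 1] + offsets[i, p, 1]), 0, W - 1))
            out[i] += weights[i, p] * feat_map[y, x]
    return out
```

Because each query only touches `n_points` locations, cost scales with the number of sampled points rather than the full H×W map, which is the property that makes this attention variant attractive for dense depth features.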

Benchmarks

Benchmark: semantic-segmentation-on-nyu-depth-v2
Methodology: DiffusionMMS (DAT++-S)
Metrics: Mean IoU: 61.5
