End-to-end Trainable Deep Neural Network for Robotic Grasp Detection and Semantic Segmentation from RGB

Stefan Ainetter, Friedrich Fraundorfer

Abstract

In this work, we introduce a novel, end-to-end trainable CNN-based architecture that delivers high-quality results for grasp detection suitable for a parallel-plate gripper, and for semantic segmentation. Building on this, we propose a novel refinement module that takes advantage of the previously computed grasp detection and semantic segmentation results and further increases grasp detection accuracy. Our proposed network delivers state-of-the-art accuracy on two popular grasp datasets, namely Cornell and Jacquard. As an additional contribution, we provide a novel dataset extension for the OCID dataset, making it possible to evaluate grasp detection in highly challenging scenes. Using this dataset, we show that semantic segmentation can additionally be used to assign grasp candidates to object classes, which makes it possible to pick specific objects in the scene.
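
As a minimal illustration of the last point, a grasp candidate can be assigned to an object class by reading the predicted segmentation label at the grasp center. The sketch below uses hypothetical function and variable names and data shapes; it is not taken from the authors' code:

```python
import numpy as np

def assign_grasp_to_class(grasp_center, seg_mask):
    """Assign a grasp candidate to an object class by looking up the
    predicted semantic segmentation label at the grasp rectangle center.

    grasp_center: (x, y) pixel coordinates of the grasp rectangle center
    seg_mask:     (H, W) array of per-pixel class IDs from the segmentation head
    """
    x, y = int(round(grasp_center[0])), int(round(grasp_center[1]))
    return int(seg_mask[y, x])

# Toy example: keep only grasps whose center lies on the target class
seg_mask = np.zeros((480, 640), dtype=np.int64)
seg_mask[100:200, 300:400] = 3                     # pretend class 3 occupies this region
grasps = [((350.0, 150.0), 0.9), ((50.0, 50.0), 0.8)]  # (center, confidence score)
target_class = 3
selected = [g for g in grasps if assign_grasp_to_class(g[0], seg_mask) == target_class]
print(selected)  # only the grasp landing on class 3 remains
```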

Code Repositories

stefan-ainetter/grasp_det_seg_cnn (Official, PyTorch)

Benchmarks

Benchmark | Methodology | Metrics
Robotic Grasping on Cornell Grasp Dataset | grasp_det_seg_cnn (RGB only, IW split) | 5-fold cross-validation accuracy: 98.2%
Robotic Grasping on Jacquard Dataset | grasp_det_seg_cnn (RGB only) | Accuracy: 92.95%
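
These accuracy values follow the rectangle metric commonly used for the Cornell and Jacquard benchmarks: a predicted grasp rectangle counts as correct if its orientation differs from a ground-truth rectangle by less than 30° and the Jaccard index (IoU) of the two rectangles exceeds 0.25. Below is a minimal sketch of that check, assuming (cx, cy, w, h, angle) rectangles and using shapely for the polygon overlap; the helper names are illustrative, not part of the benchmark code:

```python
import numpy as np
from shapely.geometry import Polygon

def rect_to_polygon(cx, cy, w, h, angle_deg):
    """Convert a grasp rectangle (center, size, rotation) into a shapely polygon."""
    a = np.deg2rad(angle_deg)
    dx = np.array([np.cos(a), np.sin(a)])    # unit vector along the rectangle width
    dy = np.array([-np.sin(a), np.cos(a)])   # unit vector along the rectangle height
    c = np.array([cx, cy])
    corners = [c + 0.5 * ( w * dx + h * dy),
               c + 0.5 * ( w * dx - h * dy),
               c + 0.5 * (-w * dx - h * dy),
               c + 0.5 * (-w * dx + h * dy)]
    return Polygon([(float(p[0]), float(p[1])) for p in corners])

def grasp_correct(pred, gt, iou_thresh=0.25, angle_thresh=30.0):
    """Rectangle metric: correct if angle difference < 30 deg and IoU > 0.25.
    pred, gt: (cx, cy, w, h, angle_deg) tuples."""
    angle_diff = abs(pred[4] - gt[4]) % 180.0
    angle_diff = min(angle_diff, 180.0 - angle_diff)  # grasp orientation is symmetric
    if angle_diff >= angle_thresh:
        return False
    p, g = rect_to_polygon(*pred), rect_to_polygon(*gt)
    inter = p.intersection(g).area
    union = p.union(g).area
    return union > 0 and inter / union > iou_thresh

# Example: a slightly shifted and rotated prediction still counts as correct
print(grasp_correct((100, 100, 60, 20, 10), (105, 98, 60, 20, 0)))
```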
