Damien Robert, Hugo Raguet, Loic Landrieu

Abstract
We introduce a novel superpoint-based transformer architecture for efficient semantic segmentation of large-scale 3D scenes. Our method incorporates a fast algorithm to partition point clouds into a hierarchical superpoint structure, which makes our preprocessing 7 times faster than existing superpoint-based approaches. Additionally, we leverage a self-attention mechanism to capture the relationships between superpoints at multiple scales, leading to state-of-the-art performance on three challenging benchmark datasets: S3DIS (76.0% mIoU 6-fold validation), KITTI-360 (63.5% on Val), and DALES (79.6%). With only 212k parameters, our approach is up to 200 times more compact than other state-of-the-art models while maintaining similar performance. Furthermore, our model can be trained on a single GPU in 3 hours for a fold of the S3DIS dataset, which is 7x to 70x fewer GPU-hours than the best-performing methods. Our code and models are accessible at github.com/drprojects/superpoint_transformer.
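The abstract's key mechanism is self-attention computed between superpoints rather than between individual points, which is what keeps the model small and fast. As an illustration only, here is a minimal NumPy sketch of single-head scaled dot-product self-attention over a set of superpoint feature vectors; the function and weight names are hypothetical and do not come from the paper's code.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def superpoint_self_attention(feats, d_head, rng):
    """Single-head self-attention over superpoint features.

    feats : (S, D) array, one feature vector per superpoint.
    Returns (out, attn): attended features (S, d_head) and the
    (S, S) attention matrix between superpoints.
    Projection weights are randomly initialized for illustration.
    """
    S, D = feats.shape
    Wq = rng.standard_normal((D, d_head)) / np.sqrt(D)
    Wk = rng.standard_normal((D, d_head)) / np.sqrt(D)
    Wv = rng.standard_normal((D, d_head)) / np.sqrt(D)
    Q, K, V = feats @ Wq, feats @ Wk, feats @ Wv
    # Each superpoint attends to every other superpoint.
    attn = softmax(Q @ K.T / np.sqrt(d_head))
    return attn @ V, attn

rng = np.random.default_rng(0)
feats = rng.standard_normal((5, 8))   # 5 superpoints, 8-dim features
out, attn = superpoint_self_attention(feats, d_head=4, rng=rng)
```

Because attention is computed over a few thousand superpoints instead of millions of raw points, the quadratic cost of the `(S, S)` attention matrix stays tractable; the paper applies this idea at multiple scales of its superpoint hierarchy.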
Code Repositories
github.com/drprojects/superpoint_transformer
Benchmarks
| Benchmark | Method | Metrics |
|---|---|---|
| 3d-semantic-segmentation-on-dales | Superpoint Transformer | Model size: 212K; Overall Accuracy: 97.5; mIoU: 79.6 |
| 3d-semantic-segmentation-on-kitti-360 | Superpoint Transformer | Model size: 777K; mIoU (Val): 63.5 |
| 3d-semantic-segmentation-on-s3dis | Superpoint Transformer | mAcc: 85.8; mIoU (6-fold): 76.0 |
| semantic-segmentation-on-s3dis | Superpoint Transformer | Params: 0.212M; mIoU: 76.0; mAcc: 85.8; oAcc: 90.4 |
| semantic-segmentation-on-s3dis-area5 | Superpoint Transformer | Params: 212K; mIoU: 68.9; mAcc: 77.3; oAcc: 89.5 |