SPIdepth: Strengthened Pose Information for Self-supervised Monocular Depth Estimation
Mykola Lavreniuk

Abstract
Self-supervised monocular depth estimation has garnered considerable attention for its applications in autonomous driving and robotics. While recent methods have made strides in leveraging techniques like the Self Query Layer (SQL) to infer depth from motion, they often overlook the potential of strengthening pose information. In this paper, we introduce SPIdepth, a novel approach that prioritizes enhancing the pose network for improved depth estimation. Building upon the foundation laid by SQL, SPIdepth emphasizes the importance of pose information in capturing fine-grained scene structures. By enhancing the pose network's capabilities, SPIdepth achieves remarkable advancements in scene understanding and depth estimation. Experimental results on benchmark datasets such as KITTI, Cityscapes, and Make3D showcase SPIdepth's state-of-the-art performance, surpassing previous methods by significant margins. Specifically, SPIdepth tops the self-supervised KITTI benchmark and achieves the lowest AbsRel (0.029), SqRel (0.069), and RMSE (1.394), establishing new state-of-the-art results. On Cityscapes, SPIdepth improves over SQLdepth by 21.7% in AbsRel, 36.8% in SqRel, and 16.5% in RMSE, even without using motion masks. On Make3D, SPIdepth outperforms all other models in the zero-shot setting. Remarkably, SPIdepth achieves these results using only a single image for inference, surpassing even methods that utilize video sequences, thus demonstrating its efficacy and efficiency in real-world applications. Our approach represents a significant leap forward in self-supervised monocular depth estimation, underscoring the importance of strengthening pose information for advancing scene understanding. The code and pre-trained models are publicly available at https://github.com/Lavreniuk/SPIdepth.
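Like most self-supervised monocular methods in this family, the approach trains a depth network and a pose network jointly through a photometric reprojection loss: the source frame is warped into the target view using the predicted depth and relative pose, and the warped image is compared with the target. The snippet below is a minimal sketch of that generic training signal under stated assumptions; it is not the authors' implementation, and all names (backproject, project, the toy intrinsics K, identity pose T) are illustrative placeholders.

```python
# Minimal sketch of the photometric reprojection loss used in self-supervised
# monocular depth estimation (Monodepth2-style); NOT the authors' exact code.
import torch
import torch.nn.functional as F

def backproject(depth, K_inv):
    """Lift every pixel to a 3D point using predicted depth and inverse intrinsics."""
    b, _, h, w = depth.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=depth.dtype),
        torch.arange(w, dtype=depth.dtype),
        indexing="ij",
    )
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).reshape(3, -1)  # (3, H*W)
    cam = (K_inv @ pix).unsqueeze(0) * depth.reshape(b, 1, -1)              # rays * depth
    return torch.cat([cam, torch.ones(b, 1, h * w)], dim=1)                 # (B, 4, H*W)

def project(points, K, T):
    """Project 3D points into the source view given the relative pose T (B, 4, 4)."""
    cam = (T @ points)[:, :3]                       # (B, 3, H*W) in source camera frame
    pix = K.unsqueeze(0) @ cam
    return pix[:, :2] / (pix[:, 2:3] + 1e-7)        # (B, 2, H*W) pixel coordinates

def photometric_loss(target, source, depth, T, K):
    """Warp the source frame into the target view and compare (plain L1 for brevity)."""
    b, _, h, w = target.shape
    pix = project(backproject(depth, torch.inverse(K)), K, T)
    grid = pix.reshape(b, 2, h, w).permute(0, 2, 3, 1)
    # normalize pixel coordinates to [-1, 1] for grid_sample
    grid[..., 0] = grid[..., 0] / (w - 1) * 2 - 1
    grid[..., 1] = grid[..., 1] / (h - 1) * 2 - 1
    warped = F.grid_sample(source, grid, padding_mode="border", align_corners=True)
    return (warped - target).abs().mean()

# Toy usage with random data. In practice, depth comes from the depth network
# (SQL-based in SQLdepth/SPIdepth) and T from the strengthened pose network.
target = torch.rand(1, 3, 192, 640)
source = torch.rand(1, 3, 192, 640)
depth = torch.rand(1, 1, 192, 640) * 10 + 0.1
T = torch.eye(4).unsqueeze(0)                       # identity pose, just for the demo
K = torch.tensor([[500.0, 0.0, 320.0],
                  [0.0, 500.0, 96.0],
                  [0.0, 0.0, 1.0]])
print(photometric_loss(target, source, depth, T, K).item())
```

Full implementations typically replace the plain L1 term with an SSIM-weighted photometric error and add smoothness and masking terms, but the warping mechanics above are the core of the training signal.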
Code Repositories
https://github.com/Lavreniuk/SPIdepth
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| monocular-depth-estimation-on-kitti-eigen | SPIDepth | Delta < 1.25: 0.99; Delta < 1.25^2: 0.999; Delta < 1.25^3: 1.000; RMSE: 1.394; RMSE log: 0.048; Sq Rel: 0.069; Abs Rel: 0.029 |
| monocular-depth-estimation-on-kitti-eigen-1 | SPIDepth (MS+1024x320) | Delta < 1.25: 0.94; Delta < 1.25^2: 0.973; Delta < 1.25^3: 0.985; Mono: X; Resolution: 1024x320; RMSE: 3.662; RMSE log: 0.153; Sq Rel: 0.531; Abs Rel: 0.071 |
| monocular-depth-estimation-on-make3d | SPIDepth | Abs Rel: 0.299; RMSE: 6.672; Sq Rel: 1.931 |
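The metrics above are the standard monocular-depth evaluation measures (Abs Rel, Sq Rel, RMSE, RMSE log, and the Delta < 1.25^k accuracy thresholds). For reference, the sketch below shows how these numbers are typically computed from matched ground-truth and predicted depths; it is a generic illustration, not the benchmarks' official evaluation code.

```python
# Generic sketch of the standard depth-evaluation metrics; not the official scripts.
import numpy as np

def depth_metrics(gt, pred):
    """gt, pred: 1-D arrays of valid ground-truth and predicted depths (in metres)."""
    thresh = np.maximum(gt / pred, pred / gt)
    d1 = (thresh < 1.25).mean()          # Delta < 1.25
    d2 = (thresh < 1.25 ** 2).mean()     # Delta < 1.25^2
    d3 = (thresh < 1.25 ** 3).mean()     # Delta < 1.25^3

    abs_rel = np.mean(np.abs(gt - pred) / gt)                       # Abs Rel
    sq_rel = np.mean((gt - pred) ** 2 / gt)                         # Sq Rel
    rmse = np.sqrt(np.mean((gt - pred) ** 2))                       # RMSE
    rmse_log = np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2))   # RMSE log
    return dict(abs_rel=abs_rel, sq_rel=sq_rel, rmse=rmse,
                rmse_log=rmse_log, d1=d1, d2=d2, d3=d3)

# Toy usage with made-up depths
gt = np.array([2.0, 5.0, 10.0, 40.0])
pred = np.array([2.1, 4.8, 10.5, 38.0])
print(depth_metrics(gt, pred))
```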