PrimeDepth: Efficient Monocular Depth Estimation with a Stable Diffusion Preimage

Denis Zavadski Damjan Kalšan Carsten Rother


Abstract

This work addresses the task of zero-shot monocular depth estimation. A recent advance in this field has been the idea of utilising Text-to-Image foundation models, such as Stable Diffusion. Foundation models provide a rich and generic image representation, and therefore little training data is required to reformulate them as a depth estimation model that predicts highly detailed depth maps and generalises well. However, realisations of this idea have so far been highly inefficient at test time due to the underlying iterative denoising process. In this work, we propose a different realisation of this idea and present PrimeDepth, a method that is highly efficient at test time while keeping, or even enhancing, the positive aspects of diffusion-based approaches. Our key idea is to extract from Stable Diffusion a rich, but frozen, image representation by running a single denoising step. This representation, which we term the preimage, is then fed into a refiner network with an architectural inductive bias before entering the downstream task. We validate experimentally that PrimeDepth is two orders of magnitude faster than the leading diffusion-based method, Marigold, while being more robust in challenging scenarios and marginally superior quantitatively. We thereby reduce the gap to the currently leading data-driven approach, Depth Anything, which remains quantitatively superior but predicts less detailed depth maps and requires 20 times more labelled data. Owing to the complementary nature of our approach, even a simple averaging of PrimeDepth and Depth Anything predictions can improve upon both methods and sets a new state of the art in zero-shot monocular depth estimation. In the future, data-driven approaches may also benefit from integrating our preimage.
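The abstract notes that a simple averaging of PrimeDepth and Depth Anything predictions improves upon both methods. A minimal sketch of such an ensemble is shown below; since monocular depth predictions from different models typically differ in scale and shift, a least-squares affine alignment before averaging is assumed here. The helper names (`align_scale_shift`, `average_predictions`) are hypothetical, not from the paper.

```python
import numpy as np

def align_scale_shift(pred, ref):
    # Least-squares fit of scale s and shift t so that s * pred + t
    # matches ref; a common step before combining affine-invariant
    # depth predictions from different models.
    A = np.stack([pred.ravel(), np.ones(pred.size)], axis=1)
    s, t = np.linalg.lstsq(A, ref.ravel(), rcond=None)[0]
    return s * pred + t

def average_predictions(depth_a, depth_b):
    # Align depth_b to depth_a's scale/shift, then take the pixel-wise mean.
    aligned_b = align_scale_shift(depth_b, depth_a)
    return 0.5 * (depth_a + aligned_b)
```

This is only an illustration of the averaging idea under stated assumptions; the paper's exact fusion procedure may differ.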

Code Repositories

vislearn/PrimeDepth (official implementation, PyTorch)

Benchmarks

Benchmark                                    Method                        Delta < 1.25   Abs. rel. error
monocular-depth-estimation-on-eth3d          PrimeDepth                    0.967          0.068
monocular-depth-estimation-on-kitti-eigen    PrimeDepth                    0.937          0.079
monocular-depth-estimation-on-kitti-eigen    PrimeDepth + Depth Anything   0.953          0.073
monocular-depth-estimation-on-nyu-depth-v2   PrimeDepth                    0.966          0.058
monocular-depth-estimation-on-nyu-depth-v2   PrimeDepth + Depth Anything   0.977          0.046
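The two metrics reported above are standard in monocular depth evaluation: the absolute relative error and the Delta < 1.25 accuracy (the fraction of pixels where the ratio between prediction and ground truth is below 1.25). A minimal sketch of their computation follows; the function name `depth_metrics` is illustrative, not from the paper.

```python
import numpy as np

def depth_metrics(pred, gt):
    # Evaluate on valid pixels only (ground-truth depth > 0).
    mask = gt > 0
    p, g = pred[mask], gt[mask]
    # Absolute relative error: mean of |pred - gt| / gt.
    abs_rel = np.mean(np.abs(p - g) / g)
    # Delta < 1.25: fraction of pixels with max(p/g, g/p) below 1.25.
    delta = np.maximum(p / g, g / p)
    d1 = np.mean(delta < 1.25)
    return abs_rel, d1
```

In practice, benchmark evaluation also involves dataset-specific cropping and scale alignment, which this sketch omits.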
