Metric3Dv2: A Versatile Monocular Geometric Foundation Model for Zero-shot Metric Depth and Surface Normal Estimation
Mu Hu, Wei Yin, Chi Zhang, Zhipeng Cai, Xiaoxiao Long, Kaixuan Wang, Hao Chen, Gang Yu, Chunhua Shen, Shaojie Shen

Abstract
We introduce Metric3D v2, a geometric foundation model for zero-shot metric depth and surface normal estimation from a single image, which is crucial for metric 3D recovery. While depth and normal are geometrically related and highly complementary, they present distinct challenges. SoTA monocular depth methods achieve zero-shot generalization by learning affine-invariant depths, which cannot recover real-world metrics. Meanwhile, SoTA normal estimation methods have limited zero-shot performance due to the lack of large-scale labeled data. To tackle these issues, we propose solutions for both metric depth estimation and surface normal estimation. For metric depth estimation, we show that the key to a zero-shot single-view model lies in resolving the metric ambiguity arising from various camera models, combined with large-scale data training. We propose a canonical camera space transformation module, which explicitly addresses the ambiguity problem and can be effortlessly plugged into existing monocular models. For surface normal estimation, we propose a joint depth-normal optimization module to distill diverse data knowledge from metric depth, enabling normal estimators to learn beyond normal labels. Equipped with these modules, our depth-normal models can be stably trained with over 16 million images from thousands of camera models with different types of annotations, resulting in zero-shot generalization to in-the-wild images with unseen camera settings. Our method enables the accurate recovery of metric 3D structures on randomly collected internet images, paving the way for plausible single-image metrology. Our project page is at https://JUGGHM.github.io/Metric3Dv2.
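To make the canonical camera space transformation concrete, the sketch below shows the label-scaling idea: metric depth labels from heterogeneous cameras are rescaled as if they were captured by a single virtual camera with a fixed canonical focal length, and predictions are mapped back at inference time. This is a minimal illustration, not the authors' implementation; the canonical focal length value, the function names, and the example focal lengths are assumptions made for clarity.

```python
import numpy as np

# Illustrative canonical focal length (pixels); the value actually used in
# training is a design choice of the model, not recoverable from the abstract.
CANONICAL_FOCAL = 1000.0

def to_canonical_space(depth_m: np.ndarray, focal_px: float,
                       canonical_focal: float = CANONICAL_FOCAL) -> np.ndarray:
    """Rescale a metric depth label as if the image came from the canonical camera.

    Keeping the image fixed, a camera with focal length f seeing depth d yields
    the same projection as the canonical camera (focal f_c) seeing depth
    d * f_c / f.  Scaling labels this way removes the focal-length ambiguity
    across training cameras.
    """
    return depth_m * (canonical_focal / focal_px)

def from_canonical_space(pred_depth: np.ndarray, focal_px: float,
                         canonical_focal: float = CANONICAL_FOCAL) -> np.ndarray:
    """Map a depth prediction made in canonical space back to real-world metres."""
    return pred_depth * (focal_px / canonical_focal)

# Usage: the same 10 m wall labelled through a wide-angle camera (f ~ 720 px)
# and a telephoto-like camera (f ~ 1500 px) receives different canonical labels,
# consistent with how differently the wall appears in each image.
wide_label = to_canonical_space(np.full((2, 2), 10.0), focal_px=720.0)
tele_label = to_canonical_space(np.full((2, 2), 10.0), focal_px=1500.0)
```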
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| monocular-depth-estimation-on-ibims-1 | Metric3Dv2 (L, ZS) | δ < 1.25: 0.969 |
| monocular-depth-estimation-on-kitti-eigen | Metric3Dv2 (g2, FT, 80m, flip_aug_test) | δ < 1.25: 0.989; δ < 1.25²: 0.998; δ < 1.25³: 1.000; RMSE: 1.766; RMSE log: 0.060; absolute relative error: 0.039 |
| monocular-depth-estimation-on-nyu-depth-v2 | Metric3Dv2 (L, FT) | δ < 1.25: 0.989; δ < 1.25²: 0.998; δ < 1.25³: 1.000; RMSE: 0.183; absolute relative error: 0.047; log10: 0.020 |
| surface-normals-estimation-on-ibims-1 | Metric3Dv2 (g2, ZS) | % < 11.25°: 69.7; % < 22.5°: 76.2; % < 30°: 78.8; mean angle error: 19.6 |
| surface-normals-estimation-on-nyu-depth-v2-1 | Metric3Dv2 (L, FT) | % < 11.25°: 68.8; % < 22.5°: 84.9; % < 30°: 89.8; mean angle error: 12.0; RMSE: 19.2 |
| surface-normals-estimation-on-scannetv2 | Metric3Dv2 (g2, In-domain) | % < 11.25°: 77.8; % < 22.5°: 90.1; % < 30°: 93.5; mean angle error: 9.2 |