ViP-DeepLab: Learning Visual Perception with Depth-aware Video Panoptic Segmentation

Siyuan Qiao; Yukun Zhu; Hartwig Adam; Alan Yuille; Liang-Chieh Chen


Abstract

In this paper, we present ViP-DeepLab, a unified model attempting to tackle the long-standing and challenging inverse projection problem in vision, which we model as restoring the point clouds from perspective image sequences while providing each point with instance-level semantic interpretations. Solving this problem requires the vision models to predict the spatial location, semantic class, and temporally consistent instance label for each 3D point. ViP-DeepLab approaches it by jointly performing monocular depth estimation and video panoptic segmentation. We name this joint task Depth-aware Video Panoptic Segmentation, and propose a new evaluation metric along with two derived datasets for it, which will be made available to the public. On the individual sub-tasks, ViP-DeepLab also achieves state-of-the-art results, outperforming previous methods by 5.1% VPQ on Cityscapes-VPS, ranking 1st on the KITTI monocular depth estimation benchmark, and 1st on KITTI MOTS pedestrian. The datasets and the evaluation code are made publicly available.
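The evaluation metric proposed in the paper belongs to the panoptic-quality family. As a rough illustration only, the sketch below computes plain image-level panoptic quality (PQ) with NumPy; the paper's VPQ aggregates segments over short video clips, and the depth-aware variant additionally discounts pixels whose predicted depth deviates from ground truth beyond a threshold. The function and variable names here are hypothetical and do not come from the authors' released code.

```python
import numpy as np

def panoptic_quality(gt_segments, pred_segments, iou_threshold=0.5):
    """Simplified image-level PQ: segments are dicts mapping
    (class_id, instance_id) to boolean masks of identical shape.
    PQ = sum of matched IoUs / (TP + 0.5 * FP + 0.5 * FN)."""
    matched_iou, tp = 0.0, 0
    matched_pred = set()
    for gt_key, gt_mask in gt_segments.items():
        for pred_key, pred_mask in pred_segments.items():
            # Each prediction may match at most once, and classes must agree.
            if pred_key in matched_pred or gt_key[0] != pred_key[0]:
                continue
            inter = np.logical_and(gt_mask, pred_mask).sum()
            union = np.logical_or(gt_mask, pred_mask).sum()
            iou = inter / union if union else 0.0
            if iou > iou_threshold:  # IoU > 0.5 yields a unique matching
                matched_iou += iou
                tp += 1
                matched_pred.add(pred_key)
                break
    fn = len(gt_segments) - tp
    fp = len(pred_segments) - tp
    denom = tp + 0.5 * fp + 0.5 * fn
    return matched_iou / denom if denom else 0.0

if __name__ == "__main__":
    # Toy 1x4 "image": one ground-truth segment and one slightly-off prediction.
    gt = {(0, 1): np.array([True, True, True, False])}
    pred = {(0, 7): np.array([True, True, False, False])}
    print(panoptic_quality(gt, pred))  # IoU = 2/3 > 0.5, so PQ = 2/3
```

Under the assumptions above, the video and depth-aware extensions reuse this same matching and scoring; they only change which pixels and frames contribute to each segment mask.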

Benchmarks

Benchmark: video-panoptic-segmentation-on-cityscapes-vps
Methodology: ViP-DeepLab
Metrics:
  VPQ: 63.1
  VPQ (stuff): 73.0
  VPQ (thing): 49.5
