SparseTrack: Multi-Object Tracking by Performing Scene Decomposition based on Pseudo-Depth
Zelin Liu, Xinggang Wang, Cheng Wang, Wenyu Liu, Xiang Bai

Abstract
Exploring robust and efficient association methods has always been an important issue in multiple-object tracking (MOT). Although existing tracking methods have achieved impressive performance, congestion and frequent occlusions still pose challenging problems in multi-object tracking. We reveal that performing sparse decomposition on dense scenes is a crucial step for improving the association of occluded targets. To this end, we first propose a pseudo-depth estimation method for obtaining the relative depth of targets from 2D images. We then design a depth cascading matching (DCM) algorithm, which uses the obtained depth information to convert a dense target set into multiple sparse target subsets and performs data association on these subsets in order from near to far. By integrating the pseudo-depth method and the DCM strategy into the data association process, we propose a new tracker, called SparseTrack. SparseTrack provides a new perspective for solving the challenging problem of MOT in crowded scenes. Using only IoU matching, SparseTrack achieves performance comparable to state-of-the-art (SOTA) methods on the MOT17 and MOT20 benchmarks. Code and models are publicly available at \url{https://github.com/hustvl/SparseTrack}.
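The DCM idea described in the abstract (split a dense target set into depth-ordered sparse subsets, then associate them from near to far with IoU matching) can be sketched as follows. This is a minimal illustration rather than the released implementation: it assumes pseudo-depth is approximated from each bounding box's bottom edge, uses uniform depth bins over the image height, and the function names `pseudo_depth`, `iou_matrix`, and `depth_cascade_match` are hypothetical.

```python
# Minimal sketch of pseudo-depth + depth cascading matching (DCM); not the
# authors' implementation. Assumes the box bottom edge is a usable proxy for
# relative depth and that plain IoU matching is used inside each depth level.
import numpy as np
from scipy.optimize import linear_sum_assignment


def pseudo_depth(boxes, img_h):
    """Relative depth proxy: distance from the box bottom to the image bottom.
    boxes: (N, 4) array of [x1, y1, x2, y2]; smaller value = nearer target."""
    return img_h - boxes[:, 3]


def iou_matrix(a, b):
    """Pairwise IoU between box sets a (N, 4) and b (M, 4)."""
    x1 = np.maximum(a[:, None, 0], b[None, :, 0])
    y1 = np.maximum(a[:, None, 1], b[None, :, 1])
    x2 = np.minimum(a[:, None, 2], b[None, :, 2])
    y2 = np.minimum(a[:, None, 3], b[None, :, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    return inter / (area_a[:, None] + area_b[None, :] - inter + 1e-9)


def depth_cascade_match(track_boxes, det_boxes, img_h, levels=3, iou_thr=0.3):
    """Partition tracks/detections into depth levels (near to far) and match
    level by level, carrying unmatched items into the next level's pool."""
    t_depth = pseudo_depth(track_boxes, img_h)
    d_depth = pseudo_depth(det_boxes, img_h)
    edges = np.linspace(0, img_h + 1, levels + 1)  # uniform depth bins

    matches = []
    rem_t = np.array([], dtype=int)  # unmatched tracks carried forward
    rem_d = np.array([], dtype=int)  # unmatched detections carried forward
    for lvl in range(levels):
        in_t = np.where((t_depth >= edges[lvl]) & (t_depth < edges[lvl + 1]))[0]
        in_d = np.where((d_depth >= edges[lvl]) & (d_depth < edges[lvl + 1]))[0]
        t_pool = np.concatenate([rem_t, in_t]).astype(int)
        d_pool = np.concatenate([rem_d, in_d]).astype(int)
        if len(t_pool) == 0 or len(d_pool) == 0:
            rem_t, rem_d = t_pool, d_pool
            continue
        cost = 1.0 - iou_matrix(track_boxes[t_pool], det_boxes[d_pool])
        rows, cols = linear_sum_assignment(cost)
        keep = cost[rows, cols] <= 1.0 - iou_thr  # require IoU >= iou_thr
        matches += [(int(t_pool[r]), int(d_pool[c]))
                    for r, c in zip(rows[keep], cols[keep])]
        rem_t = np.setdiff1d(t_pool, t_pool[rows[keep]])
        rem_d = np.setdiff1d(d_pool, d_pool[cols[keep]])
    return matches, rem_t, rem_d
```

Because unmatched tracks and detections from one level are carried into the next level's pool, every target stays eligible for matching while each individual assignment problem remains sparse, which is the scene-decomposition effect the abstract attributes to DCM.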
Code Repositories
https://github.com/hustvl/SparseTrack
Benchmarks
| Benchmark | Methodology | HOTA | IDF1 | MOTA | DetA | AssA |
|---|---|---|---|---|---|---|
| Multi-Object Tracking on DanceTrack | SparseTrack | 55.7 | 58.1 | 91.3 | 79.2 | 39.3 |
| Multi-Object Tracking on MOT17 | SparseTrack | 65.1 | 80.1 | 81.0 | – | – |
| Multi-Object Tracking on MOT20 | SparseTrack | 63.4 | 77.3 | 78.2 | – | – |