CoTracker: It is Better to Track Together
Nikita Karaev, Ignacio Rocco, Benjamin Graham, Natalia Neverova, Andrea Vedaldi, Christian Rupprecht

Abstract
We introduce CoTracker, a transformer-based model that tracks a large number of 2D points in long video sequences. Unlike most existing approaches, which track points independently, CoTracker tracks them jointly, accounting for their dependencies. We show that joint tracking significantly improves tracking accuracy and robustness, and allows CoTracker to track occluded points and points outside of the camera view. We also introduce several innovations for this class of trackers, including token proxies that significantly improve memory efficiency and allow CoTracker to track 70k points jointly and simultaneously at inference on a single GPU. CoTracker is an online algorithm that operates causally on short windows. However, it is trained as a recurrent network on unrolled windows, maintaining tracks for long periods of time even when points are occluded or leave the field of view. Quantitatively, CoTracker substantially outperforms prior trackers on standard point-tracking benchmarks.
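The two mechanisms the abstract describes, joint refinement of all tracks by a transformer and causal operation on short, overlapping windows, can be outlined in a few lines. The sketch below is illustrative only, not the authors' implementation: the window length, overlap, refinement count, and the `net` module are hypothetical placeholders.

```python
import torch

WINDOW = 8    # frames per window (illustrative; the paper uses short windows)
OVERLAP = 4   # frames shared with the previous window so tracks carry over

def track_video(video, net, num_points, num_iters=4):
    """Hypothetical windowed joint tracking.

    video: (T, C, H, W) frames; net: a transformer that refines all tracks
    in a window together, returning updated positions and visibility logits.
    """
    T = video.shape[0]
    tracks = torch.zeros(T, num_points, 2)  # (T, N, 2) point positions
    vis = torch.zeros(T, num_points)        # (T, N) visibility logits
    start = 0
    while start < T:
        end = min(start + WINDOW, T)
        # The overlapping frames were already estimated by the previous
        # window; re-using them is how tracks persist causally across
        # window boundaries.
        win_tracks = tracks[start:end].clone()
        for _ in range(num_iters):
            # Attention runs across both time and tracks, so correlated
            # points can support each other, including occluded ones.
            win_tracks, win_vis = net(video[start:end], win_tracks)
        tracks[start:end], vis[start:end] = win_tracks, win_vis
        start += WINDOW - OVERLAP
    return tracks, vis.sigmoid()
```

Training unrolls several consecutive windows and backpropagates through the hand-off between them, which is what teaches the model to maintain tracks through occlusions longer than a single window.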
Code Repositories
facebookresearch/co-tracker (official)
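A minimal usage sketch against the repository's torch.hub entry point. The entry-point name and exact arguments depend on the release (this assumes the documented `cotracker2` interface), so treat it as a starting point rather than a stable API.

```python
import torch

# Load a published checkpoint via torch.hub; the entry-point name
# ("cotracker2" here) varies across releases of the repository.
cotracker = torch.hub.load("facebookresearch/co-tracker", "cotracker2")

# video: float tensor of shape (B, T, C, H, W) with values in [0, 255].
video = torch.rand(1, 48, 3, 384, 512) * 255

# Track a regular grid of points; per the repo docs, a `queries` tensor of
# (frame_index, x, y) rows can be passed instead to track chosen points.
pred_tracks, pred_visibility = cotracker(video, grid_size=10)

print(pred_tracks.shape)      # (B, T, N, 2): per-frame point coordinates
print(pred_visibility.shape)  # (B, T, N): per-frame visibility
```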
Benchmarks
| Benchmark | Model | Average Jaccard (AJ) | Average PCK | Occlusion Accuracy (OA) |
|---|---|---|---|---|
| TAP-Vid-DAVIS | CoTracker | 65.9 | 79.4 | 89.9 |
| TAP-Vid-DAVIS (first) | CoTracker | 62.2 | 75.7 | 89.3 |
| TAP-Vid-Kinetics (first) | CoTracker | 48.8 | 64.5 | 85.8 |
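The reported metrics follow the TAP-Vid evaluation protocol: Occlusion Accuracy (OA) scores the binary visibility prediction, Average PCK is the fraction of visible points localized within a pixel threshold, averaged over thresholds {1, 2, 4, 8, 16}, and Average Jaccard (AJ) additionally penalizes points that are predicted visible in the wrong place. A NumPy sketch of these definitions (variable names are ours, not from an official evaluation script):

```python
import numpy as np

THRESHOLDS = (1, 2, 4, 8, 16)  # pixel thresholds of the TAP-Vid protocol

def tapvid_metrics(gt_xy, gt_vis, pred_xy, pred_vis):
    """gt_xy, pred_xy: (T, N, 2) positions; gt_vis, pred_vis: (T, N) bools."""
    dist = np.linalg.norm(gt_xy - pred_xy, axis=-1)  # (T, N) pixel error

    # Occlusion Accuracy: how often the visibility flag itself is correct.
    oa = (gt_vis == pred_vis).mean()

    pck, jac = [], []
    for thr in THRESHOLDS:
        within = dist < thr
        # PCK at this threshold: visible points localized within thr pixels.
        pck.append(within[gt_vis].mean())
        # Jaccard: TP = visible, predicted visible, and within threshold;
        # FP = predicted visible but occluded or too far off;
        # FN = ground-truth visible but missed.
        tp = (gt_vis & pred_vis & within).sum()
        fp = (pred_vis & ~(gt_vis & within)).sum()
        fn = (gt_vis & ~(pred_vis & within)).sum()
        jac.append(tp / (tp + fp + fn))
    return {"OA": oa, "Average PCK": np.mean(pck), "AJ": np.mean(jac)}
```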