End-to-End Spatio-Temporal Action Localisation with Video Transformers
Alexey Gritsenko; Xuehan Xiong; Josip Djolonga; Mostafa Dehghani; Chen Sun; Mario Lučić; Cordelia Schmid; Anurag Arnab

Abstract
The most performant spatio-temporal action localisation models use external person proposals and complex external memory banks. We propose a fully end-to-end, purely transformer-based model that directly ingests an input video and outputs tubelets: sequences of bounding boxes and the action classes at each frame. Our flexible model can be trained with either sparse bounding-box supervision on individual keyframes or full tubelet annotations, and in both cases it predicts coherent tubelets as the output. Moreover, our end-to-end model requires no additional pre-processing in the form of proposals, nor post-processing in the form of non-maximal suppression. We perform extensive ablation experiments, and significantly advance the state-of-the-art on four different spatio-temporal action localisation benchmarks with both sparse keyframe and full tubelet annotations.
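The output format described above can be illustrated with a minimal sketch. The class and field names below (`Tubelet`, `boxes`, `action_scores`) are illustrative assumptions rather than the authors' code; the sketch only shows the kind of data structure an end-to-end tubelet detector would emit, namely one bounding box and one set of action scores per frame for each detected person, with no external proposals or non-maximal suppression step.

```python
from dataclasses import dataclass
import numpy as np


@dataclass
class Tubelet:
    """One detected person track over T frames.

    Hypothetical structure illustrating the paper's output format; the field
    names are assumptions, not the authors' API.
    """
    boxes: np.ndarray          # (T, 4) per-frame boxes as (x1, y1, x2, y2), normalised to [0, 1]
    action_scores: np.ndarray  # (T, C) per-frame scores over C action classes

    def labels_at(self, t: int, threshold: float = 0.5) -> list:
        """Action classes active at frame t (multi-label, as in AVA)."""
        return np.nonzero(self.action_scores[t] >= threshold)[0].tolist()


# Example: for a clip of T frames, an end-to-end detector would return a list
# of such tubelets directly, one per person, with boxes linked across frames.
T, C = 8, 80
tubelet = Tubelet(
    boxes=np.tile(np.array([[0.1, 0.2, 0.4, 0.9]]), (T, 1)),
    action_scores=np.random.rand(T, C),
)
print(tubelet.labels_at(0))
```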
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| action-detection-on-ucf101-24 | STAR/L | Frame-mAP@0.5: 90.3; Video-mAP@0.2: 88.0; Video-mAP@0.5: 71.8 |
| action-recognition-in-videos-on-ava-v21 | STAR/L | mAP (Val): 41.7 |
| action-recognition-on-ava-v2-2 | STAR/L | mAP: 41.7 |
| spatio-temporal-action-localization-on-ava | STAR/L | val mAP: 41.7 |