BasicTAD: an Astounding RGB-Only Baseline for Temporal Action Detection

Min Yang, Guo Chen, Yin-Dong Zheng, Tong Lu, Limin Wang

Abstract

Temporal action detection (TAD) is extensively studied in the video understanding community, generally following the object detection pipeline used for images. However, complex designs are common in TAD, such as two-stream feature extraction, multi-stage training, complex temporal modeling, and global context fusion. In this paper, we do not aim to introduce any novel technique for TAD. Instead, given the current status of complex designs and low detection efficiency in TAD, we study a simple, straightforward, yet must-know baseline. In our simple baseline (termed BasicTAD), we decompose the TAD pipeline into several essential components: data sampling, backbone design, neck construction, and detection head. We extensively investigate the existing techniques for each component of this baseline and, more importantly, perform end-to-end training over the entire pipeline thanks to the simplicity of the design. As a result, this simple BasicTAD yields an astounding, real-time, RGB-only baseline that comes very close to state-of-the-art methods with two-stream inputs. In addition, we further improve BasicTAD by preserving more temporal and spatial information in the network representation (termed PlusTAD). Empirical results demonstrate that PlusTAD is very efficient and significantly outperforms previous methods on the THUMOS14 and FineAction datasets. We also perform in-depth visualization and error analysis of the proposed method to provide more insight into the TAD problem. Our approach can serve as a strong baseline for future TAD research. The code and models will be released at https://github.com/MCG-NJU/BasicTAD.
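To make the four-component decomposition described in the abstract more concrete, below is a minimal PyTorch sketch of an RGB-only pipeline in that style (sampled clip, backbone, temporal neck, detection head). All module names, layer choices, and tensor shapes here are illustrative assumptions for exposition only, not the released BasicTAD implementation; see the official repository linked above for the actual code.

```python
# Hypothetical sketch of the abstract's four-component decomposition:
# data sampling -> backbone -> neck -> detection head.
# Every layer choice below is an illustrative stand-in, not BasicTAD itself.
import torch
import torch.nn as nn


class Backbone3D(nn.Module):
    """Stand-in RGB backbone: a 3D conv stem that keeps the temporal axis."""

    def __init__(self, in_channels=3, width=64):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv3d(in_channels, width, kernel_size=(1, 7, 7),
                      stride=(1, 2, 2), padding=(0, 3, 3)),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d((None, 1, 1)),  # collapse only the spatial dims
        )

    def forward(self, clip):              # clip: (B, 3, T, H, W)
        feat = self.stem(clip)            # (B, C, T, 1, 1)
        return feat.flatten(2)            # (B, C, T)


class TemporalNeck(nn.Module):
    """Stand-in neck: strided 1D convs build a small temporal feature pyramid."""

    def __init__(self, channels=64, levels=3):
        super().__init__()
        self.downs = nn.ModuleList([
            nn.Conv1d(channels, channels, kernel_size=3, stride=2, padding=1)
            for _ in range(levels)
        ])

    def forward(self, feat):              # feat: (B, C, T)
        pyramid = [feat]
        for down in self.downs:
            pyramid.append(torch.relu(down(pyramid[-1])))
        return pyramid                    # list of (B, C, T_level)


class DetectionHead(nn.Module):
    """Stand-in head: per-location class scores and (start, end) offsets."""

    def __init__(self, channels=64, num_classes=20):  # THUMOS14 has 20 classes
        super().__init__()
        self.cls = nn.Conv1d(channels, num_classes, kernel_size=3, padding=1)
        self.reg = nn.Conv1d(channels, 2, kernel_size=3, padding=1)

    def forward(self, pyramid):
        return [(self.cls(f), self.reg(f)) for f in pyramid]


if __name__ == "__main__":
    # "Data sampling": a dummy RGB clip; the sizes are arbitrary for this demo.
    clip = torch.randn(1, 3, 96, 112, 112)
    feats = Backbone3D()(clip)
    pyramid = TemporalNeck()(feats)
    for cls_logits, seg_offsets in DetectionHead()(pyramid):
        print(cls_logits.shape, seg_offsets.shape)
```

Because every stage is a plain differentiable module, the whole sketch can be trained end-to-end from raw RGB frames, which is the property the abstract emphasizes.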

Code Repositories

cg1177/dcan (PyTorch)
mcg-nju/basictad (official, PyTorch)

Benchmarks

| Benchmark | Methodology | Avg mAP (0.3:0.7) | mAP IoU@0.3 | mAP IoU@0.4 | mAP IoU@0.5 | mAP IoU@0.6 | mAP IoU@0.7 |
|---|---|---|---|---|---|---|---|
| temporal-action-localization-on-thumos14 | BasicTAD (160,6,192,R50-SlowOnly) | 59.6 | 75.5 | 70.8 | 63.5 | 50.9 | 37.4 |
| temporal-action-localization-on-thumos14 | BasicTAD (112,3,96,R50-SlowOnly) | 54.9 | 68.4 | 65.0 | 58.6 | 49.2 | 33.5 |
| temporal-action-localization-on-thumos14-2 | BasicTAD (R50-SlowOnly) | 59.6 | – | – | – | – | – |
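As a quick sanity check on the table, the "Avg mAP (0.3:0.7)" column is simply the mean of the five per-threshold mAP values, e.g. (75.5 + 70.8 + 63.5 + 50.9 + 37.4) / 5 = 59.6. The short snippet below reproduces both averaged rows from the values listed above.

```python
# Reproduce the "Avg mAP (0.3:0.7)" column as the mean of the per-IoU mAPs
# (thresholds 0.3, 0.4, 0.5, 0.6, 0.7) reported in the table above.
per_iou_map = {
    "BasicTAD (160,6,192,R50-SlowOnly)": [75.5, 70.8, 63.5, 50.9, 37.4],
    "BasicTAD (112,3,96,R50-SlowOnly)": [68.4, 65.0, 58.6, 49.2, 33.5],
}
for method, maps in per_iou_map.items():
    print(method, round(sum(maps) / len(maps), 1))  # -> 59.6 and 54.9
```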
