
You Only Watch Once: A Unified CNN Architecture for Real-Time Spatiotemporal Action Localization

Okan Köpüklü Xiangyu Wei Gerhard Rigoll

Abstract

Spatiotemporal action localization requires the incorporation of two sources of information into the designed architecture: (1) temporal information from the previous frames and (2) spatial information from the key frame. Current state-of-the-art approaches usually extract this information with separate networks and use an extra fusion mechanism to obtain detections. In this work, we present YOWO, a unified CNN architecture for real-time spatiotemporal action localization in video streams. YOWO is a single-stage architecture with two branches that extract temporal and spatial information concurrently and predict bounding boxes and action probabilities directly from video clips in one evaluation. Since the whole architecture is unified, it can be optimized end-to-end. YOWO is fast, running at 34 frames per second on 16-frame input clips and 62 frames per second on 8-frame input clips, making it currently the fastest state-of-the-art architecture for spatiotemporal action localization. Remarkably, YOWO outperforms the previous state-of-the-art results on J-HMDB-21 and UCF101-24 with an impressive improvement of ~3% and ~12%, respectively. Moreover, YOWO is the first and only single-stage architecture that provides competitive results on the AVA dataset. We make our code and pretrained models publicly available.
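The data flow the abstract describes (a 3D branch over the clip, a 2D branch over the key frame, channel-wise fusion, then a single-stage detection head) can be sketched in terms of tensor shapes. This is a hypothetical illustration, not the authors' implementation: the channel counts, anchor count, and stride below are placeholder assumptions, and NumPy zero tensors stand in for the actual convolutional backbones.

```python
import numpy as np

def yowo_forward_shapes(clip):
    """Shape-level sketch of YOWO's forward pass.

    clip: (C, T, H, W) array — C channels, T frames, spatial H x W.
    Returns a per-cell prediction map, as in single-stage detectors.
    """
    C, T, H, W = clip.shape
    Hf, Wf = H // 32, W // 32          # assumed backbone stride of 32

    # 3D-CNN branch: temporal information from all T frames of the clip,
    # with the temporal dimension collapsed (placeholder 512 channels).
    feat_3d = np.zeros((512, Hf, Wf))

    # 2D-CNN branch: spatial information from the key (last) frame
    # (placeholder 425 channels).
    key_frame = clip[:, -1]            # (C, H, W)
    feat_2d = np.zeros((425, Hf, Wf))

    # Fusion: concatenate the two feature maps along the channel axis;
    # the paper applies a fusion mechanism here before the final head.
    fused = np.concatenate([feat_3d, feat_2d], axis=0)

    # Single-stage head: per anchor, 4 box coordinates + 1 objectness
    # score + one score per action class (assumed values below).
    num_anchors, num_classes = 5, 24   # e.g. 24 classes for UCF101-24
    out_channels = num_anchors * (4 + 1 + num_classes)
    return np.zeros((out_channels, Hf, Wf))

pred = yowo_forward_shapes(np.zeros((3, 16, 224, 224)))
print(pred.shape)  # (145, 7, 7): 5 anchors x 29 values per 7x7 cell
```

Because both branches end at the same spatial resolution, fusion is a plain channel concatenation and the whole pipeline can be trained end-to-end, which is the point the abstract emphasizes.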

Code Repositories

wei-tim/YOWO (pytorch, official)
nuschandra/Tennis-Stroke-Detection (pytorch)
Stepphonwol/my_yowo (pytorch)
BoChenUIUC/YOWO (pytorch)
zwtu/YOWO-Paddle (paddle)

Benchmarks

Benchmark | Methodology | Metrics
action-detection-on-j-hmdb | YOWO | Frame-mAP 0.5: 74.4, Video-mAP 0.2: 87.8, Video-mAP 0.5: 85.7
action-detection-on-j-hmdb | YOWO + LFB | Frame-mAP 0.5: 75.7, Video-mAP 0.2: 88.3, Video-mAP 0.5: 85.9
action-detection-on-ucf101-24 | YOWO | Frame-mAP 0.5: 80.4, Video-mAP 0.1: 82.5, Video-mAP 0.2: 75.8, Video-mAP 0.5: 48.8
action-detection-on-ucf101-24 | YOWO + LFB | Frame-mAP 0.5: 87.3, Video-mAP 0.1: 86.1, Video-mAP 0.2: 78.6, Video-mAP 0.5: 53.1
action-recognition-in-videos-on-ava-v2-1 | YOWO+LFB* | mAP (Val): 19.2
action-recognition-in-videos-on-ava-v2-2 | YOWO+LFB* | mAP (Val): 20.2
