End-to-end Learning of Action Detection from Frame Glimpses in Videos

Serena Yeung; Olga Russakovsky; Greg Mori; Li Fei-Fei

Abstract

In this work we introduce a fully end-to-end approach for action detection in videos that learns to directly predict the temporal bounds of actions. Our intuition is that the process of detecting actions is naturally one of observation and refinement: observing moments in video, and refining hypotheses about when an action is occurring. Based on this insight, we formulate our model as a recurrent neural network-based agent that interacts with a video over time. The agent observes video frames and decides both where to look next and when to emit a prediction. Since backpropagation is not adequate in this non-differentiable setting, we use REINFORCE to learn the agent's decision policy. Our model achieves state-of-the-art results on the THUMOS'14 and ActivityNet datasets while observing only a fraction (2% or less) of the video frames.
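
As a rough illustration of the setup the abstract describes, the sketch below implements a recurrent agent that observes one frame feature per step, samples where to look next and whether to emit a detection, and is trained with REINFORCE because those sampling steps are non-differentiable. It is written in PyTorch to match the repository tag below; all module sizes, the Gaussian location policy, the placeholder reward, and every name here are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a frame-glimpse agent trained with REINFORCE.
# Dimensions, distributions, and the reward are assumptions for illustration.
import torch
import torch.nn as nn

class GlimpseAgent(nn.Module):
    def __init__(self, feat_dim=4096, hidden_dim=1024):
        super().__init__()
        self.rnn = nn.LSTMCell(feat_dim, hidden_dim)
        # Candidate detection (start, end, confidence): differentiable, trained with backprop.
        self.detection = nn.Linear(hidden_dim, 3)
        # Non-differentiable decisions, learned with REINFORCE:
        self.location = nn.Linear(hidden_dim, 1)  # mean of a Gaussian over normalized video time
        self.emit = nn.Linear(hidden_dim, 1)      # Bernoulli logit: emit a prediction now?

    def step(self, feat, state):
        h, c = self.rnn(feat, state)
        det = self.detection(h)
        # Decisions are *sampled*; their log-probs are kept so REINFORCE
        # can weight them by the episode reward.
        loc_dist = torch.distributions.Normal(torch.sigmoid(self.location(h)), 0.1)
        emit_dist = torch.distributions.Bernoulli(logits=self.emit(h))
        loc, emit = loc_dist.sample(), emit_dist.sample()
        log_prob = loc_dist.log_prob(loc) + emit_dist.log_prob(emit)
        return det, loc, emit, log_prob, (h, c)

def run_episode(agent, frame_feats, n_glimpses=8):
    """Run the agent for a few glimpses over precomputed frame features."""
    T = frame_feats.shape[0]
    state, t = None, 0
    detections, log_probs = [], []
    for _ in range(n_glimpses):
        det, loc, emit, log_prob, state = agent.step(frame_feats[t:t + 1], state)
        log_probs.append(log_prob)
        if emit.item() > 0.5:
            detections.append(det)
        # Jump to the sampled (normalized) time; clamp to the valid range.
        t = int(loc.clamp(0, 1).item() * (T - 1))
    return detections, torch.stack(log_probs)

# Toy usage: random features and a scalar placeholder standing in for the
# paper's detection-quality reward.
agent = GlimpseAgent()
feats = torch.randn(100, 4096)          # 100 frames of CNN features (assumed)
dets, log_probs = run_episode(agent, feats)
reward = torch.tensor(1.0)              # placeholder reward
loss = -(reward * log_probs).mean()     # REINFORCE objective
loss.backward()
```

Note how only a handful of glimpses are taken per episode, which is what lets the model observe 2% or fewer of the video frames at test time.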

Code Repositories

syyeung/frameglimpses (PyTorch)

Benchmarks

Benchmark | Methodology | Metrics
action-recognition-in-videos-on-thumos14 | Yeung et al. | mAP@0.1: 48.9, mAP@0.2: 44.0, mAP@0.3: 36.0, mAP@0.4: 26.4, mAP@0.5: 17.1
temporal-action-localization-on-thumos14 | Yeung et al. | mAP IOU@0.1: 48.9, mAP IOU@0.2: 44.0, mAP IOU@0.3: 36.0, mAP IOU@0.4: 26.4, mAP IOU@0.5: 17.1
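
The "IOU" thresholds above refer to temporal intersection-over-union: a predicted action segment counts as correct at mAP IOU@0.5 only if its overlap with a ground-truth segment, divided by their union, is at least 0.5. A minimal helper showing the computation (function name and example numbers are illustrative, not from the page):

```python
def temporal_iou(pred, gt):
    """pred, gt: (start, end) times in seconds for an action segment."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

# Overlap of 3s over a union of 7s: IoU = 3/7 ~= 0.43, a miss at the 0.5 threshold.
print(temporal_iou((2.0, 7.0), (4.0, 9.0)))
```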
