Privileged Knowledge Distillation for Online Action Detection

Peisen Zhao, Lingxi Xie, Ya Zhang, Yanfeng Wang, Qi Tian

Abstract

Online Action Detection (OAD) in videos is formulated as a per-frame labeling task that addresses real-time prediction settings in which only the previous and current video frames are observable. This paper presents a novel learning-with-privileged-information framework for online action detection, in which future frames, observable only at the training stage, are treated as a form of privileged information. Knowledge distillation is employed to transfer this privileged information from the offline teacher to the online student. We note that this setting differs from conventional KD because the difference between the teacher and student models lies mostly in the input data rather than the network architecture. We propose Privileged Knowledge Distillation (PKD), which (i) schedules a curriculum learning procedure and (ii) inserts auxiliary nodes into the student model, both to shrink the information gap and improve learning performance. Compared to other OAD methods that explicitly predict future frames, our approach avoids learning unpredictable, unnecessary, and inconsistent visual contents, and achieves state-of-the-art accuracy on two popular OAD benchmarks, TVSeries and THUMOS14.
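To make the teacher-student setup concrete, the following is a minimal sketch of a standard temperature-scaled knowledge distillation objective, the generic building block the abstract refers to. It is not the paper's PKD implementation: the array shapes, the temperature value, and the idea that teacher logits come from past+future frames while student logits come from past frames only are illustrative assumptions here.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the class axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=2.0):
    """Generic distillation loss: KL divergence between the
    temperature-softened teacher and student distributions,
    averaged over frames and scaled by T^2 (Hinton-style KD)."""
    p = softmax(teacher_logits / T)  # teacher soft targets
    q = softmax(student_logits / T)  # student predictions
    return float((p * (np.log(p) - np.log(q))).sum(axis=-1).mean() * T * T)

# Hypothetical per-frame logits: the offline teacher would see past and
# future frames, the online student only past and current frames.
rng = np.random.default_rng(0)
teacher_logits = rng.normal(size=(4, 10))  # 4 frames, 10 action classes
student_logits = teacher_logits + rng.normal(scale=0.1, size=(4, 10))

loss = kd_loss(student_logits, teacher_logits)
```

In the per-frame OAD setting this loss would be applied at every timestep, so the student is pushed toward the teacher's future-informed per-frame predictions without ever receiving future frames as input.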

Benchmarks

Benchmark                            | Methodology | Metrics
online-action-detection-on-tvseries  | PKD         | mCAP: 86.4
