HyperAI
PoseRAC: Pose Saliency Transformer for Repetitive Action Counting

Ziyu Yao; Xuxin Cheng; Yuexian Zou

Abstract

This paper presents a significant contribution to the field of repetitive action counting through the introduction of a new approach called Pose Saliency Representation. The proposed method efficiently represents each action using only two salient poses instead of redundant frames, significantly reducing computational cost while improving performance. Moreover, we introduce PoseRAC, a pose-level method built on this representation, which achieves state-of-the-art performance on new versions of two datasets by using Pose Saliency Annotation to annotate salient poses for training. Our lightweight model is highly efficient, requiring only 20 minutes of training on a GPU, and infers nearly 10x faster than previous methods. In addition, our approach achieves a substantial improvement over the previous state of the art, TransRAC, reaching an OBO metric of 0.56 compared to TransRAC's 0.29. The code and new dataset are available at https://github.com/MiracleDance/PoseRAC for further research and experimentation, making our proposed approach highly accessible to the research community.
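The core idea of the abstract, counting one repetition per full cycle between two salient poses, can be sketched as follows. This is a minimal illustration, not the official PoseRAC implementation: the per-frame score, threshold values, and function name are assumptions for the example.

```python
def count_repetitions(frame_scores, enter=0.8, exit=0.2):
    """Count repetitions from per-frame saliency scores in [0, 1],
    where a score near 1 means the frame matches salient pose A and
    a score near 0 means it matches salient pose B.

    A repetition is one full A -> B -> A cycle. Two hysteresis
    thresholds (enter/exit) prevent double-counting when scores
    jitter near a single decision boundary.
    """
    count = 0
    in_pose_b = False
    for score in frame_scores:
        if not in_pose_b and score < exit:
            # transition A -> B: the cycle has started
            in_pose_b = True
        elif in_pose_b and score > enter:
            # transition B -> A: the cycle is complete
            in_pose_b = False
            count += 1
    return count


# Example: two full cycles (e.g., stand -> squat -> stand, twice)
scores = [0.9, 0.9, 0.1, 0.1, 0.9, 0.95, 0.15, 0.1, 0.9]
print(count_repetitions(scores))  # -> 2
```

In the paper's pipeline the per-frame scores would come from a pose-level classifier over detected keypoints rather than being given directly; the counting logic above only illustrates why two salient poses per action suffice to define a repetition.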

Code Repositories

miracledance/poserac (official, PyTorch)

Benchmarks

Benchmark: Repetitive Action Counting on RepCount
Methodology: PoseRAC
Metrics: OBO: 0.560
