Multi-Modality Co-Learning for Efficient Skeleton-based Action Recognition

Jinfu Liu, Chen Chen, Mengyuan Liu


Abstract

Skeleton-based action recognition has garnered significant attention due to the utilization of concise and resilient skeletons. Nevertheless, the absence of detailed body information in skeletons restricts performance, while other multimodal methods require substantial inference resources and are inefficient when using multimodal data during both training and inference stages. To address this and fully harness the complementary multimodal features, we propose a novel multi-modality co-learning (MMCL) framework that leverages multimodal large language models (LLMs) as auxiliary networks for efficient skeleton-based action recognition. It engages in multi-modality co-learning during the training stage and remains efficient by relying only on concise skeletons at inference. Our MMCL framework primarily consists of two modules. First, the Feature Alignment Module (FAM) extracts rich RGB features from video frames and aligns them with global skeleton features via contrastive learning. Second, the Feature Refinement Module (FRM) uses RGB images with temporal information and text instructions to generate instructive features based on the powerful generalization of multimodal LLMs. These instructive text features further refine the classification scores, and the refined scores enhance the model's robustness and generalization in a manner similar to soft labels. Extensive experiments on the NTU RGB+D, NTU RGB+D 120 and Northwestern-UCLA benchmarks consistently verify the effectiveness of our MMCL, which outperforms existing skeleton-based action recognition methods. Meanwhile, experiments on the UTD-MHAD and SYSU-Action datasets demonstrate the commendable generalization of our MMCL in zero-shot and domain-adaptive action recognition. Our code is publicly available at: https://github.com/liujf69/MMCL-Action.
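The abstract describes two training-time modules: FAM, which aligns RGB and skeleton features via contrastive learning, and FRM, which uses LLM-derived instructive text features to refine classification scores in a soft-label fashion. The PyTorch snippet below is a minimal illustrative sketch of these two ideas only; the projection dimensions, the symmetric InfoNCE-style loss, the refine_scores helper and its mixing weight alpha are assumptions for illustration and are not taken from the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureAlignmentSketch(nn.Module):
    """Illustrative FAM-style alignment: project skeleton and RGB features into a
    shared space and pull paired samples together with a contrastive loss."""

    def __init__(self, skel_dim=256, rgb_dim=512, embed_dim=128, temperature=0.07):
        super().__init__()
        self.skel_proj = nn.Linear(skel_dim, embed_dim)   # assumed projection heads
        self.rgb_proj = nn.Linear(rgb_dim, embed_dim)
        self.temperature = temperature

    def forward(self, skel_feat, rgb_feat):
        # skel_feat: (B, skel_dim) global skeleton features
        # rgb_feat:  (B, rgb_dim)  features pooled from the paired video frames
        z_s = F.normalize(self.skel_proj(skel_feat), dim=-1)
        z_r = F.normalize(self.rgb_proj(rgb_feat), dim=-1)
        logits = z_s @ z_r.t() / self.temperature          # (B, B) similarity matrix
        targets = torch.arange(z_s.size(0), device=z_s.device)
        # Symmetric loss over skeleton-to-RGB and RGB-to-skeleton directions.
        return 0.5 * (F.cross_entropy(logits, targets) +
                      F.cross_entropy(logits.t(), targets))


def refine_scores(cls_logits, text_scores, alpha=0.8):
    """Soft-label-style refinement: blend classifier scores with per-class scores
    derived from the LLM's instructive text features (text_scores is a stand-in;
    alpha is an assumed mixing weight)."""
    return alpha * F.softmax(cls_logits, dim=-1) + (1.0 - alpha) * F.softmax(text_scores, dim=-1)


if __name__ == "__main__":
    fam = FeatureAlignmentSketch()
    loss = fam(torch.randn(8, 256), torch.randn(8, 512))              # toy batch of 8 samples
    refined = refine_scores(torch.randn(8, 60), torch.randn(8, 60))   # 60 classes, as in NTU RGB+D
    print(loss.item(), refined.shape)
```

At inference, only the skeleton branch and its classifier would be kept, which is how the framework stays efficient while the RGB and text branches contribute only during training.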

Code Repositories

liujf69/MMCL-Action (official PyTorch implementation)

Benchmarks

skeleton-based-action-recognition-on-n-ucla
  Methodology: MMCL
  Accuracy: 97.5

skeleton-based-action-recognition-on-ntu-rgbd
  Methodology: MMCL
  Accuracy (CS): 93.5
  Accuracy (CV): 97.4
  Ensembled Modalities: 6

skeleton-based-action-recognition-on-ntu-rgbd-1
  Methodology: MMCL
  Accuracy (Cross-Setup): 91.7
  Accuracy (Cross-Subject): 90.3
  Ensembled Modalities: 6

