Pose Transformers (POTR): Human Motion Prediction with Non-Autoregressive Transformers

Angel Martínez-González, Michael Villamizar, Jean-Marc Odobez


Abstract

We propose to leverage Transformer architectures for non-autoregressive human motion prediction. Our approach decodes elements in parallel from a query sequence, instead of conditioning on previous predictions as in state-of-the-art RNN-based approaches. In this way our approach is less computationally intensive and potentially avoids error accumulation in the long-term elements of the sequence. In that context, our contributions are fourfold: (i) we frame human motion prediction as a sequence-to-sequence problem and propose a non-autoregressive Transformer to infer the sequences of poses in parallel; (ii) we propose to decode sequences of 3D poses from a query sequence generated in advance with elements from the input sequence; (iii) we propose to perform skeleton-based activity classification from the encoder memory, in the hope that identifying the activity can improve predictions; (iv) we show that, despite its simplicity, our approach achieves competitive results on two public datasets, although surprisingly more so for short-term predictions than for long-term ones.
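
The abstract describes an encoder-decoder in which the decoder queries are produced in advance from the input poses and all future frames are decoded in a single parallel pass, with an auxiliary activity classifier reading the encoder memory. The PyTorch sketch below illustrates that data flow only; it is not the official idiap/potr code, and the pose dimensionality, the repeat-last-pose query rule, the residual connection, and the mean-pooled classification head are illustrative assumptions.

```python
import torch
import torch.nn as nn


class NonAutoregressivePoseTransformer(nn.Module):
    """Sketch: decode all future poses in one parallel pass (no autoregression)."""

    def __init__(self, pose_dim=63, d_model=128, n_heads=4, n_layers=2,
                 target_len=25, n_classes=10, max_src_len=100):
        super().__init__()
        self.target_len = target_len
        self.embed = nn.Linear(pose_dim, d_model)          # pose -> model space
        # Learned positional embeddings for input and query sequences.
        self.src_pos = nn.Parameter(0.02 * torch.randn(1, max_src_len, d_model))
        self.query_pos = nn.Parameter(0.02 * torch.randn(1, target_len, d_model))
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=n_heads, num_encoder_layers=n_layers,
            num_decoder_layers=n_layers, batch_first=True)
        self.out = nn.Linear(d_model, pose_dim)            # model space -> pose
        self.cls_head = nn.Linear(d_model, n_classes)      # activity classifier

    def forward(self, src_poses):
        # src_poses: (batch, src_len, pose_dim) -- the observed motion history.
        src = self.embed(src_poses) + self.src_pos[:, :src_poses.size(1)]
        memory = self.transformer.encoder(src)

        # Query sequence built in advance from the input: here the last
        # observed pose repeated target_len times (an illustrative choice).
        query = src_poses[:, -1:, :].repeat(1, self.target_len, 1)
        tgt = self.embed(query) + self.query_pos

        # No causal mask: every future pose is decoded in a single parallel
        # pass, so later predictions never condition on earlier ones.
        dec = self.transformer.decoder(tgt, memory)
        pred_poses = query + self.out(dec)                 # residual refinement

        # Skeleton-based activity classification from the encoder memory.
        activity_logits = self.cls_head(memory.mean(dim=1))
        return pred_poses, activity_logits


# Usage: 8 histories of 50 poses (21 joints x 3 coords) -> 25 future poses.
model = NonAutoregressivePoseTransformer()
future, logits = model(torch.randn(8, 50, 63))
print(future.shape, logits.shape)  # (8, 25, 63) and (8, 10)
```

Because the decoder attends to all query positions at once, inference costs a single forward pass regardless of the horizon length, and errors in early frames are never fed back into later predictions, which is the error-accumulation argument the abstract makes against autoregressive RNN decoding.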

Code Repositories

idiap/potr (official, PyTorch)

Benchmarks

Benchmark                              | Methodology               | Metrics
classification-on-full-body-parkinsons | Pose Transformers (POTR)  | F1-score (weighted): 0.46

