HyperAI

PE-former: Pose Estimation Transformer

Paschalis Panteleris Antonis Argyros


Abstract

Vision transformer architectures have been demonstrated to work very effectively for image classification tasks. Efforts to solve more challenging vision tasks with transformers rely on convolutional backbones for feature extraction. In this paper, we investigate the use of a pure transformer architecture (i.e., one with no CNN backbone) for the problem of 2D body pose estimation. We evaluate two ViT architectures on the COCO dataset. We demonstrate that using an encoder-decoder transformer architecture yields state-of-the-art results on this estimation problem.
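The idea behind a backbone-free pose transformer can be illustrated with a minimal PyTorch sketch: a strided convolution turns image patches into tokens (replacing the CNN feature extractor), an encoder-decoder transformer attends over them with one learned query per body joint, and a small head regresses normalized keypoint coordinates. All module names, layer sizes, and the DETR-style query decoding below are illustrative assumptions, not the official `padeler/pe-former` implementation, which uses ViT/XCiT encoders.

```python
import torch
import torch.nn as nn

class PoseTransformerSketch(nn.Module):
    """Hypothetical minimal encoder-decoder pose estimator (no CNN backbone)."""

    def __init__(self, img_size=192, patch=16, dim=128, joints=17):
        super().__init__()
        n_patches = (img_size // patch) ** 2
        # Patch embedding: a strided conv that maps each patch to one token.
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))
        # One learned query per joint, decoded DETR-style.
        self.queries = nn.Parameter(torch.zeros(1, joints, dim))
        self.transformer = nn.Transformer(
            d_model=dim, nhead=4, num_encoder_layers=2,
            num_decoder_layers=2, batch_first=True)
        # Regress normalized (x, y) coordinates for each joint.
        self.head = nn.Linear(dim, 2)

    def forward(self, x):
        tokens = self.embed(x).flatten(2).transpose(1, 2) + self.pos
        q = self.queries.expand(x.size(0), -1, -1)
        out = self.transformer(tokens, q)
        return self.head(out).sigmoid()  # (batch, joints, 2), values in [0, 1]

model = PoseTransformerSketch()
coords = model(torch.randn(2, 3, 192, 192))
print(coords.shape)  # torch.Size([2, 17, 2])
```

The sketch outputs one (x, y) pair per COCO joint; a real model would add intermediate supervision, heavier encoders, and COCO-style evaluation to reach the AP/AR figures reported below.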

Code Repositories

padeler/pe-former
Official
pytorch

Benchmarks

Benchmark: pose-estimation-on-coco
Methodology: PEFORMER-Xcit-dino-p8
Metrics: AP: 72.6, AR: 79.4

