ViTPose: Simple Vision Transformer Baselines for Human Pose Estimation

Yufei Xu Jing Zhang Qiming Zhang Dacheng Tao


Abstract

Although no specific domain knowledge is considered in the design, plain vision transformers have shown excellent performance in visual recognition tasks. However, little effort has been made to reveal the potential of such simple structures for pose estimation tasks. In this paper, we show the surprisingly good capabilities of plain vision transformers for pose estimation from various aspects, namely simplicity in model structure, scalability in model size, flexibility in training paradigm, and transferability of knowledge between models, through a simple baseline model called ViTPose. Specifically, ViTPose employs plain and non-hierarchical vision transformers as backbones to extract features for a given person instance and a lightweight decoder for pose estimation. It can be scaled up from 100M to 1B parameters by taking advantage of the scalable model capacity and high parallelism of transformers, setting a new Pareto front between throughput and performance. In addition, ViTPose is very flexible regarding the attention type, input resolution, pre-training and fine-tuning strategies, as well as handling multiple pose estimation tasks. We also empirically demonstrate that the knowledge of large ViTPose models can be easily transferred to small ones via a simple knowledge token. Experimental results show that our basic ViTPose model outperforms representative methods on the challenging MS COCO Keypoint Detection benchmark, while the largest model sets a new state of the art. The code and models are available at https://github.com/ViTAE-Transformer/ViTPose.
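As a rough illustration of the design described in the abstract, the sketch below pairs a plain, non-hierarchical ViT backbone with a lightweight deconvolution decoder that predicts per-keypoint heatmaps for a single cropped person instance. The layer sizes, 17-keypoint output, and module names are illustrative assumptions rather than the authors' exact configuration; the official implementation is in the linked repository.

```python
# Minimal PyTorch sketch of the ViTPose design: a plain (non-hierarchical)
# ViT backbone followed by a lightweight decoder producing keypoint heatmaps.
# All hyperparameters here (patch size, embed dim, depth, 17 COCO keypoints)
# are illustrative assumptions, not the official configuration.
import torch
import torch.nn as nn


class PlainViTBackbone(nn.Module):
    """Patch embedding + a stack of standard Transformer encoder blocks."""

    def __init__(self, img_size=(256, 192), patch_size=16, embed_dim=768,
                 depth=12, num_heads=12):
        super().__init__()
        self.patch_embed = nn.Conv2d(3, embed_dim, kernel_size=patch_size,
                                     stride=patch_size)
        grid = (img_size[0] // patch_size, img_size[1] // patch_size)
        self.pos_embed = nn.Parameter(torch.zeros(1, grid[0] * grid[1], embed_dim))
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=num_heads,
                                           dim_feedforward=4 * embed_dim,
                                           batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):
        x = self.patch_embed(x)                       # (B, C, H/16, W/16)
        b, c, h, w = x.shape
        x = x.flatten(2).transpose(1, 2)              # (B, N, C) token sequence
        x = self.blocks(x + self.pos_embed)
        return x.transpose(1, 2).reshape(b, c, h, w)  # back to a feature map


class SimpleDecoder(nn.Module):
    """Lightweight decoder: two deconvolutions + 1x1 conv to keypoint heatmaps."""

    def __init__(self, in_dim=768, num_keypoints=17):
        super().__init__()
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(in_dim, 256, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(256), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(256, 256, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(256), nn.ReLU(inplace=True),
        )
        self.head = nn.Conv2d(256, num_keypoints, kernel_size=1)

    def forward(self, x):
        return self.head(self.deconv(x))              # (B, K, H/4, W/4) heatmaps


class ViTPoseSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = PlainViTBackbone()
        self.decoder = SimpleDecoder()

    def forward(self, img):
        return self.decoder(self.backbone(img))


if __name__ == "__main__":
    model = ViTPoseSketch()
    heatmaps = model(torch.randn(1, 3, 256, 192))     # one cropped person instance
    print(heatmaps.shape)                             # torch.Size([1, 17, 64, 48])
```

Keeping the backbone a single-resolution token sequence is what makes the model easy to scale: larger variants only change the embedding dimension, depth, and number of heads, while the decoder stays unchanged.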

Code Repositories

ViTAE-Transformer/ViTPose (official): https://github.com/ViTAE-Transformer/ViTPose

Benchmarks

2D Human Pose Estimation on Human-Art
ViTPose-s: AP 0.381, AP (GT bbox) 0.738
ViTPose-b: AP 0.410, AP (GT bbox) 0.759
ViTPose-l: AP 0.459, AP (GT bbox) 0.789
ViTPose-h: AP 0.468, AP (GT bbox) 0.800

Pose Estimation on COCO test-dev
ViTPose (ViTAE-G): AP 80.9, AP50 94.8, AP75 88.1, APM 77.5, APL 85.9, AR 85.4
ViTPose (ViTAE-G, ensemble): AP 81.1, AP50 95.0, AP75 88.2, APM 77.8, APL 86.0, AR 85.6

Pose Estimation on COCO val2017
ViTPose-B (single task, GT bbox, 256x192): AP 77.3, AP50 93.5, AP75 84.5, AR 80.4
ViTPose-B (single task, detected bbox, 256x192): AP 75.8, AP50 90.7, AP75 83.2, AR 81.1

Pose Estimation on CrowdPose
ViTPose-G: AP 78.3, AP50 85.3, AP75 81.4, APM 86.6, AP (hard) 67.9

Pose Estimation on OCHuman
ViTPose (ViTAE-G, GT bounding boxes): Test AP 93.3, Validation AP 92.8
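The AP, AP50, AP75, APM, APL, and AR figures above are the standard OKS-based COCO keypoint metrics. As a hedged sketch of how such numbers are typically produced, the snippet below runs the stock pycocotools keypoint evaluation on a set of predictions; the annotation and result file names are placeholders for illustration, not files shipped with ViTPose.

```python
# Sketch: computing OKS-based keypoint AP/AR with pycocotools.
# The file names below are placeholders, not part of the ViTPose release.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("person_keypoints_val2017.json")             # ground-truth annotations
coco_dt = coco_gt.loadRes("vitpose_keypoint_results.json")  # model predictions

evaluator = COCOeval(coco_gt, coco_dt, iouType="keypoints")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()  # prints AP, AP50, AP75, APM, APL, and AR as reported above
```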
