Mesh Graphormer

Kevin Lin, Lijuan Wang, Zicheng Liu

Abstract
We present a graph-convolution-reinforced transformer, named Mesh Graphormer, for 3D human pose and mesh reconstruction from a single image. Recently, both transformers and graph convolutional neural networks (GCNNs) have shown promising progress in human mesh reconstruction. Transformer-based approaches are effective at modeling non-local interactions among 3D mesh vertices and body joints, whereas GCNNs are good at exploiting neighborhood vertex interactions based on a pre-specified mesh topology. In this paper, we study how to combine graph convolutions and self-attention in a transformer to model both local and global interactions. Experimental results show that our proposed method, Mesh Graphormer, significantly outperforms previous state-of-the-art methods on multiple benchmarks, including the Human3.6M, 3DPW, and FreiHAND datasets. Code and pre-trained models are available at https://github.com/microsoft/MeshGraphormer
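To make the local/global combination concrete, here is a minimal NumPy sketch of one block that interleaves self-attention (global, all-pairs interactions) with a graph convolution over a mesh adjacency matrix (local, topology-driven interactions). This is an illustrative simplification, not the authors' implementation; the weight names (`Wq`, `Wk`, `Wv`, `Wg`) and the single-head, residual layout are assumptions for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Global (non-local) interactions: every vertex attends to every other.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = softmax(Q @ K.T / np.sqrt(K.shape[-1]))
    return scores @ V

def graph_conv(X, A, Wg):
    # Local interactions: aggregate features over mesh neighbours
    # given by the adjacency matrix A (row-normalised, with self-loops).
    A_hat = A + np.eye(A.shape[0])
    A_norm = A_hat / A_hat.sum(axis=1, keepdims=True)
    return A_norm @ X @ Wg

def graphormer_block(X, A, params):
    # One simplified block: self-attention followed by a graph
    # convolution, each with a residual connection.
    X = X + self_attention(X, params["Wq"], params["Wk"], params["Wv"])
    X = X + graph_conv(X, A, params["Wg"])
    return X

rng = np.random.default_rng(0)
n, d = 5, 8                                   # 5 vertices, feature dim 8
X = rng.standard_normal((n, d))
A = np.array([[0, 1, 0, 0, 1],                # toy mesh topology
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)
params = {k: rng.standard_normal((d, d)) * 0.1
          for k in ("Wq", "Wk", "Wv", "Wg")}
out = graphormer_block(X, A, params)
print(out.shape)                              # (5, 8)
```

The attention term lets distant body parts exchange information regardless of mesh connectivity, while the graph-convolution term injects the pre-specified neighbourhood structure that attention alone ignores.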
Benchmarks
| Benchmark | Method | Metrics |
|---|---|---|
| 3D Hand Pose Estimation on FreiHAND | Mesh Graphormer | PA-MPJPE: 5.9 mm, PA-MPVPE: 6.0 mm, PA-F@5mm: 0.764, PA-F@15mm: 0.986 |
| 3D Hand Pose Estimation on HInt (PCK@0.05, all / occluded / visible) | Mesh Graphormer | NewDays: 16.8 / 7.9 / 22.3; VISOR: 19.1 / 10.9 / 23.6; Ego4D: 14.6 / 8.3 / 18.4 |
| 3D Human Pose Estimation on 3DPW | Mesh Graphormer | MPJPE: 74.7 mm, PA-MPJPE: 45.6 mm, MPVPE: 87.7 mm |
| 3D Human Pose Estimation on Human3.6M (monocular) | Mesh Graphormer | MPJPE: 51.2 mm, PA-MPJPE: 34.5 mm |
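The MPJPE-family numbers above are in millimetres; the PA- variants first align the prediction to the ground truth with a similarity (Procrustes) transform, removing global scale, rotation, and translation before measuring joint error. A minimal NumPy sketch of the two measures (not the official evaluation code):

```python
import numpy as np

def mpjpe(pred, gt):
    # Mean per-joint position error: average Euclidean distance
    # between predicted and ground-truth joints, shape (J, 3).
    return np.linalg.norm(pred - gt, axis=-1).mean()

def procrustes_align(pred, gt):
    # Optimal similarity transform (scale, rotation, translation)
    # mapping pred onto gt, via the orthogonal Procrustes solution.
    mu_p, mu_g = pred.mean(axis=0), gt.mean(axis=0)
    Xp, Xg = pred - mu_p, gt - mu_g
    U, S, Vt = np.linalg.svd(Xg.T @ Xp)
    R = U @ Vt
    if np.linalg.det(R) < 0:        # avoid reflections
        U[:, -1] *= -1
        S[-1] *= -1
        R = U @ Vt
    scale = S.sum() / (Xp ** 2).sum()
    return scale * Xp @ R.T + mu_g

def pa_mpjpe(pred, gt):
    # MPJPE after Procrustes alignment (often called the
    # "reconstruction error").
    return mpjpe(procrustes_align(pred, gt), gt)
```

By construction, a prediction that differs from the ground truth only by a global similarity transform has a PA-MPJPE of zero even when its raw MPJPE is large, which is why the PA- numbers in the table are consistently lower.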