Fusing Wearable IMUs with Multi-View Images for Human Pose Estimation: A Geometric Approach
Zhe Zhang; Chunyu Wang; Wenhu Qin; Wenjun Zeng

Abstract
We propose to estimate 3D human pose from multi-view images and a few IMUs attached to the person's limbs. The method operates by first detecting 2D poses from the two signals and then lifting them to 3D space. We present a geometric approach to reinforce the visual features of each pair of joints based on the IMUs. This notably improves 2D pose estimation accuracy, especially when one joint is occluded. We call this approach the Orientation Regularized Network (ORN). We then lift the multi-view 2D poses to 3D space with an Orientation Regularized Pictorial Structure Model (ORPSM), which jointly minimizes the projection error between the 3D and 2D poses and the discrepancy between the 3D pose and the IMU orientations. This simple two-step approach reduces the error of the state of the art by a large margin on a public dataset. Our code will be released at https://github.com/CHUNYUWANG/imu-human-pose-pytorch.
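The abstract describes ORPSM as jointly minimizing the reprojection error of the 3D pose against the multi-view 2D detections and the discrepancy between the 3D pose's limb directions and the IMU orientations. The sketch below illustrates how such a combined energy could look; it is not the authors' implementation. The function names (`project`, `orpsm_energy`), the limb-pair indexing, the cosine-based orientation term, and the weight `lam` are illustrative assumptions, and a simple pinhole camera model is assumed.

```python
# Minimal sketch of an ORPSM-style energy (illustrative, not the paper's code):
# a 3D pose should (a) reproject close to the per-view 2D detections and
# (b) have limb directions consistent with the IMU-measured orientations.

import numpy as np


def project(points_3d, cam_proj):
    """Project (J, 3) points with a 3x4 camera projection matrix (pinhole model)."""
    homo = np.hstack([points_3d, np.ones((points_3d.shape[0], 1))])  # (J, 4)
    uvw = homo @ cam_proj.T                                          # (J, 3)
    return uvw[:, :2] / uvw[:, 2:3]                                  # (J, 2)


def orpsm_energy(pose_3d, poses_2d, cam_projs, imu_dirs, limb_pairs, lam=1.0):
    """
    pose_3d    : (J, 3) candidate 3D joint locations
    poses_2d   : (V, J, 2) detected 2D joints in each of V views
    cam_projs  : (V, 3, 4) projection matrices
    imu_dirs   : dict {limb_index: unit 3-vector} limb directions measured by IMUs
    limb_pairs : list of (parent_joint, child_joint) index tuples
    lam        : weight trading off projection error vs. orientation discrepancy
    """
    # Term 1: squared reprojection error summed over views and joints.
    proj_err = 0.0
    for P, pose_2d in zip(cam_projs, poses_2d):
        proj_err += np.sum((project(pose_3d, P) - pose_2d) ** 2)

    # Term 2: discrepancy between the 3D pose's limb directions and the
    # IMU orientations, measured as 1 - cosine similarity per instrumented limb.
    orient_err = 0.0
    for limb_idx, (parent, child) in enumerate(limb_pairs):
        if limb_idx not in imu_dirs:
            continue  # only a few limbs carry IMUs
        limb = pose_3d[child] - pose_3d[parent]
        limb = limb / (np.linalg.norm(limb) + 1e-8)
        orient_err += 1.0 - float(limb @ imu_dirs[limb_idx])

    return proj_err + lam * orient_err
```

In the paper's ORPSM, the minimization is carried out as a pictorial structure inference over discretized candidate joint locations rather than by continuous optimization of a single pose; the sketch only shows how the two terms named in the abstract can be combined into one objective.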
Code Repositories
- https://github.com/CHUNYUWANG/imu-human-pose-pytorch
Benchmarks
| Benchmark | Method | MPJPE (mm) |
|---|---|---|
| 3D absolute human pose estimation on Total Capture | GeoFuse | 24.6 |
| 3D human pose estimation on Total Capture | GeoFuse | 24.6 |