
Abstract
Multi-person pose estimation in the wild is a challenging task. Although state-of-the-art human detectors have demonstrated good performance, small errors in localization and recognition are inevitable. These errors can cause failures for a single-person pose estimator (SPPE), especially for methods that rely entirely on human detection results. This paper proposes a novel Regional Multi-Person Pose Estimation (RMPE) framework to facilitate pose estimation in the presence of inaccurate human bounding boxes. The framework consists of three components: a Symmetric Spatial Transformer Network (SSTN), Parametric Pose Non-Maximum Suppression (NMS), and a Pose-Guided Proposals Generator (PGPG). Our method handles inaccurate bounding boxes and redundant detections, achieving a 17% mAP improvement over state-of-the-art methods on the MPII (multi-person) dataset. Our model and source code are publicly available.
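Of the three components, Parametric Pose NMS is the most self-contained algorithmically: candidate poses are sorted by confidence, the top pose is kept as a reference, and any remaining pose whose distance to it falls below a threshold is suppressed, then the process repeats. The sketch below illustrates that greedy procedure under simplified assumptions: the distance mirrors the paper's split into a joint-confidence term and a spatial term, but the gating `radius`, `sigma`, `lam`, and `eta` values are illustrative placeholders, not the data-driven parameters learned in the paper.

```python
# Minimal sketch of greedy pose-level NMS in the spirit of Parametric Pose NMS.
# All parameter values are illustrative placeholders, not the learned ones.
import numpy as np

def pose_distance(kpts_a, kpts_b, scores_a, scores_b,
                  sigma=1.0, lam=0.5, radius=10.0):
    """Negative similarity between two poses (more negative = more alike).

    kpts_*: (K, 2) joint coordinates; scores_*: (K,) joint confidences.
    The confidence term only counts joints that roughly coincide,
    loosely mimicking the paper's box-gated K_sim / Gaussian H_sim split.
    """
    kpts_a, kpts_b = np.asarray(kpts_a, float), np.asarray(kpts_b, float)
    d2 = np.sum((kpts_a - kpts_b) ** 2, axis=1)   # squared joint distances
    near = d2 < radius ** 2                        # joints that roughly overlap
    k_sim = np.sum(np.tanh(scores_a)[near] * np.tanh(scores_b)[near])
    h_sim = np.exp(-d2 / (2 * sigma ** 2)).sum()   # spatial similarity
    return -(k_sim + lam * h_sim)

def pose_nms(poses, joint_scores, pose_scores, eta=-5.0):
    """Keep the highest-scoring pose, drop poses whose distance to it is
    at or below the threshold eta, then repeat on the survivors."""
    order = list(np.argsort(-np.asarray(pose_scores)))
    keep = []
    while order:
        ref = order.pop(0)
        keep.append(ref)
        order = [j for j in order
                 if pose_distance(poses[ref], poses[j],
                                  joint_scores[ref], joint_scores[j]) > eta]
    return keep

if __name__ == "__main__":
    # Two near-duplicate detections of the same person plus one distinct person.
    base = np.random.rand(16, 2) * 100
    poses = [base, base + 1.0, base + 300.0]
    joint_scores = [np.full(16, 0.9), np.full(16, 0.7), np.full(16, 0.8)]
    print(pose_nms(poses, joint_scores, pose_scores=[0.9, 0.7, 0.8]))
    # expect indices [0, 2]: the duplicate detection (index 1) is suppressed
```

With these toy parameters, a duplicate detection collapses onto the single highest-scoring pose, while a spatially distant person contributes no overlapping joints and therefore survives suppression.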
Code Repositories
| Repository | Framework | Notes |
|---|---|---|
| ManifoldFR/recvis-project | tf | Mentioned in GitHub |
| lyqcom/alphapose | mindspore | Mentioned in GitHub |
| yangyucheng000/AlphaPose | mindspore | |
| Fangyh09/pose_nms | | Mentioned in GitHub |
| MVIG-SJTU/RMPE | pytorch | Mentioned in GitHub |
| osmr/imgclsmob | mxnet | Mentioned in GitHub |
| MVIG-SJTU/AlphaPose | pytorch | |
| MattyChoi/PoseMachines | pytorch | Mentioned in GitHub |
| 2023-MindSpore-1/ms-code-22 | mindspore | |
| 2023-MindSpore-1/ms-code-199 | mindspore | Mentioned in GitHub |
Benchmarks
| Benchmark | Method | Metrics |
|---|---|---|
| 2d-human-pose-estimation-on-ochuman | RMPE | Test AP: 30.7, Validation AP: 38.8 |
| keypoint-detection-on-coco | AlphaPose | FPS: 23, Test AP: 73.3 |
| keypoint-detection-on-coco-test-dev | AlphaPose | APL: 81.5 |
| keypoint-detection-on-mpii-multi-person | AlphaPose | mAP@0.5: 82.1% |
| keypoint-detection-on-ochuman | RMPE | Test AP: 30.7, Validation AP: 38.8 |
| multi-person-pose-estimation-on-coco-test-dev | RMPE | AP: 61.8, AP50: 83.7, AP75: 69.8, APL: 67.6, APM: 58.6 |
| multi-person-pose-estimation-on-crowdpose | AlphaPose | AP Easy: 71.2, AP Hard: 51.1, AP Medium: 61.4, mAP@0.5:0.95: 61.0 |
| multi-person-pose-estimation-on-mpii-multi | AlphaPose | AP: 82.1% |
| pose-estimation-on-coco-test-dev | RMPE++ | AP: 72.3, AP50: 89.2, AP75: 79.1, APL: 78.6, APM: 68.0 |
| pose-estimation-on-coco-test-dev | RMPE | AP: 61.8, AP50: 83.7, AP75: 69.8, APL: 67.6, APM: 58.6 |
| pose-estimation-on-ochuman | RMPE | Test AP: 30.7, Validation AP: 38.8 |
| pose-estimation-on-uav-human | AlphaPose | mAP: 56.9 |