
Abstract
This is the official PyTorch implementation of Deep High-Resolution Representation Learning for Human Pose Estimation. In this work, we are interested in the human pose estimation problem, with a focus on learning reliable high-resolution representations. Most existing methods recover high-resolution representations from the low-resolution representations produced by a high-to-low resolution network. In contrast, our proposed network maintains high-resolution representations through the whole process. We start from a high-resolution subnetwork as the first stage, gradually add high-to-low resolution subnetworks one by one to form more stages, and connect the multi-resolution subnetworks in parallel. We conduct repeated multi-scale fusions so that each of the high-to-low resolution representations receives information from the other parallel representations over and over, leading to rich high-resolution representations. As a result, the predicted keypoint heatmaps are potentially more accurate and spatially more precise. We empirically demonstrate the effectiveness of our network through superior pose estimation results on two benchmark datasets: the COCO keypoint detection dataset and the MPII Human Pose dataset. The code and models are publicly available at \url{https://github.com/leoxiaobin/deep-high-resolution-net.pytorch}.
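To make the parallel multi-resolution design concrete, below is a minimal PyTorch sketch of a single stage with two parallel branches (high resolution and half resolution) followed by one multi-scale fusion step. This is an illustration only, not the official implementation: the class name `TwoBranchHRStage`, the channel widths, and the exact fusion layers (1x1 conv plus nearest-neighbor upsampling for low-to-high, a strided 3x3 conv for high-to-low) are simplifying assumptions chosen for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchHRStage(nn.Module):
    """Sketch of one HRNet-style stage with two parallel branches.

    Each branch runs convolutions at a constant resolution; a fusion
    step then lets each branch receive information from the other, so
    the high-resolution representation is repeatedly enriched.
    """

    def __init__(self, high_ch=32, low_ch=64):
        super().__init__()
        # Per-branch convolutions at constant resolution.
        self.high_branch = nn.Sequential(
            nn.Conv2d(high_ch, high_ch, 3, padding=1),
            nn.BatchNorm2d(high_ch), nn.ReLU(inplace=True))
        self.low_branch = nn.Sequential(
            nn.Conv2d(low_ch, low_ch, 3, padding=1),
            nn.BatchNorm2d(low_ch), nn.ReLU(inplace=True))
        # Fusion: low -> high via 1x1 conv + upsampling;
        # high -> low via a strided 3x3 conv.
        self.low_to_high = nn.Conv2d(low_ch, high_ch, 1)
        self.high_to_low = nn.Conv2d(high_ch, low_ch, 3, stride=2, padding=1)

    def forward(self, x_high, x_low):
        h = self.high_branch(x_high)
        l = self.low_branch(x_low)
        # Multi-scale fusion: each output sums contributions
        # from both resolutions.
        h_out = h + F.interpolate(self.low_to_high(l),
                                  size=h.shape[-2:], mode='nearest')
        l_out = l + self.high_to_low(h)
        return h_out, l_out

if __name__ == "__main__":
    stage = TwoBranchHRStage()
    x_high = torch.randn(1, 32, 64, 48)  # e.g. 1/4 resolution of a 256x192 crop
    x_low = torch.randn(1, 64, 32, 24)   # half the high-resolution branch
    h, l = stage(x_high, x_low)
    print(h.shape, l.shape)  # (1, 32, 64, 48) and (1, 64, 32, 24)
```

The full network stacks several such stages, adding one new lower-resolution branch per stage, and predicts keypoint heatmaps from the final high-resolution branch.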
Code Repositories

| Repository | Framework | Mentioned in GitHub |
|---|---|---|
| ducongju/HRNet | pytorch | ✓ |
| wsjzha/deep-high-resolution-net.pytorch | pytorch | ✓ |
| leeyegy/simcc | pytorch | ✓ |
| laowang666888/HRNET | pytorch | ✓ |
| open-mmlab/mmpose | pytorch | |
| leeyegy/SimDR | pytorch | ✓ |
| HRNet/HRNet-Human-Pose-Estimation | pytorch | ✓ |
| gox-ai/hrnet-pose-api | pytorch | ✓ |
| HRNet/HRNet-Object-Detection | pytorch | ✓ |
| ken724049/action-recognition | - | ✓ |
| Mary-xl/HRnet_Kaggle_iNat2019_FGVC | pytorch | ✓ |
| mks0601/PoseFix_RELEASE | tf | ✓ |
| open-mmlab/mmdetection | pytorch | |
| visionNoob/hrnet_pytorch | pytorch | ✓ |
| HRNet/HRNet-Semantic-Segmentation | pytorch | ✓ |
| Vill-Lab/2022-TIP-HCGA | pytorch | ✓ |
| NU-LL/lighttrack- | tf | ✓ |
| CASIA-IVA-Lab/ISP-reID | pytorch | ✓ |
| osmr/imgclsmob | mxnet | |
| abhi1kumar/hrnet_pose_single_gpu | pytorch | ✓ |
| baoshengyu/deep-high-resolution-net.pytorch | pytorch | ✓ |
| k-miran/hear | - | ✓ |
| strivebo/image_segmentation_dl | tf | ✓ |
| sdll/hrnet-pose-estimation | pytorch | ✓ |
| chuanqichen/deepcoaching | pytorch | ✓ |
| thomasslloyd/FitSpatial | - | ✓ |
| goutern/PoseEstimation | pytorch | ✓ |
| HRNet/HRNet-MaskRCNN-Benchmark | pytorch | ✓ |
| HRNet/HRNet-Facial-Landmark-Detection | pytorch | ✓ |
| mindspore-lab/mindone | mindspore | |
| leoxiaobin/deep-high-resolution-net.pytorch (Official) | pytorch | ✓ |
| v1viswan/Domain_adaptation_in_HRNet | pytorch | ✓ |
| NVlabs/PAMTRI | pytorch | ✓ |
| anshky/HR-NET | pytorch | ✓ |
| HRNet/HRNet-Image-Classification | pytorch | ✓ |
Benchmarks

| Benchmark | Method | Metrics |
|---|---|---|
| 2d-human-pose-estimation-on-coco-wholebody-1 | HRNet | WB: 43.2 body: 65.9 face: 52.3 foot: 31.4 hand: 30.0 |
| 2d-human-pose-estimation-on-human-art | HRNet-w48 | AP: 0.417 AP (gt bbox): 0.769 |
| 2d-human-pose-estimation-on-human-art | HRNet-w32 | AP: 0.399 AP (gt bbox): 0.754 |
| 3d-pose-estimation-on-harper | HRNet + Depth | Average MPJPE (mm): 151 |
| instance-segmentation-on-coco-minival | HTC (HRNetV2p-W48) | mask AP: 41.0 |
| keypoint-detection-on-coco | HRNet-48 (384x288) | Test AP: 75.5 Validation AP: 76.3 |
| keypoint-detection-on-coco | HRNet-32 | Validation AP: 75.8 |
| keypoint-detection-on-coco-test-dev | HRNet | AP50: 92.5 AP75: 83.3 APL: 81.5 APM: 71.9 AR: 80.5 |
| keypoint-detection-on-coco-test-dev | HRNet* | AP50: 92.7 AP75: 84.5 APL: 83.1 APM: 73.4 AR: 82.0 |
| pose-estimation-on-aic | HRNet (HRNet-w32) | AP: 32.3 AP50: 76.2 AP75: 21.9 AR: 36.6 AR50: 78.9 |
| pose-estimation-on-aic | HRNet (HRNet-w48) | AP: 33.5 AP50: 78.0 AP75: 23.6 AR: 37.9 AR50: 80.0 |
| pose-estimation-on-brace | HRNet pre-trained on COCO | Average Precision: 0.158 Average Recall: 0.202 |
| pose-estimation-on-brace | HRNet fine-tuned on BRACE | Average Precision: 0.357 Average Recall: 0.445 |
| pose-estimation-on-coco-test-dev | HRNet-W48 + extra data | AP: 77 AP50: 92.7 AP75: 84.5 APL: 83.1 APM: 73.4 AR: 82 |
| pose-estimation-on-coco-val2017 | HRNet (256x192) | AP: 75.3 AP50: - AP75: - AR: - |
| pose-estimation-on-mpii-human-pose | HRNet-W32 | PCKh-0.5: 92.3 |
| pose-tracking-on-posetrack2017 | HRNet-W48 COCO | MOTA: 57.93 mAP: 74.95 |