Abstract
This study proposes a novel framework, Comprehensive Optimization and Refinement through Ensemble Fusion in Domain Adaptation for Person Re-identification (CORE-ReID), to address unsupervised domain adaptation (UDA) for person re-identification (ReID). In the pre-training stage, the framework employs CycleGAN to generate diverse data, mitigating the domain shift caused by differences in image characteristics across camera sources. In the fine-tuning stage, built on a teacher-student network architecture, the framework introduces a multi-view feature fusion mechanism that enables multi-level clustering and thereby produces diverse pseudo-labels. To make learning more comprehensive and to reduce the ambiguity introduced by multiple pseudo-labels, we propose a learnable Ensemble Fusion module that focuses on capturing fine-grained local information within global features. Experimental results show that CORE-ReID significantly outperforms state-of-the-art methods on three mainstream UDA benchmarks for person ReID. In addition, an Efficient Channel Attention Block and Bidirectional Mean Feature Normalization suppress feature-bias effects, and the adaptive fusion of global and local features with ResNet-based models further strengthens the framework. The fused feature representations produced by CORE-ReID are well separated, avoiding label ambiguity, and the framework achieves excellent results on key metrics including mean Average Precision (mAP) and Top-1, Top-5, and Top-10 accuracy, offering an advanced and effective solution to unsupervised domain adaptation in person ReID.
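The Efficient Channel Attention mechanism mentioned above rescales channels using a descriptor from global average pooling followed by a lightweight 1D convolution across channels. The sketch below illustrates the idea in NumPy; the uniform kernel weights and the `eca_block` name are illustrative placeholders (in the actual framework the 1D convolution weights are learned), not the authors' implementation.

```python
import numpy as np

def eca_block(x, k=3):
    """Illustrative Efficient-Channel-Attention-style gating.
    x: feature map of shape (C, H, W); k: odd 1D kernel size over channels.
    """
    C, H, W = x.shape
    # Global average pooling -> one descriptor per channel, shape (C,)
    y = x.mean(axis=(1, 2))
    # 1D convolution across the channel dimension (edge padding keeps length C)
    pad = k // 2
    y_pad = np.pad(y, pad, mode="edge")
    w = np.ones(k) / k  # placeholder weights; learned parameters in practice
    z = np.convolve(y_pad, w, mode="valid")  # length C for odd k
    # Sigmoid gate in (0, 1), applied channel-wise to the input features
    gate = 1.0 / (1.0 + np.exp(-z))
    return x * gate[:, None, None]
```

Because the gate lies in (0, 1), the block only attenuates channels relative to the input, letting the network emphasize informative channels without adding the heavy fully connected layers of squeeze-and-excitation attention.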
Benchmarks
| Benchmark | Method | mAP | Rank-1 | Rank-5 | Rank-10 |
|---|---|---|---|---|---|
| unsupervised-domain-adaptation-on-cuhk03-to | CORE-ReID | 40.4 | 67.3 | 79.0 | 83.1 |
| unsupervised-domain-adaptation-on-cuhk03-to-1 | CORE-ReID | 83.6 | 93.6 | 97.3 | 98.7 |
| unsupervised-domain-adaptation-on-duke-to | CORE-ReID | 84.4 | 93.6 | 97.7 | 98.7 |
| unsupervised-domain-adaptation-on-duke-to-1 | CORE-ReID | 45.2 | 72.2 | 82.9 | 86.3 |
| unsupervised-domain-adaptation-on-market-to | CORE-ReID | 74.8 | 84.8 | 92.4 | 94.4 |
| unsupervised-domain-adaptation-on-market-to-1 | CORE-ReID | 41.9 | 69.5 | 80.3 | 84.4 |
| unsupervised-domain-adaptation-on-market-to-6 | CORE-ReID | 62.9 | 61.0 | 79.6 | 87.2 |
| unsupervised-person-re-identification-on | CORE-ReID | 74.8 | 84.8 | 92.4 | 94.4 |
| unsupervised-person-re-identification-on-1 | CORE-ReID | 84.4 | 93.6 | 97.7 | 98.7 |
| unsupervised-person-re-identification-on-2 | CORE-ReID | 41.9 | 69.5 | 80.3 | 84.4 |
| unsupervised-person-re-identification-on-3 | CORE-ReID | 45.2 | 72.2 | 82.9 | 86.3 |