
Abstract
This paper proposes Generalized Parametric Contrastive Learning (GPaCo/PaCo), which performs well on both balanced and imbalanced data. Based on theoretical analysis, we observe that the supervised contrastive loss tends to be biased toward high-frequency classes, which increases the difficulty of imbalanced learning. To address this, we introduce a set of parametric, class-wise learnable centers that rebalance the classes from an optimization perspective. We further analyze the GPaCo/PaCo loss under a balanced setting and show that, as more samples of a class are pulled toward their corresponding center, the loss adaptively intensifies the clustering of same-class samples, which benefits hard-example learning. Experiments on long-tailed benchmarks show that GPaCo/PaCo sets a new state of the art for long-tailed recognition. On full ImageNet, models trained with the GPaCo loss, from CNNs to vision transformers, exhibit better generalization and stronger robustness than MAE models. Moreover, GPaCo extends to semantic segmentation, yielding clear improvements on the four most popular benchmarks. Our code is available at: https://github.com/dvlab-research/Parametric-Contrastive-Learning.
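To illustrate the core idea of adding learnable class-wise centers to the contrastive candidate set, here is a minimal NumPy sketch of a simplified per-sample parametric contrastive loss. This is not the authors' implementation; the function name `paco_loss` and the hyperparameters `alpha` (re-weight for key positives) and `temp` (temperature) are illustrative, and the batched, momentum-queue machinery of the real method is omitted.

```python
import numpy as np

def paco_loss(feat, key_feats, key_labels, centers, label, alpha=0.05, temp=0.07):
    """Simplified parametric contrastive loss for a single sample.

    feat       : (d,)   query feature
    key_feats  : (n, d) contrastive key features
    key_labels : (n,)   integer labels of the keys
    centers    : (K, d) learnable class centers (one per class)
    label      : int    class of the query
    """
    # Candidate set: contrastive keys plus the parametric class centers.
    logits = np.concatenate([key_feats @ feat, centers @ feat]) / temp
    # Positives: same-class keys, plus the center of the query's own class.
    pos_mask = np.concatenate([key_labels == label,
                               np.arange(len(centers)) == label]).astype(float)
    # Re-weighting: key positives get weight alpha, the center positive weight 1,
    # which is what counteracts the high-frequency-class bias.
    weights = np.concatenate([np.full(len(key_labels), alpha),
                              np.ones(len(centers))]) * pos_mask
    log_prob = logits - np.log(np.exp(logits).sum())
    return -(weights * log_prob).sum() / weights.sum()
```

A feature aligned with its own class center incurs a much lower loss than one aligned with another class's center, so gradient descent pulls same-class samples toward their center.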
Code Repositories

| Repository | Framework | Note |
|---|---|---|
| dvlab-research/imbalanced-learning | PyTorch | Mentioned on GitHub |
| dvlab-research/rescom | PyTorch | Mentioned on GitHub |
| dvlab-research/parametric-contrastive-learning | PyTorch | Official; mentioned on GitHub |
| jiequancui/Parametric-Contrastive-Learning | PyTorch | Mentioned on GitHub |
Benchmarks
| Benchmark | Method | Metric |
|---|---|---|
| domain-generalization-on-imagenet-c | GPaCo (ViT-L) | mean Corruption Error (mCE): 39.0 |
| domain-generalization-on-imagenet-r | GPaCo (ViT-L) | Top-1 Error Rate: 39.7 |
| domain-generalization-on-imagenet-sketch | GPaCo (ViT-L) | Top-1 accuracy: 48.3 |
| image-classification-on-imagenet | GPaCo (ViT-L) | Top-1 Accuracy: 86.01% |
| image-classification-on-imagenet | GPaCo (ViT-B) | Top-1 Accuracy: 84.0% |
| image-classification-on-imagenet | GPaCo (ResNet-50) | Top-1 Accuracy: 79.7% |
| image-classification-on-inaturalist-2018 | GPaCo (ResNet-152) | Top-1 Accuracy: 78.1% |
| image-classification-on-inaturalist-2018 | GPaCo (ResNet-50) | Top-1 Accuracy: 75.4% |
| long-tail-learning-on-imagenet-lt | GPaCo (2-ResNeXt101-32x4d) | Top-1 Accuracy: 63.2 |
| long-tail-learning-on-inaturalist-2018 | GPaCo (2-R152) | Top-1 Accuracy: 79.8% |
| long-tail-learning-on-inaturalist-2018 | GPaCo (ResNet-50) | Top-1 Accuracy: 75.4% |
| long-tail-learning-on-inaturalist-2018 | GPaCo (ResNet-152) | Top-1 Accuracy: 78.1% |
| long-tail-learning-on-places-lt | GPaCo (ResNet-152) | Top-1 Accuracy: 41.7 |
| semantic-segmentation-on-ade20k | GPaCo (Swin-L) | Validation mIoU: 54.3 |
| semantic-segmentation-on-pascal-context | GPaCo (ResNet101) | mIoU: 56.2 |