
Abstract
Vision-language models show great potential for remote sensing applications thanks to their extensive pre-training. However, their conventional use in zero-shot scene classification still involves splitting large images into patches and predicting on each patch independently, i.e., inductive inference, which limits effectiveness by discarding valuable contextual information. Our approach addresses this by adopting transductive inference: it leverages the initial text-prompt-based predictions together with the patch affinity relationships provided by the image encoder to enhance zero-shot capability, without any supervision and at low computational cost. Experiments on 10 remote sensing datasets with state-of-the-art vision-language models show that the method significantly improves accuracy over inductive zero-shot classification. Our source code is publicly available on GitHub: https://github.com/elkhouryk/RS-TransCLIP
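To make the transductive idea concrete, below is a minimal sketch of the general scheme the abstract describes: start from per-sample text-prompt predictions, build an affinity matrix from the image encoder's embeddings, and iteratively propagate predictions over neighbors. This is an illustrative label-propagation-style simplification, not the exact RS-TransCLIP objective; all function names and hyperparameters (`k`, `alpha`, `iters`, `tau`) are assumptions, and the authoritative implementation is in the linked repository.

```python
# Illustrative sketch of transductive zero-shot refinement (simplified;
# the actual RS-TransCLIP objective differs -- see the repository).
import torch
import torch.nn.functional as F

def transductive_refine(image_feats, text_feats, k=5, alpha=0.5, iters=10, tau=100.0):
    """Refine zero-shot predictions by propagating them over patch affinities.

    image_feats: (N, d) L2-normalized patch embeddings from the vision encoder.
    text_feats:  (C, d) L2-normalized class-prompt embeddings from the text encoder.
    k, alpha, iters, tau are illustrative hyperparameters, not taken from the paper.
    """
    # Inductive starting point: per-sample softmax over class similarities.
    z_text = F.softmax(tau * image_feats @ text_feats.T, dim=1)    # (N, C)

    # Affinity matrix from the image encoder: keep only top-k neighbors,
    # then row-normalize so propagation is a weighted neighbor average.
    sim = image_feats @ image_feats.T                              # (N, N)
    topk = sim.topk(k + 1, dim=1).indices                          # includes self
    mask = torch.zeros_like(sim).scatter_(1, topk, 1.0)
    W = F.normalize(sim.clamp(min=0) * mask, p=1, dim=1)           # row-stochastic

    # Transductive step: mix text-based evidence with neighbor predictions.
    z = z_text.clone()
    for _ in range(iters):
        z = alpha * z_text + (1 - alpha) * (W @ z)
    return z.argmax(dim=1)                                         # refined labels
```

The fixed point of this iteration balances each sample's own text-prompt prediction against the predictions of its nearest neighbors in embedding space, which is the contextual information that independent per-patch (inductive) prediction discards.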
Code Repositories
elkhouryk/rs-transclip
Official
pytorch
Mentioned in GitHub
Benchmarks
| Benchmark | Method | Accuracy (%) |
|---|---|---|
| transductive-zero-shot-classification-on-1 | RS-TransCLIP | 91.2 |
| transductive-zero-shot-classification-on-10 | RS-TransCLIP | 88.1 |
| transductive-zero-shot-classification-on-11 | RS-TransCLIP | 78.1 |
| transductive-zero-shot-classification-on-12 | RS-TransCLIP | 94.5 |
| transductive-zero-shot-classification-on-13 | RS-TransCLIP | 96.2 |
| transductive-zero-shot-classification-on-14 | RS-TransCLIP | 88.0 |
| transductive-zero-shot-classification-on-15 | RS-TransCLIP | 54.8 |
| transductive-zero-shot-classification-on-16 | RS-TransCLIP | 72.8 |
| transductive-zero-shot-classification-on-17 | RS-TransCLIP | 99.7 |
| transductive-zero-shot-classification-on-aid | RS-TransCLIP | 92.7 |