Attribute Surrogates Learning and Spectral Tokens Pooling in Transformers for Few-shot Learning

Yangji He Weihan Liang Dongyang Zhao Hong-Yu Zhou Weifeng Ge Yizhou Yu Wenqiang Zhang

Abstract

This paper presents new hierarchically cascaded transformers that improve data efficiency through attribute surrogate learning and spectral tokens pooling. Vision transformers have recently been regarded as a promising alternative to convolutional neural networks for visual recognition, but when sufficient data is unavailable they are prone to overfitting and show inferior performance. To improve data efficiency, we propose hierarchically cascaded transformers that exploit intrinsic image structures through spectral tokens pooling and optimize the learnable parameters through latent attribute surrogates. Spectral tokens pooling uses the intrinsic image structure to reduce the ambiguity between foreground content and background noise, while the attribute surrogate learning scheme is designed to benefit from the rich visual information in image-label pairs rather than the bare visual concepts assigned by their labels. Our Hierarchically Cascaded Transformers, called HCTransformers, are built upon the self-supervised learning framework DINO and are tested on several popular few-shot learning benchmarks. In the inductive setting, HCTransformers surpass the DINO baseline by large margins of 9.7% in 5-way 1-shot accuracy and 9.17% in 5-way 5-shot accuracy on miniImageNet, which demonstrates that HCTransformers are efficient at extracting discriminative features. HCTransformers also show clear advantages over state-of-the-art few-shot classification methods in both the 5-way 1-shot and 5-way 5-shot settings on four popular benchmark datasets: miniImageNet, tieredImageNet, FC100, and CIFAR-FS. The trained weights and code are available at https://github.com/StomachCold/HCTransformers.
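To make the spectral tokens pooling idea concrete: it groups patch tokens by the graph structure of their pairwise similarities and pools features within each group. Below is a minimal, illustrative PyTorch sketch of that idea; the function name, the cosine-similarity affinity, and the argmax cluster assignment are our assumptions for exposition, not the authors' implementation (see the repository linked above for the actual code).

```python
# A minimal sketch of spectral tokens pooling, assuming token features of
# shape (B, N, D). All names here are illustrative, not the paper's code.
import torch

def spectral_tokens_pooling(tokens: torch.Tensor, num_clusters: int) -> torch.Tensor:
    """Pool N tokens into num_clusters groups via spectral clustering."""
    B, N, D = tokens.shape
    pooled = []
    for b in range(B):
        x = tokens[b]                                  # (N, D)
        # Affinity from cosine similarity between tokens, clamped non-negative.
        z = torch.nn.functional.normalize(x, dim=-1)
        affinity = (z @ z.t()).clamp(min=0)            # (N, N)
        # Normalized graph Laplacian: L = I - D^{-1/2} A D^{-1/2}.
        deg = affinity.sum(dim=-1)
        d_inv_sqrt = deg.clamp(min=1e-8).rsqrt()
        lap = torch.eye(N, device=x.device) \
            - d_inv_sqrt[:, None] * affinity * d_inv_sqrt[None, :]
        # The smallest eigenvectors give a spectral embedding of the tokens.
        eigvals, eigvecs = torch.linalg.eigh(lap)
        emb = eigvecs[:, :num_clusters]                # (N, k)
        # Crude assignment for illustration: strongest embedding dimension.
        # Real implementations typically run k-means on `emb` instead.
        assign = emb.abs().argmax(dim=-1)              # (N,)
        # Average-pool token features within each cluster.
        out = torch.stack([
            x[assign == c].mean(dim=0) if (assign == c).any() else x.mean(dim=0)
            for c in range(num_clusters)
        ])
        pooled.append(out)
    return torch.stack(pooled)                         # (B, k, D)
```

For example, calling this with tokens of shape (2, 196, 384) and num_clusters=49 returns pooled features of shape (2, 49, 384); grouping by the similarity graph, rather than by fixed spatial windows, is what lets pooling separate foreground content from background noise.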

Code Repositories

stomachcold/hctransformers (official, PyTorch)
https://github.com/StomachCold/HCTransformers

Benchmarks

Benchmark                                        Methodology     Metric
few-shot-image-classification-on-cifar-fs-5      HCTransformers  Accuracy: 78.89
few-shot-image-classification-on-cifar-fs-5-1    HCTransformers  Accuracy: 90.50
few-shot-image-classification-on-fc100-5-way     HCTransformers  Accuracy: 48.27
few-shot-image-classification-on-fc100-5-way-1   HCTransformers  Accuracy: 66.42
few-shot-image-classification-on-mini-2          HCTransformers  Accuracy: 74.74
few-shot-image-classification-on-mini-3          HCTransformers  Accuracy: 89.19
few-shot-image-classification-on-tiered          HCTransformers  Accuracy: 79.67
few-shot-image-classification-on-tiered-1        HCTransformers  Accuracy: 91.72
few-shot-learning-on-mini-imagenet-1-shot-2      HCTransformers  Acc: 74.74
few-shot-learning-on-mini-imagenet-5-way-1       HCTransformers  5 way 1~2 shot: 74.74
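For context, leaderboard numbers like these are accuracies averaged over many randomly sampled N-way K-shot episodes. A common prototype-based evaluation, sketched below under the assumption that a trained backbone has already produced the support and query embeddings (function and variable names are illustrative, and this is not necessarily the paper's exact protocol):

```python
# Hedged sketch of per-episode N-way K-shot accuracy via nearest prototypes.
import torch

def episode_accuracy(support: torch.Tensor, support_y: torch.Tensor,
                     query: torch.Tensor, query_y: torch.Tensor,
                     n_way: int = 5) -> float:
    # Prototype = mean of the support embeddings belonging to each class.
    protos = torch.stack([support[support_y == c].mean(dim=0)
                          for c in range(n_way)])          # (n_way, D)
    # Classify each query by its closest prototype (Euclidean distance).
    dists = torch.cdist(query, protos)                     # (Q, n_way)
    pred = dists.argmin(dim=-1)
    return (pred == query_y).float().mean().item()
```

Reported 5-way 1-shot or 5-shot scores are then the mean of this per-episode accuracy over hundreds or thousands of episodes, usually with a 95% confidence interval.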
