Revisiting Fine-tuning for Few-shot Learning

Akihiro Nakamura; Tatsuya Harada

Abstract

Few-shot learning is the process of learning novel classes using only a few examples, and it remains a challenging task in machine learning. Many sophisticated few-shot learning algorithms have been proposed based on the notion that networks can easily overfit to novel examples if they are simply fine-tuned using only a few examples. In this study, we show that on the commonly used low-resolution mini-ImageNet dataset, the fine-tuning method achieves higher accuracy than common few-shot learning algorithms in the 1-shot task and nearly the same accuracy as the state-of-the-art algorithm in the 5-shot task. We then evaluate our method on more practical tasks, namely the high-resolution single-domain and cross-domain tasks. On both tasks, we show that our method achieves higher accuracy than common few-shot learning algorithms. We further analyze the experimental results and show that: 1) the retraining process can be stabilized by employing a low learning rate, 2) using adaptive gradient optimizers during fine-tuning can increase test accuracy, and 3) test accuracy can be improved by updating the entire network when a large domain shift exists between base and novel classes.
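The recipe the abstract describes — fine-tune on the few novel examples with a low learning rate and an adaptive gradient optimizer — can be sketched on a toy linear classifier head. This is a minimal NumPy illustration, not the paper's implementation: the features stand in for embeddings from a pretrained backbone (hypothetical here), and Adam is used as the adaptive optimizer.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(p, y):
    return -np.log(p[np.arange(len(y)), y] + 1e-12).mean()

def finetune_head(feats, labels, n_classes, lr=1e-3, steps=100):
    """Fine-tune a linear classifier head on a small support set with Adam.

    Toy sketch of the paper's recipe (low learning rate + adaptive
    optimizer). `feats` is an (N, D) array of backbone features; only
    the head (W, b) is updated here.
    """
    rng = np.random.default_rng(0)
    W = 0.01 * rng.standard_normal((feats.shape[1], n_classes))
    b = np.zeros(n_classes)
    # Adam state (first and second moments)
    mW, vW = np.zeros_like(W), np.zeros_like(W)
    mb, vb = np.zeros_like(b), np.zeros_like(b)
    beta1, beta2, eps = 0.9, 0.999, 1e-8
    losses = []
    for t in range(1, steps + 1):
        p = softmax(feats @ W + b)
        losses.append(cross_entropy(p, labels))
        # Gradient of the mean cross-entropy w.r.t. the logits
        g = p.copy()
        g[np.arange(len(labels)), labels] -= 1.0
        g /= len(labels)
        gW, gb = feats.T @ g, g.sum(axis=0)
        # Adam updates with bias correction
        mW = beta1 * mW + (1 - beta1) * gW
        vW = beta2 * vW + (1 - beta2) * gW ** 2
        mb = beta1 * mb + (1 - beta1) * gb
        vb = beta2 * vb + (1 - beta2) * gb ** 2
        W -= lr * (mW / (1 - beta1 ** t)) / (np.sqrt(vW / (1 - beta2 ** t)) + eps)
        b -= lr * (mb / (1 - beta1 ** t)) / (np.sqrt(vb / (1 - beta2 ** t)) + eps)
    return W, b, losses
```

In the cross-domain setting the paper additionally reports that updating the entire network (not just the head, as in this sketch) helps when the gap between base and novel classes is large.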

Benchmarks

Benchmark: category-agnostic-pose-estimation-on-mp100
Methodology: Finetune
Metric: Mean PCK@0.2 (1-shot): 63.58
