How to train your MAML

Antreas Antoniou; Harrison Edwards; Amos Storkey

Abstract

The field of few-shot learning has recently seen substantial advancements, most of which have come from casting few-shot learning as a meta-learning problem. Model-Agnostic Meta-Learning (MAML) is currently one of the best approaches to few-shot learning via meta-learning. MAML is simple, elegant and very powerful; however, it has a variety of issues: it is very sensitive to neural network architectures, often leading to instability during training; it requires arduous hyperparameter searches to stabilize training and achieve high generalization; and it is very computationally expensive at both training and inference time. In this paper, we propose various modifications to MAML that not only stabilize the system but also substantially improve its generalization performance, convergence speed and computational overhead. We call the resulting approach MAML++.
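To make the bi-level optimization behind MAML concrete, below is a minimal sketch of its inner/outer loop on a toy regression problem. This is an illustrative assumption, not the authors' implementation: the task distribution, the `forward` helper, and all hyperparameters are made up for the example.

```python
# Minimal MAML sketch (hypothetical toy setup, not the paper's code).
import torch

torch.manual_seed(0)

# Toy model: one linear layer kept as an explicit parameter list,
# so the inner-loop update can stay differentiable.
w = torch.randn(1, 1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
meta_params = [w, b]
meta_opt = torch.optim.Adam(meta_params, lr=1e-3)

def forward(params, x):
    w, b = params
    return x @ w + b

inner_lr = 0.01  # fixed inner learning rate (assumed for this sketch)
for step in range(100):
    meta_opt.zero_grad()
    # One synthetic task per meta-step: y = a * x, with a sampled per task.
    a = torch.randn(1)
    x_support, x_query = torch.randn(5, 1), torch.randn(5, 1)
    y_support, y_query = a * x_support, a * x_query

    # Inner loop: one gradient step on the support set.
    # create_graph=True retains second-order gradients, the term that
    # makes MAML computationally expensive, as the abstract notes.
    support_loss = ((forward(meta_params, x_support) - y_support) ** 2).mean()
    grads = torch.autograd.grad(support_loss, meta_params, create_graph=True)
    adapted = [p - inner_lr * g for p, g in zip(meta_params, grads)]

    # Outer loop: evaluate the adapted parameters on the query set and
    # backpropagate through the inner update into the meta-parameters.
    query_loss = ((forward(adapted, x_query) - y_query) ** 2).mean()
    query_loss.backward()
    meta_opt.step()
```

Note that this sketch hard-codes a single fixed inner learning rate and a single inner step; choices of exactly this kind are among the sensitivities that the paper's MAML++ modifications target.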

Code Repositories

Repositories mentioning this paper on GitHub:

- gebob19/REPTILE-Metalearning (PyTorch)
- Tikquuss/meta_XLM (PyTorch)
- lgcollins/tr-maml (PyTorch)
- hoyeoplee/pytorch-maml (PyTorch)
- JWSoh/MZSR (TensorFlow)
- gebob19/cscd94_metalearning (PyTorch)
- hfahrudin/reptile_implement_tf2 (TensorFlow)
- dkalpakchi/ReproducingSCAPytorch (PyTorch)
- gebob19/cscd94-metalearning (PyTorch)
