Po-Sen Huang; Chenglong Wang; Rishabh Singh; Wen-tau Yih; Xiaodong He

Abstract
In conventional supervised training, a model is trained to fit all the training examples. However, having a monolithic model may not always be the best strategy, as examples could vary widely. In this work, we explore a different learning protocol that treats each example as a unique pseudo-task, by reducing the original learning problem to a few-shot meta-learning scenario with the help of a domain-dependent relevance function. When evaluated on the WikiSQL dataset, our approach leads to faster convergence and achieves 1.1%-5.4% absolute accuracy gains over the non-meta-learning counterparts.
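The protocol sketched in the abstract can be illustrated with a small, self-contained example. The snippet below is only a minimal sketch under assumed simplifications: a toy linear model stands in for the actual WikiSQL semantic parser, the relevance function is a hypothetical nearest-neighbor lookup on input features, and the update rule is a first-order MAML-style inner/outer step. None of these choices come from the paper itself; the sketch only shows how each training example becomes a pseudo-task whose support set is retrieved by a relevance function before adaptation.

```python
# Illustrative sketch of the pseudo-task meta-learning protocol (assumptions:
# toy linear model, cosine-similarity relevance function, first-order
# MAML-style updates). Not the paper's actual WikiSQL model.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: inputs X and targets y (stand-in for (question, SQL) pairs).
X = rng.normal(size=(200, 8))
true_w = rng.normal(size=8)
y = X @ true_w + 0.1 * rng.normal(size=200)

def relevance(i, k=5):
    """Hypothetical relevance function: return the k most similar training
    examples to example i (cosine similarity on inputs)."""
    sims = X @ X[i] / (np.linalg.norm(X, axis=1) * np.linalg.norm(X[i]) + 1e-8)
    sims[i] = -np.inf                      # exclude the example itself
    return np.argsort(sims)[-k:]

def loss_and_grad(w, xb, yb):
    """Squared-error loss and its gradient for the toy linear model."""
    err = xb @ w - yb
    return 0.5 * np.mean(err ** 2), xb.T @ err / len(yb)

w = np.zeros(8)                            # meta-parameters
alpha, beta = 0.05, 0.01                   # inner / outer learning rates

for step in range(2000):
    i = rng.integers(len(X))               # each example is one pseudo-task
    support = relevance(i)                 # support set via relevance function
    # Inner loop: adapt to the pseudo-task's support set.
    _, g_in = loss_and_grad(w, X[support], y[support])
    w_task = w - alpha * g_in
    # Outer loop (first-order): evaluate the adapted parameters on the
    # example itself and update the meta-parameters.
    _, g_out = loss_and_grad(w_task, X[i:i+1], y[i:i+1])
    w = w - beta * g_out

print("parameter error:", np.linalg.norm(w - true_w))
```

The key structural point is that the support set is chosen per example by the relevance function rather than sampled at random, which is what turns ordinary supervised training into the few-shot meta-learning setup described above.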
Benchmarks
| Benchmark | Methodology | Exact Match Accuracy | Execution Accuracy |
|---|---|---|---|
| code-generation-on-wikisql | PT-MAML (Huang et al., 2018) | 62.8 | 68.0 |