A survey of deep meta-learning
Abstract
Deep neural networks can achieve great successes when presented with large data sets and sufficient computational resources. However, their ability to learn new concepts quickly is limited. Meta-learning is one approach to address this issue, by enabling the network to learn how to learn. The field of Deep Meta-Learning advances at great speed, but lacks a unified, in-depth overview of current techniques. With this work, we aim to bridge this gap. After providing the reader with a theoretical foundation, we investigate and summarize key methods, which are categorized into (i) metric-, (ii) model-, and (iii) optimization-based techniques. In addition, we identify the main open challenges, such as performance evaluation on heterogeneous benchmarks, and reduction of the computational costs of meta-learning.
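To make the third category concrete, the following is a minimal sketch of optimization-based meta-learning in the style of first-order MAML: an initialization is meta-learned across tasks so that a single inner gradient step adapts well to a new task. The toy task distribution (random linear regression problems), the learning rates, and all function names are illustrative assumptions, not the survey's notation.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_and_grad(w, X, y):
    # Mean-squared-error loss and its gradient for a linear model X @ w.
    err = X @ w - y
    return float(np.mean(err ** 2)), 2 * X.T @ err / len(y)

def sample_task():
    # Each "task" is a random linear map; the meta-learner should find an
    # initialization that adapts to any such task in one gradient step.
    w_true = rng.normal(size=2)
    X = rng.normal(size=(20, 2))
    return X, X @ w_true

w0 = np.zeros(2)              # the meta-learned initialization
inner_lr, outer_lr = 0.1, 0.01
for _ in range(500):
    X, y = sample_task()
    # Inner loop: one task-specific gradient step from the shared init.
    _, g = loss_and_grad(w0, X, y)
    w_adapted = w0 - inner_lr * g
    # Outer loop (first-order approximation): update the initialization
    # using the gradient evaluated at the adapted parameters.
    _, g_post = loss_and_grad(w_adapted, X, y)
    w0 -= outer_lr * g_post
```

The inner loop adapts to one task; the outer loop improves the shared starting point, which is what distinguishes optimization-based methods from metric- or model-based ones.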