[Figure: on unseen data, the meta-learning algorithm converges faster, showing that the meta-learned initialization outperforms random initialization.]
Meta-learning is not ready for production. Current algorithms are unstable, computationally expensive, and often require an arduous hyperparameter search. Our library is designed with these difficulties in mind. We offer the core functionality newcomers need to easily add meta-learning to their projects, as well as tools for researchers to create new meta-learning algorithms (or modifications to existing ones) that seek to address these problems.
What it does
learn2learn is a PyTorch library through and through. It takes a modular approach with a well-engineered interface, providing a powerful extension of torch at the high, mid, and low levels. High-level functionality includes implementations of MAML and Meta-SGD, as well as a TaskGenerator that lets users easily create a distribution of tasks to learn from. The MAML and Meta-SGD implementations are simple and modular, so researchers working on improvements can easily modify them or build new algorithms from common components. At a low level, the library provides functionality valuable to many torch users outside the meta-learning paradigm, including a way to clone modules while retaining the parent's computational graph.
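To make the MAML idea concrete without depending on the library's API, here is a minimal sketch of first-order MAML on a toy family of 1-D regression tasks, using analytic gradients so it runs with the standard library alone. All names (`sample_task`, `maml_train`, etc.) are illustrative, not learn2learn functions:

```python
import random

def sample_task(rng):
    # A task is regression against y = slope * x, with a per-task slope.
    slope = rng.uniform(0.5, 2.0)
    xs = [rng.uniform(-1.0, 1.0) for _ in range(10)]
    ys = [slope * x for x in xs]
    return xs, ys

def loss(w, xs, ys):
    # Mean squared error of the one-parameter model y_hat = w * x.
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def grad(w, xs, ys):
    # Analytic gradient of the MSE with respect to w.
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

def maml_train(meta_steps=200, inner_lr=0.1, outer_lr=0.05, seed=0):
    w0 = 0.0  # the meta-learned initialization
    rng = random.Random(seed)
    for _ in range(meta_steps):
        xs, ys = sample_task(rng)
        # Inner loop: one adaptation step from the shared initialization.
        w_adapted = w0 - inner_lr * grad(w0, xs, ys)
        # Outer loop: update the initialization with the post-adaptation
        # gradient (the first-order MAML / FOMAML approximation).
        w0 -= outer_lr * grad(w_adapted, xs, ys)
    return w0

w0 = maml_train()
```

After meta-training, `w0` sits near the middle of the task family's slope range, so a single gradient step adapts it well to most new tasks; that is exactly the "better than random initialization" effect the figure above shows.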
On top of the core functionality, we provide a plethora of examples showcasing how to use the library in a variety of contexts, including text analysis, image processing, and reinforcement learning.
Challenges we ran into
learn2learn does not work for text (yet). We sought to leverage the new functionality in torchtext, but had a hard time reshaping its core tools and datasets to fit the meta-learning framework.
Accomplishments that we're proud of
- At the highest level, it allows anyone to use meta-learning.
- Anyone can experiment with modifications to meta-learning algorithms.
- Anyone can dive deep into the code to write new meta-learning algorithms, while keeping compatibility with other PyTorch libraries like torchvision and torchtext.
- We created a TaskGenerator so that anyone can build meta-learning tasks from supervised datasets.
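The core of a task generator can be sketched in plain Python: given a labeled dataset, sample N classes and K examples per class to form an N-way K-shot task, remapping labels so every task looks like a fresh problem. The names below (`make_task` and its parameters) are illustrative, not the library's actual API:

```python
import random

def make_task(dataset, ways=3, shots=2, rng=None):
    """Sample an N-way K-shot classification task from a labeled dataset.

    dataset: list of (example, label) pairs.
    Returns a list of (example, task_label) pairs, with labels remapped
    to 0..ways-1 so the learner cannot rely on global label identities.
    """
    rng = rng or random.Random()
    by_label = {}
    for x, y in dataset:
        by_label.setdefault(y, []).append(x)
    chosen = rng.sample(sorted(by_label), ways)   # pick N classes
    task = []
    for new_label, label in enumerate(chosen):
        for x in rng.sample(by_label[label], shots):  # pick K examples each
            task.append((x, new_label))
    rng.shuffle(task)
    return task

# Toy dataset: ten classes with five examples each.
data = [(f"img_{y}_{i}", y) for y in range(10) for i in range(5)]
task = make_task(data, ways=3, shots=2, rng=random.Random(0))
```

Each call yields a small supervised problem drawn from the same underlying dataset, which is the shape of input that algorithms like MAML consume.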
What's next for learn2learn
- Tools to support torchtext.
- MAML++ implementation.
- Tuning on Reinforcement Learning environments.