Inspiration

This project was inspired by a post on the fastai forums where Jeremy (the creator of the fastai library) concurred with another user about the importance of TPU support for the fastai library. He also mentioned that "it shouldn't be a big job to add either AFAICT..."

Intrigued by this statement, and with much trepidation, I (Butch) decided to take on the challenge. I posted about this hackathon in another thread on the fastai forums, suggesting we form a team to work on adding TPU support to the fastai library.

My now teammate (David) liked the idea, and the two of us have been working together since then.

What it does

The package allows the fastai library to run on TPUs using PyTorch/XLA.
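Under the hood, PyTorch/XLA exposes the TPU as just another torch device. Here's a minimal sketch of the plain PyTorch mechanics the package builds on (assuming a TPU runtime with torch_xla installed):

```python
import torch
import torch_xla.core.xla_model as xm

# Acquire the TPU as an ordinary torch device
# (analogous to torch.device('cuda') for a GPU).
device = xm.xla_device()

# Anything moved to this device runs on the TPU through the XLA compiler.
model = torch.nn.Linear(10, 2).to(device)
x = torch.randn(4, 10, device=device)
y = model(x)

# XLA tensors are lazy: mark_step() forces compilation and execution.
xm.mark_step()
print(y.cpu())
```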

How we built it

We are building it in Jupyter notebooks using nbdev, a literate-programming system developed by Jeremy and Sylvain (the creators of the fastai library).

Moreover, nbdev automatically generates both the Python package and its documentation from the source Jupyter notebooks.

This lets us build out the documentation alongside the system itself, and keeping the documentation in sync with the code and the tests is much easier when the documentation is itself an executable Jupyter notebook.
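To give a flavor of the workflow, here's a rough sketch of an nbdev source notebook (using nbdev v1 conventions; the module and function names are just illustrative):

```python
# First cell of the notebook tells nbdev which module to generate:
# default_exp core

#export
def hello(name: str) -> str:
    "A toy exported function; nbdev copies it into the generated module."
    return f"Hello, {name}!"

# Cells without #export stay in the notebook only, doubling as
# documentation and tests:
assert hello("TPU") == "Hello, TPU!"
```

Running nbdev_build_lib then regenerates the package from the notebooks, and nbdev_build_docs regenerates the documentation site.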

Since we need a TPU-enabled environment to test the library, we also run and test the notebooks on Google Colab and Kaggle.
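On Colab, for example, setting up PyTorch/XLA at the time of writing looks roughly like this (the setup script and version pin change over time, so treat it as a sketch):

```python
# In a Colab notebook with the TPU runtime selected:
VERSION = "20200325"  # an example nightly build; pick a current one
!curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py
!python pytorch-xla-env-setup.py --version $VERSION
```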

Challenges we ran into

The biggest challenge we face is that the fastai library was built on the underlying assumption that it would run in either a GPU or a CPU environment.

I don't think the possibility of running it on a TPU was considered, which is understandable: the second version of the library (fastai v2, aka fastai2) was developed before, or around the same time as, PyTorch/XLA support for TPUs was announced.
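For example, typical device-selection logic in fastai (and most PyTorch code) falls back between CUDA and CPU, while a TPU device has to be requested explicitly from PyTorch/XLA. A sketch of the mismatch:

```python
import torch
import torch_xla.core.xla_model as xm

# What fastai effectively assumes: the device is CUDA or CPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# What a TPU requires instead; it is never picked up automatically.
device = xm.xla_device()
```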

Accomplishments that we're proud of

Our goal is to make using TPUs with the fastai library as seamless as possible: running existing fastai code on TPUs should require only the most minimal changes.
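As a hypothetical illustration of that goal (the import path here is a placeholder, not necessarily the final API), TPU-enabled code should look almost identical to ordinary fastai code:

```python
from fastai.vision.all import *
import fastai_xla_extensions.core  # hypothetical import that patches fastai for TPUs

# Everything below is unchanged fastai v2 code.
path = untar_data(URLs.MNIST_TINY)
dls = ImageDataLoaders.from_folder(path)
learn = cnn_learner(dls, resnet18, metrics=accuracy)
learn.fine_tune(1)
```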

Another thing we are proud of is that, for a small Python package developed by two newbies in their spare time, it comes complete with documentation, samples, and a pip-installable package.

Even the samples are Jupyter notebooks that can be run in one click on Google Colab.

If you want to try out our library, just click here.

What we learned

We learned a lot about the internals of the fastai library and gained some insight into the design decisions behind it. We also learned a bit about the PyTorch/XLA APIs and what it takes to run PyTorch on TPUs.
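As one concrete example of "what it takes": on a single TPU core, the optimizer step goes through xm.optimizer_step, which also flushes XLA's lazily built graph. A minimal training-step sketch:

```python
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()
model = torch.nn.Linear(10, 2).to(device)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.CrossEntropyLoss()

xb = torch.randn(8, 10, device=device)
yb = torch.randint(0, 2, (8,), device=device)

opt.zero_grad()
loss = loss_fn(model(xb), yb)
loss.backward()
# On a single core, barrier=True makes this both step the optimizer
# and execute the pending XLA graph (in place of plain opt.step()).
xm.optimizer_step(opt, barrier=True)
```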

What's next for fastai xla extension library

We are currently focused on running fastai on a single TPU core. Once that is running well, we'll shift our focus to running fastai across multiple TPU cores.
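For reference, multi-core training in PyTorch/XLA runs one process per TPU core via xmp.spawn; here's a rough sketch of the pattern fastai's training loop would need to fit into (the body of _mp_fn is illustrative):

```python
import torch_xla.core.xla_model as xm
import torch_xla.distributed.xla_multiprocessing as xmp

def _mp_fn(index):
    # Each spawned process sees its own TPU core as its XLA device.
    device = xm.xla_device()
    # ... build the DataLoaders and model per process, then train ...
    xm.master_print(f'process {index} running on {device}')

if __name__ == '__main__':
    xmp.spawn(_mp_fn, args=(), nprocs=8, start_method='fork')
```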

Our eventual goal is to enable fastai to train large models, such as the HuggingFace Transformer models, with transfer learning on TPUs.
