Tiramisu is a compiler for dense and sparse deep learning workloads, created by a team at MIT CSAIL. Because it is built on the polyhedral model, it can apply a wide range of code optimizations, and it has proven effective at optimizing deep learning operators, including RNNs and sparse neural networks.
This project integrates Tiramisu as a backend for PyTorch, so that PyTorch users can benefit from the speedups that Tiramisu's optimizations provide.
Using TorchScript, we extract the PyTorch IR and convert it to Tiramisu IR, applying operator fusion along the way, which turned out to be the most effective optimization.
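To give an intuition for why operator fusion pays off, here is a toy sketch in plain Python (it does not use Tiramisu's or PyTorch's actual IR; the function names and the `relu(x * 2 + 1)` pipeline are illustrative assumptions): fusing a chain of element-wise operators into one loop removes intermediate buffers and extra passes over the data.

```python
# Toy illustration of operator fusion (not Tiramisu's real IR or API).
# The pipeline relu(x * 2 + 1) is an arbitrary example chain.

def unfused(xs):
    # Each operator runs as its own pass and materializes a temporary list.
    scaled = [x * 2 for x in xs]          # intermediate buffer 1
    shifted = [s + 1 for s in scaled]     # intermediate buffer 2
    return [max(0.0, s) for s in shifted]

def fused(xs):
    # The three element-wise operators are fused into a single loop body:
    # one traversal of the input, no intermediate buffers.
    return [max(0.0, x * 2 + 1) for x in xs]
```

Both versions compute the same result; the fused one touches memory once per element, which is the kind of win a compiler like Tiramisu can obtain automatically.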
What's next for pytorch_tiramisu
--> Implementation and optimization of more deep learning operators
--> Complete support for sparse neural networks
